Adele Wallrich talks recruiting and delighting UX research participants

Airtable's research operations manager, Adele Wallrich

Recruiting participants for UX research is always a challenge, even at a well-established company like Airtable.

  • The company primarily interviews users to understand their product needs, usage, or experience. And that means finding niche subsets of users.

  • There’s not always easy data or proxies to identify users with specific behaviors or attributes.

  • And Airtable doesn’t have a panel. They go to the well each time to recruit. 

“That becomes really tough for recruitment, because then we have to do a lot of manual work where we’re talking to CSMs trying to gain contextual knowledge from people that manage the accounts,” said Adele Wallrich, research operations manager at Airtable.

A key risk is oversampling, and Adele found a clever proxy for mitigating that risk: the $600 tax threshold.

In this week’s episode of “What’s in it for them?,” Adele discusses:

  • Her strategy for delighting participants by delivering incentives – or as Airtable calls them, ‘Thank-you gifts’ – as quickly and seamlessly as possible

  • The automation she built to ensure research participants don’t hit the $600 tax threshold

  • How that threshold can be used to ensure user input includes a diverse swath of the customer base

These considerations keep customers happy throughout the UX research process, and help Airtable build product features that appeal to a wide variety of users. 

In this episode, Adele talks about an automation she built in Airtable that streamlines parts of the UX research participant recruitment and incentivization process. If you have an Airtable account, you can use it yourself here.

The following is an abridged version of our discussion.

What’s Airtable’s UX research team trying to learn?

Ian Floyd: Can you tell us a little bit about what kind of research Airtable is doing? Who are your participants? I assume it's your users, but maybe you're talking to the general public, too. So talk us through a little bit about the research that you're doing and what that looks like. 

Adele Wallrich: It's pretty focused on our current users. And then, sometimes, we do internal interviews within Airtable as a starting point to get a better idea – or, more clarity on certain topics.

But it does tend to be geared towards people already on our platform, because we're focused on their experience of the product. And we really want – I mean, again, our mission is to democratize software creation – so, we really need to understand what's happening from the user's point of view. 

And our users have so many different types of experiences and backgrounds, so it's really important to get their perspective – whether they’re an engineer who knows how to code, and they're managing a bunch of Jira integrations or things like that within Airtable – or if they're a non-technical person and they just, of their own volition, decide to build out some type of workflow with automations for their team. 

So yes, it's very geared towards current users to understand how we can continue making things lower-lift for them, and not make Airtable too daunting of a product to jump into.

Ian Floyd: Cool. And are y'all doing surveys, interviews? 

Adele Wallrich: Oh, yes, we are. Sometimes we do surveys, not as much. We used to have a quant researcher, but they were unfortunately laid off last December. So we are very focused on qual interviews most of the time, and we're moving more into doing unmoderated things in UserZoom to get prototypes out there and be able to have these recordings to see people actually interacting with prototypes and things like that.

So, typically moderated remote sessions. But we're moving more towards doing unmoderated tests with people as well.

The reality of research ops

Kate Monica: What would you say is kind of the most arduous part of the research ops function? 

Adele Wallrich: Recruitment is a constant challenge for anybody at any company, and for us, it can be particularly challenging. Because there are not always easy data proxies – or just data in general – around users where we can identify certain attributes about them. 

Sometimes it can be really simple, where we want to talk to people who have joined in the last six months that have used automations a certain number of times. And we can look that up in Mode, or in our internal databases.

Then sometimes, we're looking for things that are a bit more nuanced. We're looking for an admin who isn't actually in the tool a lot, but just provisions accounts, and maybe they've run into issues with billing or things like that. And so, there's not always a very clear link between certain behaviors that we want to dig into, and then data on the backend that we can query on.

So that becomes really tough for recruitment, because then we have to do a lot of manual work where we're talking to [customer success managers] and trying to gain contextual knowledge from people that manage the accounts. 

And that is obviously: 1.) hard to access, because it lives only in people's brains. And then 2.) it's hard to operationalize, because it's only in people's brains.

So there's not a source of truth where we can say, oh, talk to the Netflix [customer success manager], and see if they know any of these people. You have to also translate all the research jargon into something that an account manager can understand. 

So there's just a lot of very heavy-lift, individualized work where a researcher and I have to partner internally to identify the right participants.

Recruiting & incentivizing the right research participants

Ian Floyd: Do you have a panel of people who have raised their hands in terms of your client base – your participants – or are you usually going to the well each time to get new participants?

Adele Wallrich: We go to the well each time. We've thought about having a panel, but that's something that hasn't been a great fit for our needs at the moment. 

It's something we're looking into now that our team is a bit smaller, but at the moment our entire database of users is accessible to us, which is nice because that's built into the terms and conditions of our product.

Ian Floyd: So can you tell us how you go about incentivizing those people, and how you landed on that strategy or approach? 

Adele Wallrich: It’s really based around one of our company values, which is customer-first. And part of that is, for one: we actually internally refer to incentives as ‘Thank-you gifts’, because we don't even want to necessarily think of it as compensation or an incentive.

It's sort of an exchange. We really want to make it about gratitude and fostering that relationship with users. And we also just want to make sure they're having a good time with us, whether it's actually using our product or engaging with us. And research is a pretty heavy-lift engagement, so we just want to show them gratitude for volunteering their time, because they really don't have to.

So it means a lot that they take time out of their day to be able to give such feedback to us. It's not just, “do you like this?” Like, we're really trying to get at what's happening in their brains and obviously, it's not always straightforward, so you have to ask a lot of follow-ups. And these are very intensive conversations.

It’s all about showing our gratitude and appreciation for people sharing their time, but also using our product at all. It's all around customer delight, which has also been a cornerstone of Airtable for a while. 

We always want our product to inspire delight. So similarly, we want our research and every engagement – every touchpoint with them – to inspire delight.

Delighting customers across every touchpoint

Kate Monica: How do you structure the incentivization process such that it does inspire delight, and it's not a hassle for you and the recipient? 

Adele Wallrich: That’s a great question, because back in our early days, it was a hassle internally.

We didn't have a Tremendous or a Rybbon type of tool. We had to manually send Amazon gift cards, which was really tough. It was probably mostly fine for the participants, because they were getting an Amazon card, but sometimes it would get finicky if they had support questions.

So that was always a challenge. Now it's all about turnaround, in my opinion. Just getting it out to them as quickly as possible. 

Ian Floyd: How much do you offer for interviews? How do you toggle that up or down, depending on if they're more senior or harder to find? 

And then, since you're going to the well each time, I am sure you run into the $600 tax threshold, right? And so how do you go about actually tracking that?

Adele Wallrich: Two great questions that were the center of a lot of my work when I first stepped into this role. 

So, for the first one: what’s the actual structuring of how we decide amounts? We have a general structure where most of our qual interviews are 60 minutes, and so we pay a hundred dollars.

We have a ‘thank-you’ gift that is worth a hundred dollars for 60 minutes, and we basically anchor on that. 

So, if an interview is 30 minutes – which they sometimes are – we'll do 50 bucks. And then 90 minutes will be $125. And that basically covers the bulk of what we do. 
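For reference, the anchoring described above is simple enough to write down. The sketch below is purely illustrative – the function and lookup table are invented names – and the only figures taken from the conversation are $50 for 30 minutes, $100 for 60, and $125 for 90.

```typescript
// Illustrative sketch of the session-length anchoring Adele describes.
// The names here are hypothetical; only the dollar amounts come from the interview.
const GIFT_BY_SESSION_MINUTES: Record<number, number> = {
  30: 50,   // half-length interview
  60: 100,  // the anchor: $100 for a 60-minute qual interview
  90: 125,  // longer session
};

function thankYouGiftUSD(sessionMinutes: number): number {
  const amount = GIFT_BY_SESSION_MINUTES[sessionMinutes];
  if (amount === undefined) {
    // Surveys and unusual session lengths are priced bespoke rather than from this table.
    throw new Error(`No standard thank-you gift for a ${sessionMinutes}-minute session`);
  }
  return amount;
}
```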

For surveys, it's a little bit different. Now that we don't have a quant team internally, we don't run as many, and the incentive is something we design bespoke to the survey. 

Sometimes we do sweepstakes, and sometimes we don't. We usually pay per response, but we keep sweepstakes in our back pocket. With surveys, it's very bespoke depending on the length of the survey and the nature of it and things like that. 

And then as far as tracking goes, that's why it's great to have tools like BHN or Tremendous. Because if you're just sending Amazon gift cards or something like that, the tracking is terrible in Amazon. And there's no easy way to search by name or by a recipient ID – or amount, even. 

Automating with Airtable

Ian Floyd: How do you track that? And can you talk a little bit about your tech stack that you use?

Adele Wallrich: So with the tech stack, it's basically just in Airtable, which is really fun for me, because that's something I had to create on my own. 

We also screen users in Airtable. So, we basically join the BHN Rewards data with screener data. Once people fill out that screener, it comes into an Airtable base. Each research study that we run has its own Airtable base that we set up at the beginning, and that's where we put in all the criteria – all the different filters.

So as screener responses come in, we can immediately have them go into certain views where we've filtered, for example: this company is this FTE size, this company is in this industry, things like that. 

Ian Floyd: Okay. So now that you have this automation, what is Airtable able to do? 

Adele Wallrich: So, the researcher can see ahead of time, for example: this person's only gotten a hundred bucks from us, and that was three months ago, so we can go ahead and invite them again.

Or: this person has gotten 400 bucks. Maybe we should avoid inviting them this time around. 

So, that is what we rely on at the moment, which is really nice because it's a bit earlier in the process. Usually the tricky part of all this is that you really want to check the amount before you even bother inviting the participant, because it's not a great customer experience to get invited and then have that rescinded.

Because actually sending the incentive is kind of a later part of the research process, it's really important to find a way at the very beginning to be cross-referencing that data from the jump.
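Adele's actual template lives in Airtable, but the decision rule she describes is easy to sketch outside of it. The snippet below is a hypothetical TypeScript illustration, not the template itself: the Payout and ScreenerResponse shapes, the field names, and the helper functions are all assumptions. The only elements taken from the conversation are the $600 threshold and the idea of checking each person's cumulative gifts before an invite goes out.

```typescript
// Hypothetical sketch: sum what each screener respondent has already received
// this calendar year (the $600 reporting threshold is annual), then keep only
// the people a new gift would NOT push to the threshold – before anyone is invited.

interface Payout {
  email: string;
  amountUSD: number;
  sentAt: Date;
}

interface ScreenerResponse {
  email: string;
  name: string;
}

const IRS_REPORTING_THRESHOLD_USD = 600;

function totalPaidThisYear(payouts: Payout[], email: string, now = new Date()): number {
  return payouts
    .filter(p => p.email === email && p.sentAt.getFullYear() === now.getFullYear())
    .reduce((sum, p) => sum + p.amountUSD, 0);
}

// Returns only the respondents who can safely receive the planned gift,
// so nobody gets invited and then has the invitation rescinded.
function safeToInvite(
  respondents: ScreenerResponse[],
  payouts: Payout[],
  plannedGiftUSD: number
): ScreenerResponse[] {
  return respondents.filter(
    r => totalPaidThisYear(payouts, r.email) + plannedGiftUSD < IRS_REPORTING_THRESHOLD_USD
  );
}
```

In Airtable itself, this kind of check would more likely be expressed with linked records, rollups, and filtered views than with a script, but the logic is the same: cross-reference each respondent's year-to-date total before the invite is sent.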

Ian Floyd: Yeah, it sounds like you kind of had to create a bunch of automations on your own so that you can track all that.

Adele Wallrich: Yes. It took a lot of iteration and phases to get to its final state. 

Avoiding the $600 tax threshold

Kate Monica: So you created this automation in Airtable. Is it something that would be available to other researchers who are concerned about avoiding the $600 threshold? 

Adele Wallrich: Yes, it is available.

It just hinges upon you also having your screener in Airtable. So, if you're screening in Qualtrics or some other tool, it might not work as well. I think there are probably ways you can get around it. But the most seamless way would be to copy the template I have. And of course, the other pre-req is that you need to have an Airtable account to log in and duplicate the automation.

So yes, you can technically duplicate the template I built and configure this language to your own internal settings, but use the same workflow that's already programmed into the automation. 

Ian Floyd: And quick plug: Tremendous does track W-9s for you. 

Adele Wallrich: Yes. That is a huge perk.

Kate Monica: Essentially, the reason that you're working so hard to avoid the $600 threshold is that it would place more burden on the recipient. They'd have to do some extra steps.

Adele Wallrich: Yes. So it's not the end of the world, it's just annoying that then it has to be reported on your taxes.

It's extra paperwork for us. It's extra paperwork for them. And then it's also just uncomfortable to reach out after the fact and be like, ‘hey, you participated in this thing with us. We need you to fill out this W-9.’ 

And I've seen this happen on recruiting teams and in other circumstances where they've had a really hard time just getting ahold of the participant.

It's kind of mandatory that they respond, but obviously you cannot force any human being to do things if they're not replying. So you just want to avoid that and remain compliant.

Why would you want to mess with the IRS? There's no point. 

Ian Floyd: How prevalent of an issue is this? What percentage of your participants are actually crossing that threshold, do you know? 

Adele Wallrich: It's honestly not that high. And the reason we're really focused on it is because we have a smaller pool of users than a Facebook or an Amazon might, for example. 

So the likelihood that we're inviting the same people is a bit higher. But luckily, as time goes on and we get more users, it kind of decreases, and our studies don't always go for the same segments of people.

So it's honestly not that big of an issue, but it's just something we always want to be cognizant of, because there's always concurrent research happening quarter after quarter, and things can add up really quickly if we're really focused on a particular group because of a company priority.

Like right now, there's a big push to talk to IT admins. That means a lot of people are going after a very small population. So, it's something we just have to be mindful of. 

But that's not always what's happening. We don't always target IT admins, so it's kind of seasonal as well.

Fostering a diverse pool of participants

Ian Floyd: Do you think it's an indicator of potential fatigue for the participant? Like, if they pass that $600 threshold, maybe you've been contacting them too much? 

Adele Wallrich: Yes. There are non-financial implications too: the fatigue, and also just over-indexing on a specific segment or person’s feedback.

Like you really want to make sure you're capturing a diverse and comprehensive set of thoughts from your users. So, when you do eventually make product changes, it's not benefiting only one group of users. 

We're trying to make the product as usable as possible for everybody across the board. So keeping track of the $600 limit is also a great check and balance for us to make sure we're recruiting from different types of people as well.

Tremendous automates W-9 collection, so if you do hit the $600 tax threshold, it's less of a pain. We make every part of sending research incentives a lot easier. See for yourself.
