The ultimate guide to screener surveys
User experience research can be a powerful tool to help you learn about your users and craft strategies to serve them better, but it’s most effective when you have the right kind of participants.
Most often, that means we’re looking for actual users or people who represent those we’re trying to learn about.
So how do you make sure you get the right participants? With help from well-crafted screener surveys.
What are screeners?
A screener is a particular type of survey that we use to help us identify who we do and do not want to include in our research efforts.
You might hear it called a screener survey, a screener, or just screening questions, but all of those things are interchangeable and refer to sets of questions you ask potential participants to see if they are the right fit for your study.
Most often, researchers will create screening questions and determine specific participant quotas as a part of the planning process, and then choose how to launch the screener depending on the recruitment method.
If you’ll be using a recruitment vendor, they’ll likely take the questions you write and screen in their own way.
If you’re using an online recruiting tool, you’ll likely need to set up the screening questions yourself during project setup; if you’re recruiting entirely on your own, you can build and distribute the screener with a traditional survey tool.
Why are screener surveys important?
Screeners are important because there are lots of recruiting options to help us find potential research participants, and you will almost always want very specific people included or excluded based on your research goals.
You may also want to include a particular mix of types of users, even if you don’t have strict quotas.
For instance, let’s say that you are working with a gaming company that offers single-purchase and subscription pricing tiers, and your research goal is to understand more about who is choosing each pricing tier and why so you can try to increase the number of subscription purchases. As you’re planning the best method, your team decides to run one-on-one interviews with people who have chosen each of the different tiers within the last month.
They want to include at least some participants who purchased a subscription after previously paying for a single use.
Even if you receive a list of contact information for customers who have opted in to provide feedback, you may not have all the information about their past purchases, so your screener will focus on identifying what sort of previous purchases folks have made and when.
You can also use your screener to identify and exclude folks who will not be good participants for your study.
Of course, you want participants to be eligible based on the criteria you set (i.e., has made a recent purchase), but there may also be things you are not interested in hearing about.
Thinking of the example above, maybe your team has determined there is a group of users who only make purchases when there are deep-discount flash sales, and they aren’t who you want to target.
You could decide that whenever anyone indicates they’ve made one of these purchases, they won’t be included in your study. Again, think back to your original research goals and think through who exactly you do and do not want to talk to.
Screeners can also help you find and weed out “serial participants” who try to make it into every potential study and aren’t truly qualified. More on that below.
Identifying your screener questions
As with everything in UX research, an effective screener starts with clear research goals and making decisions based on what would best help you answer your questions.
If you already have personas and you want to include representative users, you can use the characteristics of your personas as the basis for screening questions.
If not, you can still start to think through your target participants in the same way that you think about personas: consider what elements make them unique, what motivations drive their actions, and what behaviors delineate them from others.
You may also think of characteristics of people that you purposefully do not want to include. For instance, if you’re running a usability test on an update to an iOS mobile app, you may want to exclude folks who don’t have and regularly use iOS devices.
Remember that people may not always be consistent and you need to think of context, too. Let’s say you’re working on an airline website.
The way an administrative assistant looks for and books tickets for their staff might be very different than the way they book travel for their family vacations, so if that matters for your study, be sure to add questions about their context of use.
Think through all of the situations and characteristics that you need to consider and start a list of what must be true or not to be included.
Start by listing out all the potential criteria, and then try to narrow down to only the most crucial items that will identify your target audience.
Remember that demographics aren’t necessarily the things that identify our target audience in UX: things like age, race, and geography may not be relevant to your study.
However, it’s more likely you’ll want to include or exclude people based on their usage or experience with a product, knowledge or opinions about a topic area, past behaviors, or roles.
Only include demographic screening questions if they truly help identify who you want to talk to.
It’s also helpful to prioritize the list and look for dependencies so you can decide the order in which to ask the questions.
Because you’re asking potential participants to fill out a survey before they’re promised a role (and therefore compensation), aim to keep the list as brief as possible and eliminate unqualified participants quickly while still keeping a logical flow of questions.
Crafting screening questions
Once you’ve determined what criteria you’re looking to screen in and out, you can start translating your criteria into questions.
Generally, you should have one screening criterion per question and ensure that you offer precise, unambiguous answer options.
For instance, if you want to be sure that someone is a frequent user of a tool, ask them how often they use it and have answers that are specific time periods. Number of days per week, for example, rather than vague phrases like “often” or “usually.”
Most of your questions should also be closed-ended (or multiple choice), meaning that you’ll provide a list of answers for respondents to choose from.
This makes it faster for them to fill out and easier for you to review responses.
Just be sure to include responses like “other,” “not applicable,” “none of the above,” and “all of the above” so that you don’t force people to choose an incorrect answer and get false positive responses.
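If you end up reviewing responses programmatically, that qualification logic is straightforward to express. Here’s a minimal sketch; the question wording, answer buckets, and qualifying values are all invented for illustration:

```python
# Hypothetical closed-ended screening question: precise frequency buckets
# (specific time periods, not vague phrases like "often"), plus an escape
# option so respondents are never forced into an incorrect answer.
FREQUENCY_OPTIONS = [
    "5 or more days per week",
    "2-4 days per week",
    "1 day per week",
    "Less than 1 day per week",
    "I do not use this tool",
]

# For this made-up study, only frequent users qualify.
QUALIFYING_ANSWERS = {"5 or more days per week", "2-4 days per week"}

def screens_in(answer: str) -> bool:
    """Return True if the respondent's answer meets the frequency criterion."""
    return answer in QUALIFYING_ANSWERS

print(screens_in("2-4 days per week"))       # True
print(screens_in("I do not use this tool"))  # False
```

Because the buckets are mutually exclusive time periods, every respondent maps cleanly to exactly one answer, and qualification is a simple membership check.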
Aim for the questions to be easy to answer in terms of what a participant would realistically know or reasonably remember.
For instance, most of us probably remember what we ate for breakfast today and maybe even last week, but unless someone eats the exact same thing every single day, they aren’t going to be able to recall what their morning meal was two summers ago.
On the flip side, try not to make it too apparent who you’re targeting, so you can avoid fraudulent participants. It’s unfortunate, but especially in some self-service panels, you’ll get potential participants who try to guess who you’re looking for so they appear to be a good fit.
A good way around this is to provide a few “correct” answers and some false signals that will tip you off that a participant is likely not telling the truth.
For instance, if you need someone who has purchased a particular kind of work boot recently, you could frame the question like, “Which of the following have you purchased in the last month?” and include your target product as well as many similar and dissimilar ones so there isn’t a clear trend in what you’re looking for.
While it’s not impossible for someone to purchase multiple kinds of work boots, ski boots, and cowboy boots in a month, it’s probably a red flag.
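That red-flag check can be expressed as a simple rule if you score responses in code. A minimal sketch, where the product names and the two-distractor threshold are invented for illustration:

```python
# Hypothetical multi-select purchase question mixing the target product
# with similar and dissimilar distractor options.
TARGET = "steel-toe work boots"
DISTRACTORS = {"ski boots", "cowboy boots", "hiking boots", "rain boots"}

def evaluate(selected: set[str]) -> str:
    """Classify a respondent by which purchases they claim to have made."""
    if TARGET not in selected:
        return "screen out"  # didn't buy the product the study is about
    # Claiming several unrelated boot purchases in one month is implausible
    # and suggests someone guessing at the "right" answers.
    if len(selected & DISTRACTORS) >= 2:
        return "red flag"
    return "screen in"

print(evaluate({"steel-toe work boots"}))                            # screen in
print(evaluate({"steel-toe work boots", "ski boots", "cowboy boots"}))  # red flag
print(evaluate({"ski boots"}))                                       # screen out
```

Flagged respondents don’t have to be auto-rejected; you might simply review them manually before inviting anyone.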
If there is a particular nuance, or you really need to be sure of a particular criterion, consider including multiple variations of the same question to validate responses. Keep it brief, and try not to make it too obvious that you’re asking for similar information.
You may also want to include an open-ended “articulation” question that can either validate or provide some extra clarity to a particular question. These questions give you a sense of how participants will respond.
From the boot example above, perhaps you could ask a follow-up like “Describe the process of choosing the work boots you just purchased and tell us where you think you’ll be wearing them.”
If someone gives a lot of detail about what they were looking for, where they shopped, and mentions wearing them in a context that makes sense, like to their construction job, they’re likely a good candidate.
If their response is very brief or vague, like “I needed new boots so I bought some online,” they aren’t necessarily lying, but they may not be very descriptive, and you may want to prioritize other potential participants.
Additional Tips
In addition to the suggestions above, remember that screeners are just a special kind of survey, so most survey best practices still apply.
While there are many survey best practices, pay special attention to craft your screening questions in a neutral, non-leading way.
There should be no hint at what you’re looking to find and no flavor of trying to validate assumptions, so remove any language that could potentially bias your participants, even unintentionally.
You’ll also want to be careful to avoid any questions that ask people to predict their own future behavior. We humans are notoriously bad at this: we’ll always try, and we often end up giving false information.
Try to avoid too much branching in screener surveys. Of course, sometimes there are interdependent questions, but keep screeners as simple as possible: it’s often better to create multiple similar screeners than one complex screener with complicated branching.
Finally, remember that a screener is often the first impression a potential participant will have of you or your client.
Provide a warm introduction that sets context and invites participation without promising an incentive until chosen.
Clearly set expectations about what the research sessions will entail, and what next steps will be. Include details on when they’ll be notified, how long sessions last, where or how they’ll take place, and if users need to do anything to prepare.
You’ll never be able to anticipate all questions potential participants may have, but setting clear expectations helps ensure that your qualified participants will be prepared.
Distributing your screener
Once your questions are written, it’s time to launch and test.
As we said earlier, the method you use to launch the screener will depend on your recruiting method. Recruiting services will likely just need you to write the screening questions in text, and then will follow their own screening methods.
But if you’re recruiting on your own, you’ll need to code the questions into a tool. Tools like Respondent and User Interviews have built-in screener tools, but you can use any survey tool to code your screener.
I always recommend testing the screener before you do any potential participant outreach.
Send it to a colleague to be sure the technology is working smoothly, any branching is coded correctly, and the questions are clear. When in doubt, aim for simpler language and fewer questions.
If you’re using a DIY recruiting tool, you can then launch your study. The results should start trickling in.
If you’re fully recruiting on your own, you’ll then need to embed the survey URL into whatever communications channels you’ll use for outreach. I recommend an additional test of emails and social media posts with the link before pushing them live.
The length of time you’ll need to keep your screener open can vary widely, so check responses every few hours during the first few days after launch.
Review the responses to see who best matches your criteria, and start the invite process.
Reaching out to qualified respondents in a timely manner will make it more likely they’ll keep your invite top of mind and find time to participate.
Conclusion
Creating a strong screener is one of the core steps to ensuring you have high-quality participants, and high-quality participants have a big impact on the quality of your research results.
Remember to clearly define who you do and do not want to talk to based on your research goals, and then follow survey question best practices to get the best results.
Finally, remember that in order to entice people to participate in your studies, you should offer compensation for their time.
The simplest way to send research incentives is with Tremendous. Chat with our team to learn more or sign up and send your first incentive in minutes.