Our operations team has managed thousands of research projects. They’ve seen the full spectrum of recruits—from dazzlingly easy ones to hyper-specific recruits that take time and legwork to fill. Each project is a lesson learned—meaning we’ve got a thing or two to say about what works.
Here are seven of the most common screener survey mistakes we encounter, along with tried-and-true strategies and suggestions for getting it right.
1. Making your criteria too niche
One of the keys to writing a good screener survey is to know who your ideal participants are (and aren’t). You need to know who you’re looking for in order to weed out folks who don’t have the insights you’re looking for. The catch is that, while it’s good to have a distinct audience in mind for your research, making your qualifying criteria too specific can hinder your research.
Good luck finding a large pool of participants when your criteria are, say, “oceanographers between the ages of 21 and 25 who live in Indiana.” We encourage researchers to have flexibility around some of their criteria in case the audience is too niche to reach your goals.
2. Focusing on demographics over behaviors
Including too many demographic and geographic characteristics in your screener survey is a common mistake that can not only make your criteria too niche, but can also introduce bias and make your survey results less representative of the real world you’re trying to study.
Don’t include geographic questions in your screener unless you have a clear reason to target based on location (like doing an in-person study or, say, researching the recycling habits of people in Houston vs. Austin). Similarly, avoid targeting based on age, ethnicity, income, education level, etc. unless these characteristics are genuinely disqualifying factors for your study.
A good rule of thumb: Worry less about how people are categorized on a census and more about how they think, feel, and behave.
Screening for behaviors and psychographics lets you group people based on how they live, what they value, and how they relate to your product—all of which is probably much more relevant to your research than whether they graduated from a four-year college.
Of course, sometimes demographic data is relevant. For instance, if you want to test for accessibility with a mix of racial/gender identities, age ranges, and educational backgrounds, adding demographic criteria will allow you to target a diverse audience.
Rather than automatically accepting or rejecting based on these criteria, you can use demographics to filter for a variety of participants as a final step.
💡 Psst: Some recruiting services (like User Interviews) will automatically provide basic demographic, geographic, and technographic information for you. That means you don’t need to include it in your screener, and can focus on the interesting behavioral questions instead.
3. Making your criteria too broad
As you can see, it’s important not to make your screener criteria too narrow. But it’s equally important to make sure that your recruit has focus. Otherwise, you might lose track of what you’re trying to accomplish.
Running a study about iOS mobile apps? You could include questions that filter out Android users, people with outdated iPhones, or perhaps people who don’t have a smartphone at all.
Take the time up front to define the audience that best fits your research needs. This will prevent an overly broad recruit and will give you a more refined candidate pool that you won’t have to comb through later.
4. Asking leading questions
A leading question prompts or encourages someone to give a desired answer. Oftentimes this is by design (which is annoying, right?), but it can also happen by mistake if you’re not careful about how you phrase a question.
This can be especially problematic in research. Leading questions can skew research by indirectly nudging users to answer a certain way. When this happens in screener surveys, it can leave you with a pool of participants who aren’t actually a good fit for your study.
A question is likely leading if it includes a hint, excludes possible answers, or uses emotive language to influence responses.
📖 How to edit your screener survey on an active project
Examples of leading questions:
Example 1: This question assumes the respondent hated the season and uses emotive language that may influence the answer.
Leading: On a scale of 0 to “Still resentful”, how much did you absolutely despise the last season of Game of Thrones?
Not leading: On a scale from 1 to 10, where 1 is terrible and 10 is excellent, rate how you regard the last season of Game of Thrones.
Example 2: The “wasn’t it” prompts agreement.
Leading: Wonder Woman 1984 was godawful, wasn’t it?
Not leading: How did you feel about Wonder Woman 1984?
- I really enjoyed it and would recommend the movie to others.
- I liked it but thought some things could have been better.
- I feel neutral—neither liked nor disliked it.
- I did not enjoy the movie very much and would not recommend it to others.
- I hated the movie and would tell others to avoid watching it.
Example 3: The word “amazing” could lead to respondent bias.
Leading: Do you like our amazing Zoom integration? (😉)
Not leading: How would you rate our Zoom integration?
- Very good
- Good
- Neutral
- Poor
- Very poor
Example 4: The word “always” plus a binary option would almost invariably lead participants to say “no.”
Leading: Do you always eat ramen for dinner?
Not leading: How often do you eat ramen?
- Every day
- Frequently – A few times a week
- Often – Once a week/a few times a month
- Occasionally – Once a month
- Rarely – A few times a year
Another way to avoid leading questions is to provide a series of unrelated options as answers. For example, if you want to screen for users who have a high level of concern around internet privacy issues, rather than diving right into questions about internet privacy, you can ask candidates to select which topics they care about from a list of unrelated options—and screen in the people who choose privacy.
5. Only asking yes/no questions
Binary questions can only tell you so much. At User Interviews we offer a variety of question formats—we encourage you to use them!
Sure, you could list a bunch of questions with simple quick-click options, but why not add a little fun and nuance into the mix? Plus, adding some more interesting questions may provide you with insights you didn’t anticipate.
In one study, for example, a researcher was interested in speaking to people who proactively look for deals when shopping online for clothing. The original screener looked something like this:
This was such a missed opportunity to gain some valuable consumer insights! Our revisions, pictured below, would add some much-needed clarity to candidate responses.
A few minutes of thoughtful revisions can really improve data collection. And as you can see, screeners don’t need to be long to be insightful.
Another reason to reword your Y/N questions is that they can be leading, or give away too much about the intent of your study up front. When you give away the plot, you can devalue the screener process itself, which is designed to find participants who are a good fit for your study—not just folks who give the answers you want to hear. Let’s say your research is about country music. Rather than this:
… try something like this:
Related tip: Rather than calling this study “Country Music Lovers Only,” you might want to try something more generic, like “Music Study.” You’ll be able to pull in a wide audience and carefully sift out the true country music fans. In addition, you’ll gain more information about your participants’ interests if you find yourself wanting to dig into the data further.
6. Not using skip logic
Skip logic is a great way to customize your screener—so don’t skip it! You can use skip logic to customize which questions a participant sees, depending on their responses to a previous question. You can also use it to avoid leading people to certain answers, ensuring you’re getting honest and accurate responses.
Let’s say you are conducting a study on pet ownership. You might want to capture more information about the type of pets someone has, but you also don’t want to exclude people who don’t have pets.
In the example below, you’ll notice that the questions on page two are only relevant for respondents who have pets. Skip logic allows you to set up the respondent journey so that people without pets will essentially jump over these questions. It creates a more personalized experience and avoids confusion for the respondent, all while gathering the information you need.
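Under the hood, skip logic is just conditional routing based on earlier answers. Here’s a minimal sketch in Python of how the pet-ownership example might route respondents—the question key `has_pets` and the page numbers are hypothetical, not tied to any particular survey tool:

```python
def next_page(answers: dict) -> int:
    """Route a respondent to the next screener page based on prior answers.

    Hypothetical routing for the pet-ownership example: pet owners see
    the follow-up questions on page 2; everyone else skips to page 3.
    """
    if answers.get("has_pets") == "yes":
        return 2  # pet owners answer follow-up questions about their pets
    return 3      # respondents without pets jump straight to the final page

# Usage: two respondents take different paths through the same screener.
print(next_page({"has_pets": "yes"}))  # pet owner → page 2
print(next_page({"has_pets": "no"}))   # no pets → page 3
```

The key design point is that the non-qualifying answer doesn’t reject the respondent—it simply routes them around questions that don’t apply, so you can still include people without pets in your pool.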
7. Not getting to know your participants
The purpose of a screener is to filter your candidate pool—but that doesn’t mean you can’t also use these questions to get a sense of a candidate’s personality, and even have a little fun!
We encourage researchers to include at least one articulation question at the end of the screener. Perhaps you’re looking for creative thinkers to join an interactive in-person focus group; use an open-ended question to capture more insight into the way they think and approach problems. Here are a few of our favorites:
- What was the last TV show you enjoyed? Why should I watch it?
- If you could only eat one type of food for the rest of your life, what would it be?
- You’re a new addition to the crayon box. What color would you be and why?
- Tell me about a product you recently purchased and liked. Why should I buy it?
💡 Read more: How to Write Screener Surveys to Capture the Right Participants
Putting it all into practice
Feeling ready to put this new knowledge into practice? If you already use User Interviews for your research, we hope this article will help you craft the perfect screener survey for your next project.
📖 Pro tip: Research Hub customers can reuse their new and improved screener across multiple projects. Learn more about how to use that feature here.
Note: This article was originally published in 2018 by Melanie Albert, who helped manage thousands of successful research projects during her time as VP of Operations at User Interviews. It has been updated in 2021 with fresh content and insights.