Table of Contents
- What is usability testing?
- Choosing between moderated vs. unmoderated usability tests
- Designing a usability test
- Additional considerations for unmoderated usability tests:
- Additional considerations for moderated usability tests:
- Synthesizing Usability Test Results
That new app/car/hiking boot/ticket booking website you built looks slick.
But can people actually use it?
Are the most mission-critical features obvious to the end user? Can drivers figure out how to increase windshield wiper speed without an instruction manual? Do the boots provide ankle support without chafing? Do people click in the right places to complete a task?
These are questions that usability testing can help you answer. Usability testing is a method that UX researchers and designers use to assess the intuitiveness of a specific function, process step, or interface. It can help uncover usability issues like confusing navigation (information architecture), unclear language (UX copy), and visual design.
We’ve already written a comprehensive guide to qualitative usability testing, so we won’t rehash all the details here—although do check out the chapter in the UX Research Field Guide if you’re looking for even more depth.
Instead, we wanted to share some essential usability testing best practices. The recommendations below come straight from our Research Playbook, an internal resource that the UI Research Team has been building over the last 6(ish) months.
Need help sourcing an audience for usability testing? Have specific criteria that you need to screen for? User Interviews offers a complete platform for finding and managing participants. Recruit your first three participants free.
What is usability testing?
First things first: usability testing is not synonymous with user experience research. But it is an important part of it.
During a usability test, you ask a participant to perform a series of tasks, and then observe and record how well they are able to accomplish those tasks. The goal is to identify any frustrations and pain points in order to make recommendations or improvements that will increase people’s ability to use the product or service.
Choosing between moderated vs. unmoderated usability tests
Usability tests can be unmoderated (without a moderator) or moderated (live). The approach you take will often depend on the goals you are looking to achieve from your study.
Moderated usability testing
Moderated usability tests are a great way to get feedback from users about a design or prototype. During a moderated usability study, the moderator is interacting live with the user.
At UI, we use Zoom for moderated usability testing.
Pros:
- Gain a deeper understanding via follow-up or probing questions
- See exact user paths while following along live

Cons:
- Can take longer to gather insights
- Increased costs for incentives
- Typically smaller sample size, which means less statistical precision (less confidence to generalize findings)
Unmoderated usability testing
Unmoderated usability tests are a great way to get feedback from users about a design or prototype quickly. In an unmoderated test, the participant completes tasks on their own, without the presence of a moderator.
At UI, we use Maze for unmoderated usability testing.
Pros:
- Participants complete tasks on their own schedule
- Get feedback quickly (sometimes in less than 8 hours!)
- Can have a larger sample size and more statistical precision (more confidence to generalize findings)
- Typically less time-intensive to manage and analyze
- Less costly incentives

Cons:
- Unable to ask participants follow-up questions live
- Less qualitative insight into user challenges
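To make the sample-size tradeoff concrete, here's a minimal sketch (standard library only, with hypothetical numbers) of a 95% Wilson score interval for a task's success rate. The same observed 80% success rate is a much less precise estimate with 5 participants than with 50:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for an observed success rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# Same 80% observed success rate, two sample sizes:
print(wilson_interval(4, 5))    # small moderated study -> wide interval
print(wilson_interval(40, 50))  # larger unmoderated study -> tighter interval
```

The wider the interval, the less confidently you can generalize the finding beyond the people you tested.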
Designing a usability test
1. Clarify the decision you need to make (the job to be done)
Before launching into any research project, you should take time to reflect on the decision you are looking to make based on what you plan to learn. (You can read more about planning user research in the UX Research Field Guide.)
2. Identify the tasks you want to focus on (the scope)
It may sound obvious, but it’s worth repeating: To conduct usability testing, you must have something specific that you want to test. Whatever those elements may be, make sure you’re able to clearly articulate what you’re testing before you start designing prototypes and sourcing participants.
There is no magic number of tasks you should ask participants to perform during a usability test. But, as a general guideline, we recommend having no more than 3-4 tasks per study.
💡 TIP: Be aware of how you order your tasks. You don’t want to prime users toward paths of success for subsequent tasks based on what they’ve already done. If you’re unsure about the ordering, randomize the task order for each participant, as long as the tasks don’t build on one another.
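That randomization step can be sketched in a few lines of Python. The task names here are hypothetical; seeding by participant ID is an assumption that makes each participant's order reproducible for later auditing:

```python
import random

# Hypothetical task list for a usability study
tasks = ["create a project", "invite a teammate", "launch a study"]

def task_order(participant_id: int, seed: str = "study-42") -> list[str]:
    """Return a reproducibly shuffled task order for one participant.

    Seeding with the participant ID means the same participant always
    gets the same order, so sessions can be re-run or audited later.
    """
    rng = random.Random(f"{seed}-{participant_id}")
    order = tasks.copy()
    rng.shuffle(order)
    return order

for pid in range(3):
    print(pid, task_order(pid))
```

Most unmoderated testing tools offer task randomization as a built-in setting; a script like this is only needed if you're assigning orders manually.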
3. Define the metrics you’ll collect (the data)
During usability testing, you can use metrics to measure both actions (what people do) and attitudes (how people feel). It’s helpful to gather both during testing because oftentimes people’s behavior (performance) does not match their perception.
- Action measures: success/failure rates, time on task, etc.
- Attitudinal measures: self-reported satisfaction, ease, or comfort
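As a sketch of how these two kinds of metrics come together, here's a minimal Python example computing a success rate and time on task (action measures) alongside a mean ease rating (attitudinal measure). The session records are hypothetical:

```python
from statistics import mean

# Hypothetical per-participant results for one task:
# (completed?, seconds on task, self-reported ease on a 1-7 scale)
sessions = [
    (True, 48.0, 6),
    (True, 35.5, 7),
    (False, 90.0, 3),
    (True, 52.0, 5),
    (True, 41.0, 6),
]

success_rate = sum(1 for done, _, _ in sessions if done) / len(sessions)
avg_time = mean(t for _, t, _ in sessions)
avg_ease = mean(e for _, _, e in sessions)

print(f"Success rate: {success_rate:.0%}")   # action measure
print(f"Avg time on task: {avg_time:.1f}s")  # action measure
print(f"Avg ease (1-7): {avg_ease:.1f}")     # attitudinal measure
```

Comparing the two columns is where the behavior-versus-perception mismatch shows up: a high self-reported ease score paired with long times or failures is itself a finding.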
4. Prepare the test (the script and prototypes)
Your participants are going to need instructions—and something to test!
Unless you’re conducting usability testing on an existing, already-built product or feature, you’re going to need a prototype. Whether you use high-fidelity or low-fidelity prototypes depends on what part of the process you’re in and what questions you’re trying to answer. If you’re vetting a new idea or concept, low-fidelity prototypes can be enough to get you started. But for late-stage testing that results in viable user feedback, we almost always recommend high-fidelity prototypes.
As for your usability testing script, you want to make sure that each participant has some context about what they should be thinking about as they complete the task. But! You also don’t want to give away too much information—otherwise you risk invalidating the results.
An example of a helpful but not-too-revealing prompt might be: “Imagine you’re a new Research Hub user and you want to see what current projects your team is working on. How would you go about this?”
Sample usability questions to incorporate into your test
If you are looking to assess ease of use, you might ask:
- Question: How easy or difficult was it to complete this task?
- Response: A scale of 1-7 with each extreme labeled (1 = Very difficult, 7 = Very easy)
Or maybe you want to gauge whether the feedback you provide once a user completes a task is sufficient. You can assess a user’s confidence by asking:
- Question: How confident are you that you completed this task correctly?
- Response: A scale of 1-7 with each extreme labeled (1 = Not at all confident, 7 = Very confident)
Moderated studies typically involve some qualitative questions in addition to quantitative tests. A few examples of qualitative usability questions:
- What do you think the purpose of the page is?
- What motivated you to click [interaction]?
- How was the experience of using the product to complete this task?
- What could make that easier for you?
5. Recruit the right participants (the usability testers)
User Interviews is a robust solution for user research recruitment and participant management—but you probably already knew that!
What you may not know is that we connect researchers with participants for all kinds of studies—from qualitative (e.g. user interviews) to quantitative (e.g. surveys). And we play well with others—you can recruit from our panel of over 1 million participants, regardless of the usability testing tools you plan to use.
If you don’t already have a recruitment plan in place (or heck, even if you do), we highly recommend reading the chapters on research recruiting, screening, and incentives for a deep dive into this all-important step!
Read more: How Many Participants Do You Need for a Usability Test?
6. Finally, do a test drive!
Always test your test.
Before you conduct your test with participants, do a trial run or two with a volunteer from your team. This time is invaluable: it will help you get a sense of the flow and timing of the test, and it can reveal changes or adjustments that need to be made to the prototype design itself.
As for how long a usability test should take: unmoderated sessions typically last 15-20 minutes, while moderated sessions can run anywhere from 45 minutes to an hour.
Additional considerations for unmoderated usability tests:
Below are some tips to keep in mind for unmoderated tests:
- Ensure your tasks are simple and explained clearly: Remember, you are not there with the participant to clarify details. Shorter tasks that do not have a lot of steps are best. Use simple language to explain the task, without giving the answer away.
- Have clear success criteria: Given you won’t be able to watch the participant succeed or fail in a task, you’ll want to pick tasks that have a clear success state.
- Clearly explain any device or browser requirements: Some unmoderated testing tools, prototype platforms, etc. will require that people use a certain browser or device. Additionally, if you are capturing screen recordings, website clicks, or tracking URLs, the testing tool may require a browser extension to be downloaded. Ensure this information is clearly detailed in your project description, screener (if applicable), and opening message of your study.
- Keep it brief: In unmoderated studies, it’s especially important to be mindful of time—in our experience, participants tend to drop off after the 15-minute mark.
Additional considerations for moderated usability tests:
Running a usability test live with a participant changes the dynamics of the session. Here are some tips to keep in mind during moderation:
- Avoid explaining your design to participants: You may have a desire to explain the design to the participant. Avoid doing this as it will introduce bias to your results.
- Limit discussion and going off script: Unlike generative interviews, usability tests are less fluid and aim to walk participants through a set of tasks to complete. Follow the same protocol for each user so you can easily compare error rates across all participants. (The exception is if you notice a major flaw in your design during a session, in which case you can modify or update the design before showing it to additional participants.)
- Boomerang your question back to the participant: Participants will often ask you questions about how a prototype or design is intended to work. In this case, the facilitator should boomerang the question back to the participant.
- Participant: Do I have to click this to launch the project?
- Moderator: What do you think? OR What would you do in this situation?
Synthesizing Usability Test Results
Since the goal of usability tests is to identify issues or problems that participants encounter, you’ll often want to synthesize your insights in a way that allows you to identify the severity of the issue and how frequently it occurs.
Common severity levels might be:
- Critical: High impact problems that prevent a user from completing a task
- Moderate: Issue causes some difficulty but doesn’t prevent the user from completing the task
- Low: Minor problem that doesn’t impact the user’s ability to complete the task
From there you’ll prioritize the higher severity issues to make critical improvements to your designs. And of course, if the changes are significant, you’ll want to conduct further usability tests, rinse, and repeat until you feel confident that your design is intuitive to your intended users.
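One simple way to operationalize that prioritization is to rank findings by severity first and frequency second. Here's a minimal sketch; the issue names, severity labels (matching the levels above), and counts are all hypothetical:

```python
# Map the severity levels above to a sortable rank
SEVERITY_RANK = {"critical": 3, "moderate": 2, "low": 1}

# Hypothetical findings: each issue tagged with a severity level
# and the number of participants who ran into it
issues = [
    {"issue": "Launch button hidden below the fold", "severity": "critical", "hits": 4},
    {"issue": "Ambiguous 'Archive' label", "severity": "moderate", "hits": 5},
    {"issue": "Logo link not obviously clickable", "severity": "low", "hits": 2},
]
n_participants = 6

# Sort by severity, then by how often the issue occurred, so the most
# damaging and most common problems rise to the top of the backlog.
prioritized = sorted(
    issues,
    key=lambda i: (SEVERITY_RANK[i["severity"]], i["hits"]),
    reverse=True,
)

for i in prioritized:
    freq = i["hits"] / n_participants
    print(f'{i["severity"]:<8} {freq:>4.0%}  {i["issue"]}')
```

Whether severity should always trump frequency is a judgment call—a moderate issue that hits every participant may deserve attention before a critical one that hit a single edge case—so treat the ranking as a starting point for discussion, not a verdict.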
Coming soon: A usability findings template that you can use as a starting point for sharing your results with others.
User Interviews is the fastest and easiest way to recruit and manage participants for research. Get insights from any niche within our pool of over 1.5 million participants with Recruit, or build and manage your own panel with Research Hub. Visit the pricing page to get started.
This article was written by Morgan Mullen and Lily Hanes.