In the fast-paced world of product design, usability testing often takes a backseat to flashier milestones like feature launches or pixel-perfect mockups.
But behind every successful digital experience is a well-run usability test—quietly informing better decisions, guiding design direction, and preventing costly missteps. In Part 2 of our series, we dive into the different types of usability testing and how to thoughtfully plan and execute sessions that yield meaningful results.
The first step in any usability testing journey is choosing the right type of test—and that decision depends largely on your goals, timeline, and available resources.
Moderated Testing—either in-person or remote—is one of the most in-depth approaches. In this format, a facilitator guides participants through tasks, observes their behaviour in real time, and asks follow-up questions. This method offers rich, contextual feedback and allows for immediate clarification. It’s especially useful when you’re exploring complex workflows or trying to understand the reasoning behind user behaviour.
Unmoderated Remote Testing, on the other hand, is faster and more scalable. Participants complete tasks on their own, without a facilitator present. While this method may lack the depth of a conversation, it’s ideal for testing with a larger group and gathering data quickly, especially when time or budget is limited.
For those seeking fast, informal insights, Guerrilla Testing offers a scrappy alternative. Typically conducted in public spaces or real-world environments, this approach is best suited for early-stage concepts. It’s great for gut checks and directional feedback before investing in more refined design iterations.
Then there’s A/B or Preference Testing, which—while not a traditional usability method—is still valuable. These tests are used to compare design variations and validate specific hypotheses, particularly after usability issues have been addressed. They help teams decide between two or more options based on real user preferences.
Once you’ve selected the appropriate testing type, it’s time to plan your session strategically. Begin by clarifying your goals and deciding what exactly you want to test. Focus on specific user flows or features that tie directly to your design objectives.
Whether you’re testing a new checkout process or onboarding flow, narrowing the scope ensures deeper insights.
Writing realistic, task-based scenarios is essential. Instead of instructing users to “click the cancel button,” present a goal like “find a way to cancel your booking.” This encourages natural behaviour and surfaces usability issues that scripted actions might miss.
Equally critical is choosing the right participants. Your insights are only as good as your sample. Make sure the people testing your product reflect your actual users—consider their demographics, goals, tech habits, and familiarity with your product type. Testing with the wrong group can lead you astray, resulting in data that distorts your product’s direction.
Selecting the right tools can also make or break the session. For live moderated tests, platforms like Zoom, Google Meet, or Lookback allow for real-time observation and interaction. If you’re opting for unmoderated testing, tools like Maze, UsabilityHub, or PlaybookUX help streamline the process and offer robust analysis features for scaling insights.
Through it all, the golden rule remains: stay curious. Some of the most impactful discoveries happen when users do something unexpected—miss a button you thought was obvious, struggle with a task you assumed was intuitive, or completely skip a step you thought was essential. These surprises aren’t setbacks; they’re opportunities. Embrace them.
In usability testing, the goal isn’t perfection—it’s progress. And the more open you are to the unexpected, the closer you get to creating a product that truly works for your users.
Usability testing is a cornerstone of effective product design, but its value hinges on having the right foundation: a solid test script. More than just a checklist, a usability test script ensures consistency across sessions, minimizes bias, and keeps the research aligned with actual goals. A good script helps testers stay neutral, participants stay focused, and insights stay actionable.
Every solid usability script begins with a structured flow. First comes the Introduction and Consent phase, where participants are welcomed, the session’s purpose is outlined, and they’re reassured that it’s the product—not them—being tested.
Consent to record the session is usually obtained here. A typical prompt might sound like, “Today, I’m going to ask you to perform a few tasks using a prototype. This isn’t a test of your ability—there are no right or wrong answers. Please feel free to speak your thoughts aloud as you go.”
Following this is a set of Contextual Warm-Up Questions. These are designed to get participants talking and to give facilitators insights into their background and habits.
A common opener might be, “Can you tell me a bit about how you usually manage tasks?” or “How do you typically book appointments?”
When it comes to the actual Task Instructions, clarity and realism are key. Instructions should be goal-oriented rather than prescriptive. Instead of saying, “Click the profile icon, then hit edit,” a better approach would be, “You’d like to update your contact details. How would you do that?” This allows the participant to navigate naturally, revealing usability issues that might otherwise remain hidden.
Follow-Up Questions help dig deeper into the participant’s experience. After each task or at the end of the session, open-ended queries like “What did you expect to happen?” or “Was anything confusing?” provide critical context to observed behaviours. The session wraps with a quick Thank You, a chance to gather any last thoughts, and clarification of any ambiguous interactions noted during the test.
Of course, even the most perfectly written script will fall short if it’s tested on the wrong people. Finding the Right Participants is just as crucial. The closer participants match your target users—whether that means age, experience level, goals, or device usage—the more accurate your results will be. Diversity matters too; a range of perspectives can reveal blind spots.
Pre-screening questions help filter out mismatches, and when in doubt, start small. The Nielsen Norman Group’s research suggests that testing with just five users can uncover roughly 85% of usability issues. Even three to five sessions with representative users can surface critical design flaws.
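That guideline rests on a simple probability model. Here is a minimal sketch, assuming the roughly 31% per-user discovery rate reported in Nielsen and Landauer’s research, showing how quickly coverage climbs with each additional participant:

```python
# A back-of-the-envelope sketch of the model behind the "five users find
# ~85% of problems" guideline. It assumes each usability problem has roughly
# a 31% chance (L = 0.31) of surfacing with any single representative user,
# the average reported in Nielsen and Landauer's research.
L = 0.31  # assumed per-user discovery rate for a given problem

for n in range(1, 9):
    found = 1 - (1 - L) ** n  # expected share of problems uncovered by n users
    print(f"{n} participant(s) -> ~{found:.0%} of problems found")
```

Running it shows sharply diminishing returns after about five participants, which is why several small rounds of testing tend to teach you more than one large study.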
There’s no shortage of tools to support testing, whether you’re scrappy or well-funded. For Live Moderated Testing, platforms like Zoom, Google Meet, and Lookback are popular. Unmoderated Remote Testing tools include Maze, Useberry, and PlaybookUX. For Prototyping, Figma, InVision, and Marvel are go-tos, while tools like Respondent.io and User Interviews help with Participant Recruiting. For Note-Taking and Analysis, many teams rely on Notion, Dovetail, or Miro. The right tool depends on your workflow and how deep your analysis needs to go.
When you’re actually Running the Test, there are a few key best practices. Let participants speak freely—encourage them to “think aloud” throughout the session. Resist the urge to guide or correct them, even if they seem lost. Instead, observe where they struggle. Watch their body language; hesitation, confusion, or frustration often speaks louder than words. Maintain a calm, neutral tone, and while it’s important to stick to your script, allow room for exploration if a participant takes an unexpected path.
What you ask matters just as much. Stick to questions like “What were you expecting to see?” or “What would you do next?” Avoid vague or leading questions such as “Did you like it?” or “Was that confusing?” which risk biasing responses. Instead, focus on understanding users’ expectations and reactions through what they did and said—not what they speculate they might do.
Accurate note-taking is essential for turning observations into action. Great notes are specific and contextual. Record what happened (e.g., “User hovered over button but didn’t click it”), what was said (“I wasn’t sure this was clickable”), and what you inferred (“Possible issue with visual hierarchy”). Some teams divide roles between a facilitator and a dedicated note-taker to ensure nothing is missed. A simple spreadsheet or template—organized by tasks, timestamps, user quotes, and priority levels—can streamline the analysis process.
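To make that concrete, here is one hypothetical way such a template might look in practice — the column names, file name, and example row are illustrative, not a standard:

```python
import csv

# A hypothetical note-taking template: one row per observation, organized by
# task, timestamp, what happened, what was said, what you inferred, and a
# rough priority. Column names and the example row are illustrative only.
columns = ["task", "timestamp", "observation", "quote", "inference", "priority"]

with open("usability_notes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerow([
        "Cancel a booking",
        "00:04:32",
        "User hovered over the button but didn't click it",
        "I wasn't sure this was clickable",
        "Possible issue with visual hierarchy",
        "high",
    ])
```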
Once testing is complete, it’s time to synthesize the findings. Look for common patterns and recurring pain points.
Were users confused by navigation? Did multiple people hesitate at the same step? Group similar insights under broader themes—like “CTA visibility” or “form label confusion”—and prioritize based on severity and frequency.
Pairing usability feedback with behavioural analytics or support data can strengthen the case for specific design changes.
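If you are working through a larger pile of notes, even a rough script can help with the tallying. The sketch below groups observations under broader themes and ranks them by a simple severity-times-frequency score — the theme names, the 1–3 severity scale, and the scoring rule are assumptions for illustration, not a fixed method:

```python
from collections import defaultdict

# A rough sketch of synthesis: group observations under broader themes and
# rank them by a simple severity-times-frequency score. Theme names, the
# 1-3 severity scale, and the scoring rule are illustrative assumptions.
observations = [
    {"theme": "CTA visibility", "severity": 3},
    {"theme": "form label confusion", "severity": 2},
    {"theme": "CTA visibility", "severity": 2},
    {"theme": "CTA visibility", "severity": 3},
]

themes = defaultdict(lambda: {"frequency": 0, "max_severity": 0})
for obs in observations:
    theme = themes[obs["theme"]]
    theme["frequency"] += 1
    theme["max_severity"] = max(theme["max_severity"], obs["severity"])

# Highest score = most severe and most frequently observed issues first.
ranked = sorted(
    themes.items(),
    key=lambda item: item[1]["max_severity"] * item[1]["frequency"],
    reverse=True,
)
for name, stats in ranked:
    print(f"{name}: seen {stats['frequency']}x, max severity {stats['max_severity']}")
```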
Ultimately, Usability Testing Should Be a Habit, Not a One-Off. It’s not just another item on a project checklist—it’s a design mindset.
The more frequently you test, the more intuitive your understanding of your users becomes. Whether you’re a solo designer, part of a lean startup, or embedded in a larger product team, usability testing belongs in your workflow.
Start small, test often, and build feedback loops into your design culture. Over time, your product will improve—and so will your users’ experience.
Theresa Okonofua is a Product Designer focused on creating inclusive, accessible digital products. She combines deep user research with thoughtful design to craft solutions for complex, often overlooked user needs.