CHAPTER 50 Testing Usability
Usability testing is typically the most mature and straightforward form of discovery testing, and it has existed for many years. The tools are better and teams do much more of this now than they used to, and this is not rocket science. The main difference today is that we do usability testing in discovery—using prototypes, before we build the product—and not at the end, where it's really too late to correct the issues without significant waste or worse.
If your company is large enough to have its own user research group, by all means secure as much of their time for your team as you absolutely can. Even if you can't get much of their time, these people are often terrific resources, and if you can make a friend in this group, it can be a huge help to you.
If your organization has funds earmarked for outside services, you may be able to use one of many user research firms to conduct the testing for you. But at the price that most firms charge, chances are that you won't be able to afford nearly as much of this type of testing as your product will need. If you're like most companies, you have few resources available, and even less money. But you can't let that stop you.
So, I'll show you how to do this testing yourself.
No, you won't be as proficient as a trained user researcher—at least at first—and it'll take you a few sessions to get the hang of it, but, in most cases, you'll find that you can still identify the serious issues and friction points with your product, which is what's important.
There are several excellent books that describe how to conduct informal usability testing, so I won't try to recreate those here. Instead, I'll just emphasize the key points.
Recruiting Users to Test
You'll need to round up some test subjects. If you're using a user research group, they'll likely recruit and schedule the users for you, which is a huge help, but if you're on your own, you've got several options:
- If you've established the customer‐discovery program I described earlier, you are probably all set—at least if you're building a product for businesses. If you're working on a consumer product, you'll want to supplement that group.
- You can advertise for test subjects on Craigslist, or you can set up an SEM campaign using Google AdWords to recruit users (which is especially good if you are looking for users that are in the moment of trying to use a product like yours).
- If you have a list of e‐mail addresses of your users, you can do a selection from there. Your product marketing manager often can help you narrow down the list.
- You can solicit volunteers on your company website—lots of major companies do this now. Remember that you'll still call and screen the volunteers to make sure the people you select are in your target market.
- You can always go to where your users congregate. Trade shows for business software, shopping centers for e‐commerce, sports bars for fantasy sports—you get the idea. If your product is addressing a real need, you usually won't have trouble getting people to give you an hour. Bring some thank‐you gifts.
- If you're asking users to come to your location, you will likely need to compensate them for their time. We often will arrange to meet the test subject at a mutually convenient location, such as a Starbucks. This practice is so common it's usually referred to as Starbucks testing.
Preparing the Test
- We usually do usability testing with a high‐fidelity user prototype. You can get some useful usability feedback with a low‐ or medium‐fidelity user prototype, but for the value testing that typically follows usability testing, we need the product to be more realistic (more on why later).
- Most of the time, when we do a usability and/or value test, it's with the product manager, the product designer, and one of the engineers from the team (from those who like to attend these sessions). I like to rotate among the engineers. As I mentioned earlier, the magic often happens when an engineer is present, so I try to encourage that whenever possible. If you have a user researcher helping with the actual testing, they will typically administer the test, but the product manager and designer absolutely must be there for each and every test.
- You will need to define in advance the set of tasks that you want to test. Usually, these are fairly obvious. If, for example, you're building an alarm clock app for a mobile device, your users will need to do things like set an alarm, find and hit the snooze button, and so on. There may also be more obscure tasks, but concentrate on the primary tasks—the ones that users will do most of the time.
- Some people still believe that the product manager and the product designer are too close to the product to do this type of testing objectively, and they may either get their feelings hurt or only hear what they want to hear. We get past this obstacle in two ways. First, we train the product managers and designers on how to conduct themselves, and second, we make sure the test happens quickly—before they fall in love with their own ideas. Good product managers know they will get the product wrong initially and that nobody gets it right the first time. They know that learning from these tests is the fastest path to a successful product.
- You should have one person administer the usability test and another person take notes. It's helpful to have at least one other person to debrief with afterward to make sure you both saw the same things and came to the same conclusions.
- Formal testing labs will typically have setups with two‐way mirrors or closed‐circuit video monitors with cameras that capture both the screen and the user from the front. This is fine if you have it, but I can't count how many prototypes I've tested at a tiny table at Starbucks—just big enough for three or four chairs around the table. In fact, in many ways, this is preferable to the testing lab because the user feels a lot less like a lab rat.
- The other environment that works really well is your customer's office. It can be time-consuming to do, but even 30 minutes in their office can tell you a lot. They are masters of their domain and often very talkative. In addition, all the cues are there to remind them of how they might use the product. You can also learn from seeing what their office looks like. How big is their monitor? How fast is their computer and network connectivity? How do they communicate with their colleagues on their work tasks?
- There are tools for doing this type of testing remotely, and I encourage that, but they are primarily designed for usability testing and not for the value testing that will usually follow. So, I view the remote usability testing as a supplement rather than a replacement.
Testing Your Prototype
Now that you've got your prototype ready, lined up your test subjects, and prepared the tasks and questions, here are a set of tips and techniques for administering the actual test.
Before you jump into the tasks, take the opportunity to learn how they think about this problem today. If you remember the key questions from the customer interview technique, we want to learn whether the user or customer really has the problems we think they have, how they solve those problems today, and what it would take for them to switch.
- When you first start the actual usability test, make sure to tell your subject that this is just a prototype, it's a very early product idea, and it's not real. Explain that she won't be hurting your feelings by giving her candid feedback, good or bad. You're testing the ideas in the prototype, you're not testing her. She can't pass or fail—only the prototype can pass or fail.
- One more thing before you jump into your tasks: See if they can tell from the landing page of your prototype what it is that you do, and especially what might be valuable or appealing to them. Once they jump into tasks, you'll lose that first‐time visitor context, so don't waste the opportunity. You'll find that landing pages are incredibly important in bridging the gap between expectations and what the product does.
- When testing, you'll want to do everything you can to keep your users in use mode and out of critique mode. What matters is whether users can easily do the tasks they need to do. It really doesn't matter if the user thinks something on the page is ugly or should be moved or changed. Sometimes misguided testers will ask users questions like “What three things on the page would you change?” Unless that user happens to be a product designer, I'm not really interested in the answer. If users knew what they really wanted, software would be a lot easier to create. So, watch what they do more than what they say.
- During the testing, the main skill you have to learn is to keep quiet. When we see someone struggle, most of us have a natural urge to help the person out. You need to suppress that urge. It's your job to turn into a horrible conversationalist. Get comfortable with silence—it's your friend.
- There are three important cases you're looking for: (1) the user got through the task with no problem at all and no help; (2) the user struggled and moaned a bit, but he eventually got through it; or (3) he got so frustrated he gave up. Sometimes people will give up quickly, so you may need to encourage them to keep trying a bit longer. But, if he gets to the point that you believe he would truly leave the product and go to a competitor, then that's when you note that he truly gave up.
- In general, you'll want to avoid giving any help or leading the witness in any way. If you see the user scrolling the page up and down and clearly looking for something, it's okay to ask the user what specifically she's looking for, as that information is very valuable to you. Some people ask users to keep a running narration of what they're thinking, but I find this tends to put people in critique mode, as it's not a natural behavior.
- Act like a parrot. This helps in many ways. First, it helps avoid leading. If they're quiet and you really can't stand it because you're uncomfortable, tell them what they're doing: “I see that you're looking at the list on the right.” This will prompt them to tell you what they're trying to do, what they're looking for, or whatever it may be. If they ask a question, rather than giving a leading answer, you can play back the question to them. They ask, “Will clicking on this make a new entry?” and you ask in return, “You're wondering if clicking on this will make a new entry?” Usually, they will take it from there because they'll want to answer your question: “Yeah, I think it will.” Parroting also helps avoid leading value judgments. If you have the urge to say, “Great!” instead you can say, “You created a new entry.” Finally, parroting key points also helps your note taker because she has more time to write down important things.
- Fundamentally, you're trying to get an understanding of how your target users think about this problem and to identify places in your prototype where the model the software presents is inconsistent or incompatible with how the user is thinking about the problem. That's what it means to be counterintuitive. Fortunately, when you spot this, it is not usually hard to fix, and it can be a big win for your product.
- You will find that you can tell a great deal from body language and tone. It's painfully obvious when they don't like your ideas, and it's also clear when they genuinely do. They'll almost always ask for an e‐mail when the product is out if they like what they see. And, if they really like it, they'll try to get it early from you.
Summarizing the Learning
The point is to gain a deeper understanding of your users and customers and, of course, to identify the friction points in the prototype so you can fix them. It might be nomenclature, flow, visual design issues, or mental model issues, but as soon as you think you've identified an issue, just fix it in the prototype. There's no law that says you have to keep the test identical for all of your test subjects. That kind of thinking stems from misunderstanding the role this type of qualitative testing plays. We're not trying to prove anything here; we're just trying to learn quickly.
After each test subject, or after each set of tests, someone—usually either the product manager or the designer—writes up a short summary e‐mail of key learnings and sends it out to the product team. But forget big reports that take a long time to write, are seldom read, and are obsolete by the time they're delivered because the prototype has already progressed so far beyond what was used when the tests were done. They really aren't worth anyone's time.