Usability testing and its two evil brothers
Recently I’ve been thinking a lot about how we use usability testing in the design process, or more specifically, what our impetus is for doing it. When most people think of usability testing, they picture a user sitting in front of an application, talking through what they think of it…and in some ways that’s pretty much it. However, it’s the objective of the test that determines whether or not you will get useful and honest data from the user. What do I mean by useful? Information about the application that is validated or uncovered through a user’s feedback, such as running a usability test on a new UI to see how it stacks up against the old version. And by honest I mean credible data that comes unfettered from the user, rather than a conclusion they are led to. This article goes through what I believe are the three driving reasons for doing usability testing, and why one is good while the other two are less than admirable.
Testing to validate and fine-tune
This is what usability testing should be: testing a design either to validate that it is (or isn’t) on the right track, or to see what needs to be clarified or iterated on. This is where usability testing is most useful and helpful. Nearly all kinds of usability tests are looking for something along these lines: gaps between a design and a user’s expectations and goals, or confirmation that the design is on the right track.
Proving to non-believers
This is the first of the two evil brothers of usability testing. Proving to non-believers might be my least favorite thing to witness, and it resembles a test of a designer’s sanity more than anything else. What I’m talking about is using usability testing to prove internally that a designer knows what they are doing. The best example is using a usability test to decide between an industry standard and the deprecated standard that the application currently uses. The problem is, there’s nothing to test. It’s a waste of time to go through the motions of setting up a usability test only to find out that a round wheel rolls better than a square one. It’s fine to test multiple competing designs and compare them, as long as the designs have some level of equivalence, each representing its own pros and cons. But if you don’t trust your designer’s judgment to make baseline calls in their designs, you have much bigger problems than usability testing is going to solve.
Getting the answers you want to hear
The second evil brother is the worse of the two, because it means the usability test has been structured to lead the user toward a desired result. This usually occurs for one of two reasons: 1) nobody wants to spend the resources to solve the issues with the current application, or 2) the design team doesn’t want to iterate any further. Rather than admitting this, usability testing is used to prove that the current state of the application is “fine” and needs no further work.
But wait, how can you lead a user to say something? Nothing is keeping them from telling the truth, right?
Let’s say an application has a feature you want to “test”; call it feature A. Feature A is used for data entry and is touted as flexible and customizable. In reality, users have repeatedly complained that it is extremely difficult to configure and has a disjointed workflow.
- To start, prep the usability test so the difficult parts of feature A are partially configured ahead of time, leaving only the most obvious part for the user to do during the test.
- Secondly, write the script so it walks the user explicitly through the data entry process, making the flow seem obvious.
- Next, dismiss the handful of negative responses from overly honest users as outliers.
- And presto! The user feedback comes back, unsurprisingly, positive for feature A.
Now, I don’t believe that most usability tests are rooted in skepticism, blatantly dishonest, or designed to create inflated, self-serving data; the evil-brother examples above are fairly extreme. The important thing is making sure your usability tests aren’t structured to cut a few corners and give you a slight advantage. Keep directive scripting to a minimum by keeping the task description fairly general. Keep the objective of your tests clear, and do your best to replicate a state in the application that reflects a real-world scenario; the data will be more accurate from a practicality standpoint, and the test will feel more familiar to your user.