Usability testing and its two evil brothers

Recently I’ve been thinking a lot about how we use usability testing in the design process, or more specifically, our impetus for doing it. When most people think of usability testing, they picture a user sitting in front of an application, talking through what they think of it…and in some ways that’s pretty much it. However, it’s the objective of the test that determines whether or not you will get useful and honest data from the user. What do I mean by useful? Information about the application that is validated or uncovered through a user’s feedback, such as running a usability test on a new UI to see how it stacks up against the old version. And by honest I mean credible data that comes unfettered from the user rather than a conclusion they are led to. This article will go through what I believe are the three driving reasons for doing usability testing: one good, and two less than admirable.

Testing to validate and fine-tune

This is what usability testing should be: testing a design either to validate whether it is on the right track, or to see what needs to be clarified or iterated on. This is where usability testing is most useful and helpful. Nearly all good usability tests are looking for something along these lines: gaps between a design and a user’s expectations and goals, or confirmation that the design is on the right track.

Proving to non-believers

This is the first of the two evil brothers of usability testing. Proving to non-believers might be my least favorite thing to witness; it resembles a test of a designer’s sanity more than anything else. What I’m talking about is using usability testing to prove internally that a designer knows what they are doing. The best example of this is using a test to decide between an industry standard and the deprecated standard the application currently uses. The problem is, there’s nothing to test. It’s a waste of time to go through the motions of setting up a usability test only to find out that a round wheel rolls better than a square one. It’s fine to test multiple competing designs and compare them, as long as the designs have some level of equivalence, each with its own trade-offs. But if you don’t trust your designer’s judgment to make baseline calls in their designs, you have much bigger problems than usability testing is going to solve.

Getting the answers you want to hear

The second evil brother is the worse of the two, because it means the usability test has been structured to lead the user toward a desired result. This usually occurs for one of two reasons: 1) stakeholders don’t want to spend the resources to fix the current application, or 2) the design team doesn’t want to iterate any further. Rather than admitting this, usability testing is used to prove that the current state of the application is “fine” and needs no further work.

But wait, how can you lead a user to say something? Nothing is keeping them from telling the truth, right?

Let’s say there is an application with a feature you want to “test”; call it feature A. Feature A is used for data entry and is touted as flexible and customizable. In reality, users have repeatedly complained that it is extremely difficult to configure and has a disjointed workflow.

  • To start, prep the usability test so the difficult parts of feature A are configured ahead of time, leaving only the most obvious part for the user to do during the test.
  • Second, write the script so it walks the user explicitly through the data entry process, making the flow seem obvious.
  • Next, dismiss the handful of negative responses from overly honest users as outliers.
  • Presto! The user feedback comes back, unsurprisingly, positive for feature A.

Now, I don’t believe that most usability tests are rooted in skepticism, blatantly dishonest, or designed to produce inflated, self-serving data; the evil brother examples are fairly extreme. The important thing is to make sure your usability tests aren’t structured to cut a few corners and give yourself a slight advantage. Keep directive scripting to a minimum by leaving the task description fairly general. Keep the objective of your tests clear, and do your best to replicate a state in the application that reflects a real-world scenario; the data will be more accurate from a practical standpoint, and the application will feel more familiar to your user.

Happy testing!


2 Responses to “Usability testing and its two evil brothers”

  1. Great topic; the motives behind requesting a usability test can either lead to effective solutions or bastardize the process altogether.

    Regarding the use of testing to prove to non-believers… I thought hearing how this motive is seen from the designer’s angle was enlightening and makes sense. I can only attest to the merits of this from the perspective of the usability analyst, when met with resistance as to why our practice is necessary. Most of the time, there’s nothing to be said after a skeptic observes a user’s behavior as they make their way through a process to complete a task. The value in testing should speak for itself. The design either works or it doesn’t. But like you said, if you are merely testing because you question the design to begin with… what are you really looking to glean from testing at that point?

    Well stated on all counts!

  2. Thanks Shannon G.! Glad you liked it.
    I should have noted in the article that when the last of the “evil brothers” occurs, it’s most often because there isn’t a third party, such as a usability engineer, constructing and running the test, as they typically have no conflicting motivation.



About the Author

Website: Aaron Pearlman
Aaron is an Interaction Designer with industry experience ranging from developing rich web applications to leading and managing agile teams in an enterprise development environment. He has an MSI in Human-Computer Interaction focused on conceptual design and decision making through information visualization, with areas of interest including demonstrating client objectives through UX, usability, and goal-centered design. When he's not being a design geek, you can find Aaron playing the guitar, reading, spending money on a Wii game, or at the beach. Follow him on Twitter at