Advanced Methods for Usability Testing

Today’s full-day workshop at the UI11 Conference is an in-depth course by Rolf Molich on all aspects of usability testing. Naturally I can’t give away all of Rolf’s tips and tricks; it’s his course, after all. But I will share some of the highlights.

Comparative Usability Evaluation (CUE)

Over the years, Molich has conducted several Comparative Usability Evaluation studies, in which a number of teams evaluate the same website (or application) in a specific timeframe. The teams choose their own preferred methods (until this year, all teams chose either an expert review or a usability test). All submitted reports are evaluated and compared by a small group of experts.

The combined studies have revealed some really interesting patterns, like:

  • About 70% of all reported problems were reported by a single team only. And these were not just minor problems: about 12% of these exclusively reported problems were serious ones.
  • The teams who found the most usability problems used usability testing. However, expert reviews proved to be more productive (number of reported issues per hour spent); see the graph below.
  • The teams who found the most usability problems (over 15, against an average of 9) needed much more time than the rest; see also the graph below. It would seem that finding the last few problems takes most of the time.

Discount usability testing

Molich was a little disappointed that only a few of the attendees knew the term ‘discount usability testing’. It turns out that he and Jakob Nielsen coined the term in 1997, and that he is a fervent practitioner of the method.

Discount usability testing means that by leaving out a second facilitator and video recordings, you have more time left to test more users. The trade-off is a busier job facilitating and writing down all observations. You will also have to think more pragmatically: if you forget a problem that occurred only once, it probably wasn’t that serious. If it happens twice, you’ll remember it. And if the user encounters a catastrophic problem, you will have enough time to write it down.

Of course video recording definitely has its strong points; for one thing, recordings prove to be very convincing when communicating the findings.

Some tips & tricks

  • Don’t test subtle or peripheral feature tasks before you have tested all core and critical tasks.
  • Always avoid hidden clues in the tasks or scenarios you design.
  • Get buy-in from the stakeholders before conducting the test, to avoid criticism of your method afterwards. (Such criticism usually occurs when people don’t like the results.)
  • Try using some open tasks (defined by the user).
  • A website isn’t the only thing that should be usable: make the usability report useful and usable too.
  • For every two negative findings, provide one positive finding.
  • Focus on productivity instead of quantity.

This day has been a great experience. I was able to learn from Rolf and my fellow attendees, to test my own skills (and fortunately I didn’t disappoint myself), and to share my own experience. Rolf, thank you for two inspiring sessions.
