
Tuesday, July 07, 2015

Measures of User Experience

I was thinking today about my experience advising clients who want to do user testing of their new software or digital media. Typically, clients new to user testing don't know what they want to measure, and once introduced to what can be measured, they want to record it all.

Having conducted user testing sessions and analyzed the resulting data, I can assure you that more data is not necessarily better (and rarely is). Many measures and metrics are simply irrelevant to your organization's goals and the needs of your users.

No software has yet been invented that can adequately record everything, and there never seem to be enough people to observe and record all that could be recorded. Besides, sophisticated user testing software is expensive, and having more than a couple of observers during a session often just makes the participant feel too much like a guinea pig (although a two-way mirror to another room can help). And making someone analyze and report on quantitative data that is not necessary and will likely never be used is just cruel and unusual punishment.

My advice is to start by determining your goals and priorities. Then review my list below of some common digital media metrics and measures to figure out which ones best serve what you want to achieve. (If you are not familiar with a term, just google it, as they are quite standard.)

Designers and developers are increasingly interested in measures related to how an application makes users feel. When doing user testing, it is important not to think about everything in terms of efficiency. If a new website feature can be used quickly and easily but makes us angry and never want to return, it has failed spectacularly at a primary goal. The difficulty in measuring subjective phenomena is being sure that the operational definition used accurately captures the phenomenon. For instance, a user smiling during a test can mean that they are happy, but there are also perplexed smiles and the polite smiles people give to strangers as a social nicety.

Usability or User Experience Measures

  • Task completion rate (also failure rate)
  • Task completion time
  • Path analysis
  • Number of clicks to desired content
  • Number of times user clicked "Help" or "Search"
  • Number of times user asked facilitator for help
  • Error rate
  • Time spent on X (as a measure for "engagement")
  • Most used feature
  • Least used feature
  • Outcome based (e.g. if goal is to learn X)
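
To ground a few of these, here is a minimal sketch (the session records and field names are my own hypothetical examples, not from any standard tool) showing how task completion rate, completion time, and error rate might be computed from per-participant observation notes:

```python
# Hypothetical per-participant session records from a user test.
# Field names are illustrative assumptions, not from any standard tool.
sessions = [
    {"completed": True,  "seconds": 42.0, "errors": 1, "help_clicks": 0},
    {"completed": True,  "seconds": 65.5, "errors": 0, "help_clicks": 2},
    {"completed": False, "seconds": 90.0, "errors": 3, "help_clicks": 1},
]

n = len(sessions)

# Task completion rate (its complement is the failure rate).
completion_rate = sum(s["completed"] for s in sessions) / n

# Mean task completion time, over successful sessions only,
# since failed attempts often end arbitrarily.
times = [s["seconds"] for s in sessions if s["completed"]]
mean_time = sum(times) / len(times)

# Error rate: average number of errors per session.
error_rate = sum(s["errors"] for s in sessions) / n

print(f"completion rate: {completion_rate:.0%}")   # 67%
print(f"mean time (successes): {mean_time:.1f}s")  # 53.8s
print(f"errors per session: {error_rate:.1f}")     # 1.3
```

Even this toy example shows why planning matters: deciding up front whether failed attempts count toward completion time changes the numbers you report.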

Affective, Satisfaction, and Hedonic Measures

  • User reported
    • Task satisfaction rate
    • Application level satisfaction rate
    • Favourite feature
    • Least liked feature
  • Feelings observed or reported, such as
    • Happy or pleased
    • Frustrated
    • Nostalgic
    • Angry or agitated
    • Sad
    • Confused
  • Social behaviours exhibited (e.g. number of times the "share" feature is used)
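
Self-reported satisfaction measures are usually gathered on a rating scale and then aggregated. A minimal sketch (the 1–5 scale, task names, and ratings are hypothetical assumptions for illustration):

```python
# Hypothetical post-task satisfaction ratings on a 1-5 scale;
# task names and values are illustrative assumptions.
task_ratings = {
    "search":   [4, 5, 3, 4],
    "checkout": [2, 3, 2, 1],
}

def mean(xs):
    return sum(xs) / len(xs)

# Task-level satisfaction: mean rating per task.
task_satisfaction = {task: mean(r) for task, r in task_ratings.items()}

# Application-level satisfaction: mean of all ratings pooled together.
all_ratings = [r for ratings in task_ratings.values() for r in ratings]
app_satisfaction = mean(all_ratings)

print(task_satisfaction)                   # per-task means
print(f"overall: {app_satisfaction:.1f}")  # overall: 3.0
```

Note how the task-level breakdown surfaces a problem (the "checkout" task) that the application-level average would hide.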

These are just a few of the many measures and metrics that can be collected, and each one has its uses and inherent problems. So heed my words about planning well before testing, and really consider whether a measure will give you meaningful, useful, and accurate data.

Let me know if I missed a particularly common or useful measure.

Thursday, December 20, 2012

People Should Still Not Have to Think!

I attended a talk recently by usability expert Steve Krug. His book Don't Make Me Think, written in 2000, helped convince me in the early days of my Internet career of the importance of usability and the need to study it. So I totally geeked out when I had the opportunity, all these years later, to hear him speak in person. I grabbed his book from my shelf, next to my other treasured classics such as those by Nielsen and Holzschlag, and hoped to get an autograph.

Krug was sponsored by the University of Toronto's Association for Information Systems, who very kindly squeezed me into their full event at UofT's Faculty of Information. His talk addressed the continued need for usability testing and recent developments that make it easier than ever to do.

Despite the passage of time since publication of his landmark book, Krug still asserts that too much digital media design is not user-friendly and consequently "if you're not usability testing you must be nuts".

Krug noted that in the past usability testing was difficult and expensive, so there could be an excuse not to do it. Usability tests were conducted in specialty labs that could record testers and had a private room separated by one-way glass to allow developers and designers to observe unobtrusively. The labs and test experts were very expensive, and labs were most often offsite. It was also difficult to recruit testers, as they needed to be physically present in the lab.

But advances in recent years have made usability testing easy and inexpensive. Screen-sharing technology and software that records usage are now cheap and easy to use, so usability testing sessions can be set up pretty much anywhere and broadcast to development teams located anywhere with an Internet connection. Remote testing is also an attractive option, Krug suggested, as it makes recruiting testers much easier, and little essential information is lost in the process.

To demonstrate the ease of doing such a test, Krug organized a testing session on the spot. He tested the mobile application Clear.

As usability testing should be done not for "statistical validity but actionable insight", the power of his impromptu test was immediately apparent as the tester reached roadblocks in her usage. The tester was asked to express her thought processes out loud (i.e. the think-aloud protocol) as she used Clear, and she was able to clearly articulate her problems.

Within moments, the tester provided evidence of problems and direction for changes. It wasn't complicated, expensive, or time-consuming, but the input gained would dramatically improve the application (and likely make its makers more money).

Krug's six maxims for usability testing
I condensed the maxims as follows:
  1. Do usability testing (with 3 people) every month.
  2. Start testing earlier in a project than you think (e.g. test prototypes or competitors' products).
  3. Recruit loosely and grade on a curve (i.e. don't get so hung up about finding the ideal target user that you don't test as frequently).
  4. Make usability testing a spectator sport (i.e. invite as many people from the team to observe testing sessions together as "usability testing is the ultimate way to resolve debates around design issues").
  5. Prioritize findings - you'll uncover a lot of problems, so identify the top three problems per participant.
  6. Tweaks are better than redesign.
Tips on testing mobile apps & sites
I asked Krug for some tips on testing mobile applications or sites. First, he noted that one can share the screen of a mobile device just as easily as that of a website (as witnessed by the Clear test session), so special video cameras to record mobile device usage are not necessary.

I also asked how one can overcome the difficulties of testing a mobile app or site in the context of use, particularly when the context is important, as with location-based services. Krug offered three points:
  1. How important is the context? Is it essential functionality? If not, testing in context may not be that crucial.
  2. How realistic does the context need to be? That is, can it be simulated?
  3. Even if context is crucial, there is still tremendous value testing anywhere as problems will still be uncovered.
So the message is clear - just test. And do it frequently.

At the end of Krug's talk, I hesitantly took out my book to ask for his autograph. While waiting in line, I noticed another person doing the same thing - for the same reason. I'm clearly not the only person who has found his advice tremendously useful and influential.