Signal vs. Noise

Usability Objectives

17 Jul 2003 by Matthew Oliphant

If you are going to test a design, you need to know the criteria to pass the test. Thus usability objectives. What I want to know is, how do you define usability objectives, and when do you define them? How do you document them?

I am required to state my objective like this: xx% (can’t say 100%) of users will be able to complete the application within x time, with x instances of assistance, and less than x errors (an error is defined as…).
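
(For what it's worth, an objective stated in that form is easy enough to check mechanically once the blanks are filled in. Here is a minimal sketch; the thresholds, field names, and sample numbers are invented for illustration, since the x's above are left unspecified.)

```python
# Hypothetical sketch only: checking test results against an objective of the
# form "xx% of users complete the task within x minutes, with at most x
# instances of assistance and fewer than x errors". All thresholds and sample
# values below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Result:
    completed: bool   # did the participant finish the task at all?
    minutes: float    # time on task
    assists: int      # instances of assistance from the facilitator
    errors: int       # errors, however the team has defined "error"

def meets_objective(results, pass_rate=0.80, max_minutes=10.0,
                    max_assists=1, max_errors=3):
    """True if the required share of participants passed within the thresholds."""
    passing = [r for r in results
               if r.completed
               and r.minutes <= max_minutes
               and r.assists <= max_assists
               and r.errors < max_errors]
    return len(passing) / len(results) >= pass_rate

results = [Result(True, 6.5, 0, 1), Result(True, 8.0, 1, 2),
           Result(True, 9.5, 0, 0), Result(False, 12.0, 2, 4),
           Result(True, 7.2, 1, 1)]
print(meets_objective(results))  # True: 4 of 5 participants (80%) pass
```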

Frankly, I can’t stand it. I want to know how many people can complete the task, then I want agreement on how many people we (the design team/client) are willing to let not be able to use it. That’s it.

15 comments so far

17 Jul 2003 | Mike said...

Sadly, the company I work for (which should be applauded, actually, for its proactive stance and commitment to usability) requires usability objectives to be set in terms of user satisfaction ratings.

So, after a usability test, we ask users thoroughly unreliable questions such as "How satisfied were you with creating a new whatever with this tool?" and get answers on a 5-point scale from "Very Dissatisfied" to "Very Satisfied".

We then transform those responses into numbers, average them, and report a satisfaction rating as a percentage. For example, we'll report that "Users reported 72% satisfaction with creating whatevers." Finally, we match that against a usability objective such as "Higher user satisfaction than previous version for creating whatever" or "Higher than 75% user satisfaction ..."
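
(A rough sketch of that kind of conversion, assuming the obvious label-to-number mapping; the comment doesn't say exactly which formula the process uses, so this simply rescales the 1-5 mean onto 0-100.)

```python
# Hedged sketch: turning 5-point satisfaction answers into a percentage.
# The mapping and rescaling here are assumptions, not the commenter's exact process.
SCALE = {"Very Dissatisfied": 1, "Dissatisfied": 2, "Neutral": 3,
         "Satisfied": 4, "Very Satisfied": 5}

def satisfaction_percent(responses):
    scores = [SCALE[r] for r in responses]
    mean = sum(scores) / len(scores)
    return (mean - 1) / 4 * 100   # maps 1 -> 0% and 5 -> 100%

answers = ["Satisfied", "Very Satisfied", "Neutral", "Satisfied", "Dissatisfied"]
print(round(satisfaction_percent(answers)))  # 65
```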

It's ugly, but it's what's required by the ISO-audited process documentation, so we do it. Then we as usability developers ignore those metrics and focus more on the qualitative data and time-to-completion metrics.

17 Jul 2003 | Michael Spina said...

Usability tests around here are more informal. I'm the only person doing them (I basically convinced my boss to give me the time and resources to do so), and I conduct tests at various stages, some early on when we know there will be extensive changes. At that point it's not pass or fail, but rather a matter of finding out which problems need more attention, so objectives wouldn't help much.

Usability tests later in the game probably should have objectives, but right now I look at the results case-by-case.

17 Jul 2003 | jharr said...

Mike...
Sorry you have to rely on survey data. In all of the testing I've been involved with, the survey is a "nice to have" but usually conflicts with the real test findings. You can have a test where a user fails every task, but their survey will still rate the application highly. User perception is valuable, but not nearly as valuable as the data that comes from watching them complete or fail tasks.

17 Jul 2003 | Wilson Miner said...

Not the worst possible scenario here, but far from optimal. It usually comes down to my recommendation after filtering it through the opinions of both the creative director and the client, which is pretty freaking arbitrary.

17 Jul 2003 | Mike said...

Almost all of the usability tests that I have conducted/designed have been centered around fixing a seriously ailing interface with completely obvious design and usability errors.

In short, all of the usability issues were basically fat watermelons being pitched to us, with no difficulty in hitting them outta' the park.

In order to conduct an efficient test, however, we defined various usability metrics and followed those to a T. These metrics are based on the criteria that Matthew posted in the original post; however, we don't rely on those statistics alone for our final recommendations.

This quantitative data is extremely valuable, but it only tells half the story. We generate mounds of qualitative data as well from user surveys given post-test. These, mixed with the formal numerical data pulled from the test itself (using the metrics defined previously), are where we find our best material for the final deliverable.

P.S. Just my two cents on the number scale for a user survey: a Likert scale is probably one of the most widely used scales in determining user satisfaction, but how do you convert this verbal measurement to a numerical one?

Generally, one would say that "1" is a Strong Negative, "3" means a "Neutral" feeling, and "5" is a Strong Positive. However, some number bias usually creeps in: users who mean to answer "4" notice the "5" and are subconsciously attracted to the higher number, even when they honestly do not wish to circle it.

To defeat this problem, the conversion from a Likert scale to a numerical one should look something like this:

-2   -1   0   +1   +2

This should dissuade users from being biased in their number selection on a post-test survey. But many usability pros argue that the negative sign in front of the "negative" response numbers also hits the user subconsciously; this comment is already too long, though, so I'll save my $.02 on that for some other time ;)
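
(A small sketch of the centered mapping described above; the label-to-number assignment is the obvious one and is assumed, not prescribed by the comment.)

```python
# Centered Likert mapping: "Neutral" sits at zero and the extremes are symmetric,
# so the sign of the mean shows net direction at a glance.
CENTERED = {"Very Dissatisfied": -2, "Dissatisfied": -1, "Neutral": 0,
            "Satisfied": 1, "Very Satisfied": 2}

def mean_centered_score(responses):
    values = [CENTERED[r] for r in responses]
    return sum(values) / len(values)   # negative means net dissatisfaction

print(mean_centered_score(["Satisfied", "Neutral", "Very Satisfied"]))  # 1.0
```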

17 Jul 2003 | Mike said...

jharr,

I, too, often find that the survey data, especially for questions about the ambiguous internal state of "satisfaction", conflicts with the qualitative observations (and sometimes even the participants' comments). People seem to be very forgiving when it comes to rating software interfaces, either because of the prevalence of Cooperian apologists or just the fact that the word "satisfaction" means so many different things.

For example, am I satisfied now that I'm done the task? Yeah, I guess so. Was it easy? Not really, no.

That's why, while the user data is stored as internal company metrics, few of us take it all that seriously compared to the comments by users and less ambiguous metrics such as "Was the user able to complete the task without assistance?", "How long did it take the user to complete the task?" or, my favourite, "How many attempts (i.e., wrong paths) did the user take to complete the task?"

17 Jul 2003 | fajalar said...

In order to conduct an efficient test, however, we defined various usability metrics and followed those to a T.

That "usability metrics" link goes to Jakob's site. I searched thatbefore I posted this topic and got zero results. Course, I typed "Usability Objectives."

17 Jul 2003 | Mike said...

;)

I got the term "usability metrics" from his book Usability Engineering, so I figured there had to be *something* on his site about it ;)

17 Jul 2003 | Steve said...

The definition of the usability objectives obviously needs to depend on what you're trying to accomplish. It's a safe assumption that most projects aren't improving usability for the hell of it; there's a specific issue in mind. So, I tend to frame the test objectives around the original objectives of what we're trying to fix or improve.

For instance, one project had us redesigning the interface for the portal a large manufacturer used to communicate with their retailers. The existing interface was very cluttered, contained hundreds of links when most people used maybe 5-10, and was just very inefficient. So, since those were the issues we wanted to fix, those are the issues we tested: time to complete certain tasks, time to find new information, time to find familiar information, etc.

Of course you have to do this before you start putting your test together. In fact, I'd say setting your testing objectives is one of the very first steps in your test prep.

18 Jul 2003 | fajalar said...

I think satisfaction data is very important, but (as others have said), only a piece of the total information.

We've used SUMI and SUS to collect satisfaction data. We collect it at the end of each task and at the end of the session.

Coincidentally, Dr. Bob has a research review on the collection of participant feedback during and after the test. This link is also on WebWord.
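
(Since SUS comes up here: its scoring is standardized, so a short sketch is easy to give. Odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the total is multiplied by 2.5 to land on a 0-100 scale.)

```python
# Standard SUS scoring for the ten-item questionnaire; responses are the usual
# 1-5 agreement ratings, in item order (odd items positively worded, even negatively).
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5   # rescales the 0-40 sum onto 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```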

18 Oct 2003 | Fenny said...

Man, I have taken a usability subject at school. Ever since, I have had the most boring days of my life. Man, I think Dr. Jakob Nielsen is the most boring person on the surface of this earth. It might be good stuff and do a lot of wonders for your business/software/website/users, blah blah blah. But to me it is the most boring and painful thing I have ever done in my life. Sorry, mates, if I offend you, but I'm sure there are a lot of blokes out there who echo my sentiments.

Mate, I don't want to offend anyone, I just want to say it is very, very, very .............. boring

16 Jan 2004 | Sarah said...

If an application is designed well, the reward for users is that they will learn it faster, accomplish their daily tasks more easily, and have fewer questions for the help desk. As a developer of a well-designed application, your returns on that investment are more upgrade revenue, reduced tech support, better reviews, less documentation, and higher customer satisfaction. The rewards of building a good-looking Aqua application are worth taking the extra time.

31 Jan 2005 | compatelius said...

bocigalingus must be something funny.
