Signal vs. Noise
Usability Myths?

27 Mar 2003 by JF

UIE says that certain usability myths need reality checks. They may be right, but I have a big problem with their reasoning. For example, for the “users give up because pages take too long to download” myth they say:

Testing shows no correlation between page download time and users giving up.

Here’s my problem: people aren’t very likely to give up if you’re watching over their shoulder. One of the biggest, most often ignored issues with formal usability testing is that people don’t like to fail in front of other people. Plus, people are usually a lot more patient in the presence of someone else than they might be if they were alone. Finally, did people know they could “give up” while you were conducting your tests? What were their expectations and instructions?

Here’s another myth: “users will leave a site if they don’t find what they want after three clicks.” UIE’s rebuttal: “In fact, on every site we have tested in the last three years, it takes more than three clicks (except for featured content) to reach any content at all. Not a single user has left any of these sites within three clicks, and only a handful chose featured content links.”

Again, did people know they could leave the site if they didn’t find the content within three clicks? Or did they keep trying and trying so they wouldn’t fail in front of the testers? Remember, when you are user testing a site you are also testing the people performing the tasks. When people are being watched and evaluated they’ll often go to great lengths to make sure they succeed, because giving up after not being able to find something within a few clicks can make someone feel like a failure. Not being aware of this behavior can lead to inaccurate, skewed results. And skewed results make it really difficult to debunk myths. Your thoughts?

Comments

28 Mar 2003 | p8 said...

I agree with the overall idea of the article. Some of these guidelines are treated as if they were laws of nature, which can stifle true innovation. Guidelines should always be questioned.

For example, Jaron Lanier, in an interesting article about building big programs reliably and the fundamental changes needed in software development, says:

"When you go to school and learn how to program, you are taught about an idea like a computer file as if it were some law of nature. But files used to be controversial. The first version of the Macintosh before it was released didn't have files. Instead, they had the idea of a giant global soup of little tiny primitives like letters. There were never going to be files, because that way, you wouldn't have incompatible file formats."

28 Mar 2003 | p8 said...

I agree that - "users will leave a site if they don't find what they want after three clicks" - is a myth. I sometimes leave a page if I don't see what I want after zero, one or two clicks.

28 Mar 2003 | p8 said...

These myths are created when guidelines are taken out of context. Take the author's example: five to eight test participants might not be appropriate in all cases.

For example:
"You have undoubtedly heard that users give up because pages take too long to download.
... Of course, we forget about the delicious impatience of waiting for something good, like Christmas Morning, the Super Bowl, or the next Lord of the Rings movie. "

So what is the author's advice? Can we make websites as huge as we want? 2 MB of Flash per page? The users would just love the delicious impatience.

I think the guideline of making pages small is good in most cases, but there are exceptions, like some portfolio sites or movie trailers (you wouldn't want to watch a movie trailer at 100x75 pixels).

28 Mar 2003 | p8 said...

I agree the author should be more critical of his test conditions. Test conditions can define the test results.
Scientists used to believe people needed two eyes to see depth, because they tested people with their heads fixed in position in rooms without texture. But in real life you can also perceive depth from the texture on objects and from moving your head (parallax shift), among other things.

28 Mar 2003 | said...

Perhaps it is the rather simple-minded search for rules as a replacement for expert reasoning skills (a cost-saving oversimplification).

28 Mar 2003 | fajalar said...

p8's being prolific today...

Here's a link to a site that has a sample size calculator. You can use the calculator to see how many test subjects (usability engineer = evil scientist) you will need to be 95% or 99% confident that you have found what you are looking for in the test, within a given margin of error (the confidence interval).

The downside is that if your user population is 1000 people, and you want to be 95% sure you have found the usability issues (not problems) with a margin of error of only 1%, you will need to test 906 people.

Anybody here ever get to test 906 people?
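For what it's worth, that 906 figure matches the standard sample size formula for a proportion with a finite population correction. A minimal sketch in Python, assuming the calculator uses the usual z-score formula with worst-case p = 0.5 (the function name is mine, not the calculator's):

import math

def sample_size(population, z=1.96, margin=0.01, p=0.5):
    """Required sample size for estimating a proportion.

    z:      z-score for the confidence level (1.96 ~ 95%, 2.576 ~ 99%)
    margin: desired margin of error (0.01 = +/- 1%)
    p:      assumed proportion; 0.5 is the worst case
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction

print(sample_size(1000))  # -> 906, the number quoted above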

Also, I am trying to find the paper, but I read something last year that tested user wait times during site downloads. It essentially said that users will wait quite a while (up to 10 minutes in one case, IIRC) if they think that what is downloading is "hard for the computer." And they are willing to wait far beyond the 250ms threshold, and without frustration, if you show a progress feedback indicator that has movement and relates to the task (i.e. "Searching..." with an animating ellipsis).
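A toy sketch of that kind of indicator in Python, for a terminal: task-related text ("Searching") plus visible movement (the cycling ellipsis). The timing values are arbitrary:

import itertools
import sys
import time

def search_with_feedback(duration=5.0):
    """Animate "Searching..." until the (simulated) work finishes."""
    start = time.time()
    for dots in itertools.cycle(["", ".", "..", "..."]):
        if time.time() - start > duration:
            break
        sys.stdout.write("\rSearching" + dots + "   ")  # padding erases leftover dots
        sys.stdout.flush()
        time.sleep(0.3)
    sys.stdout.write("\rDone.         \n")

search_with_feedback()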

UIE doesn't like to put out statistics. And this is the thing that bothers/amuses me the most about usability/UCD: No one can seem to agree on anything that we do.

And it is not just the usability participants apparently. Here's a paper on The Evaluator Effect (pdf) which looks at what the usability evaluator brings to and takes away from a test. Just more validation that none of us can agree on what is a usability issue.

28 Mar 2003 | Don Schenck said...

One problem I see a lot: people try to "nail it" on the first attempt. Folks, do your best, do what you believe is adequate testing, then put it out there and _refine_ it. Nothing wrong with that.

Just my old guy common sense speaking there. I'll now go back to my square foot garden.

28 Mar 2003 | fajalar said...

I agree, Don. Because you will get real feedback when people are using it for real. You will get somewhat real feedback in a lab or informal usability setting.

Trouble is, by the time the product has been released, I am usually on 2 more projects and don't have time to go out and test. I have been trying to build in usability during pilot phases of implementation, but it is not received well because it takes time away from "the real work."

And thanks for the gardening link (again:). I sent it to my wife and she likes it. And will soon be rototilling half our yard to plant a garden. I am going to pave the other half, buy an SUV, and leave it there running 24/7. Plants need carbon monoxide, right?

28 Mar 2003 | Don Schenck said...

Fajalar! NOOOO!!! Don't till; dig your square foot garden by hand. It's like Biodynamic French Intensive Gardening, and square foot gardening rules.

Serious.

28 Mar 2003 | Darrel said...

I think this is the year of our square foot garden. Is it like container gardening?

28 Mar 2003 | Nick said...

One of my favorite kooky papers on the topic is a piece from 1998 that actually concludes longer download times are better, because they give users a sense of anticipation that makes them browse more actively later on.

Talk about far-from-normal experimental conditions. They had users stare at one of two versions of an animated gif for 20 seconds, and then set them loose on CNN. One of the gifs "animated" itself to look like it was taking a long time to download, the other was static. After reaching CNN they discovered the people who had to watch the "slow loading image" browsed "more actively" than their static peers. Their conclusion is based on the fact that the "more active" users made more "clicks" on CNN's page during the subsequent time period (10 minutes I think?) after viewing the image.

Clicks?

How can making more clicks be a guaranteed good thing? If I'm engaged with content, I click very little. If I'm bored, I click a lot. Then again, if I'm searching through a site I'm engaged with, I might make a lot of clicks. My point is that clicks alone are a near-meaningless measure of usability success.

Anyway, these researchers released a part II in 2001 that continues exploring what they've oh-so-properly dubbed the "Tease Effect."

I haven't read it, but I can only imagine...

28 Mar 2003 | p8 said...

Nick said: "After reaching CNN they discovered the people who had to watch the "slow loading image" browsed "more actively" than their static peers."

Probably because they expected something spectacular after the slow-loading image but couldn't find it.

28 Mar 2003 | pb said...

Wow, what a lousy article. Surely these are two items that couldn't ever be tested in a lab.

28 Mar 2003 | Scott M. said...

You're absolutely correct, JF. And this is a pretty lousy article.

I think usability testing can be extremely useful, but you've got to test what can actually be evaluated successfully in a given environment. Sure, there are many myths, but it's interesting that the author chose to highlight experiences I don't believe can be successfully evaluated under test conditions.

Crap, I've got more to say, but a meeting calls....

30 Mar 2003 | Tim Parkin said...

If you're usability testing people, give them a task and a sample of similar websites, and tell them they have to find results in 50% of the websites within a set time limit. See how many skip to the next site when they get frustrated. Put your client's site at different positions in the list. You will get not only absolute results but also comparative results against other (maybe competing) sites. This is how people actually browse: they have a time limit ("I can only spend 10 minutes doing this" or "my dinner is ready in 15 minutes") and they will look at a sample of sites (the first five in a search engine result). They will then jump from site to site until they get a satisfactory answer (or two or three, maybe). Don't stand over them; set up a screen recorder instead. Also, perhaps ask them to vocalise their frustrations and achievements.
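One way to implement the "different positions in the list" step is simple counterbalancing: shuffle the comparison sites for each participant and rotate the slot the client's site lands in, so its position can't systematically bias the results. A minimal sketch; the site names and rotation scheme are placeholders of mine, not Tim's:

import random

competitors = ["siteA.example", "siteB.example", "siteC.example", "siteD.example"]
client = "client.example"

def ordering_for(participant_index):
    """Random competitor order, with the client site's position
    rotating across participants."""
    sites = competitors[:]
    random.shuffle(sites)
    sites.insert(participant_index % (len(sites) + 1), client)
    return sites

for i in range(5):
    print("participant", i, ordering_for(i))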

31 Mar 2003 | pb said...

"See how many skip to the next site when they get frustrated."

Zero. I've never, ever seen anyone *skip* a task in a testing situation.

31 Mar 2003 | JF said...

I've never, ever seen anyone *skip* a task in a testing situation.

That was my initial point. People want to succeed when they are being tested, not skip a task and be seen as giving up.

01 Apr 2003 | MadMan said...

Related WebWord discussion is here. I wrote so much there that I'd get tired just copying n' pasting it here. :)

02 Apr 2003 | Phil Murray said...

Regarding how long a person is willing to wait for a download: I'll wait a lot longer for a page that I consider to be of significant interest to me. It's really hard to measure that in a meaningful way in a testing situation.

02 Apr 2003 | JF said...

I think that's a great point, Phil.

03 Apr 2003 | Dan Zlotnikov said...

How about this test method:
Take small groups of people and have them use a search engine to find something. How long do they wait before giving up on the search engine returning the result?

Must make sure that:

a) No obvious bias for or against particular search engines exists (i.e., eliminate Google from the list completely)

b) A list of equivalent search engines is available, and the choice of which one they start from is randomized

To make sure they don't wait too long, assign a limited amount of time to complete the task.

To minimize the effect from researcher presence, I suggest covert observation.

Any obvious holes you can see in this?
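For concreteness, here's one way point (b) and the time limit might be scripted: a randomized assignment of starting engines plus a cap on task time. A minimal sketch under my own assumptions; the engine names and the limit are placeholders, not anything Dan specified:

import random

# Google deliberately excluded, per point (a).
engines = ["engineA.example", "engineB.example", "engineC.example"]
TIME_LIMIT_SECONDS = 600  # cap so participants don't wait too long; value is arbitrary

def session_plan(participant_ids, seed=42):
    """Point (b): randomize which engine each participant starts from."""
    rng = random.Random(seed)  # fixed seed so the plan is reproducible
    plan = {}
    for pid in participant_ids:
        order = engines[:]
        rng.shuffle(order)
        plan[pid] = {"start_engine": order[0], "fallbacks": order[1:],
                     "time_limit": TIME_LIMIT_SECONDS}
    return plan

for pid, cfg in session_plan(["P1", "P2", "P3"]).items():
    print(pid, cfg)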
