Tag Archives: usability testing


Formal usability testing with eye tracking – Mealographer

Usability Testing

Usability tests fall into two general categories based on their aim: tests that seek to find usability problems with a specific site, and tests that aim to prove or disprove a hypothesis. This test falls into the former category. A search of the literature reveals that tests looking to uncover specific usability problems often use a very small number of participants, following Nielsen’s (2000) conclusion that five users are enough to find 85 percent of all usability problems. Nielsen derived this figure from earlier work (Nielsen and Landauer, 1993). Although there is much disagreement (Spool and Schroeder, 2001), the rule of thumb has the advantage of fitting the time and money budgets of many projects.
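Nielsen’s five-user rule comes from a simple cumulative-discovery model: if each test user independently uncovers a given problem with probability λ (about 0.31 in Nielsen and Landauer’s data), then n users find 1 − (1 − λ)^n of all problems. A minimal sketch, assuming that published value of λ:

```python
# Cumulative-discovery model behind Nielsen's five-user rule:
#   found(n) = 1 - (1 - L)**n
# where L is the average probability that one user uncovers a given
# problem (L ~= 0.31 in Nielsen and Landauer, 1993).

def problems_found(n_users, discovery_rate=0.31):
    """Expected fraction of all usability problems found by n_users."""
    return 1.0 - (1.0 - discovery_rate) ** n_users

if __name__ == "__main__":
    for n in (1, 3, 5, 15):
        print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
```

With n = 5 this comes to roughly 84–85 percent, which is where the figure quoted above originates; the curve also shows the diminishing returns that make large participant counts hard to justify on small budgets.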

Use of Eye-Tracking Data

In terms of raw data, eye tracking produces an embarrassment of riches. A text export of one test of Mealographer yielded roughly 25 megabytes of data. There are a number of different ways eye tracking data can be interpreted, and the measures can be grouped into measures of search and measures of processing or concentration (Goldberg and Kotval, 1999):

Measures of search:

  • Scan path length and duration
  • Convex hull area, or the area of the smallest convex shape enclosing the scan path
  • Spatial density of the scan path
  • Transition matrix, or the number of movements between two areas of interest
  • Number of saccades, or sizable eye movements between fixations
  • Saccadic amplitude, or the distance covered by each saccade

Measures of processing:

  • Number of fixations
  • Fixation duration
  • Fixation/saccade ratio

In general, longer, less direct scan paths indicate poor representation (such as bad label text) and confusing layout, and a higher number of fixations and longer fixation duration may indicate that users are having a hard time extracting the information they need (Renshaw, Finlay, Tyfa, and Ward, 2004). Usability studies employing eye tracking data may employ measures that are context-independent such as fixations, fixation durations, total dwell times, and saccadic amplitudes as well as screen position-dependent measures such as dwell time within areas of interest (Goldberg, Stimson, Lewenstein, Scott, and Wichansky, 2002).
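Several of the context-independent measures above are straightforward to compute from a fixation log. The sketch below assumes a hypothetical export format of (x, y, duration in ms) triples; real tracker exports vary by vendor:

```python
import math

# Hypothetical fixation log: (x, y, duration_ms) triples, as an eye
# tracker might export for one task.
fixations = [(100, 200, 250), (400, 210, 180), (420, 500, 320), (150, 480, 200)]

def scan_path_length(fixs):
    """Total Euclidean distance travelled between successive fixations."""
    return sum(math.dist(a[:2], b[:2]) for a, b in zip(fixs, fixs[1:]))

def saccade_count(fixs):
    """Number of saccades = number of jumps between fixations."""
    return max(len(fixs) - 1, 0)

def mean_fixation_duration(fixs):
    """Average dwell per fixation, a rough processing-difficulty signal."""
    return sum(f[2] for f in fixs) / len(fixs)
```

Under the interpretation cited above, a longer scan path on the same task suggests a search problem (layout or labeling), while longer mean fixation durations suggest a processing problem (hard-to-extract information).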

Because of the time frame of this investigation, the nature of the study tasks, and the researcher’s inexperience with eye tracking hardware and software, eye tracking data was compiled into “heat maps” based on the number and distribution of fixations. These heat maps are interpreted as a qualitative measure.
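The heat-map idea reduces to binning fixations over a screen grid. A minimal sketch (not the vendor tool’s actual algorithm): divide the screen into cells and accumulate fixation duration per cell; production tools additionally smooth the grid (for example with a Gaussian kernel) and render it as a color overlay.

```python
def heat_map(fixations, width, height, cells=10):
    """Return a cells x cells grid of summed fixation durations.

    fixations: iterable of (x, y, duration) with x < width, y < height.
    """
    grid = [[0.0] * cells for _ in range(cells)]
    for x, y, dur in fixations:
        col = min(int(x / width * cells), cells - 1)
        row = min(int(y / height * cells), cells - 1)
        grid[row][col] += dur
    return grid

# Two nearby fixations on a 1024x768 screen land in adjacent cells.
hm = heat_map([(100, 200, 250), (110, 205, 180)], width=1024, height=768)
```

Reading the result qualitatively, as done in this study, means looking for hot cells over navigation elements and labels rather than comparing exact dwell numbers.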


Notes: Web site usability, design, and performance metrics

Palmer, J.W. (2002). Web site usability, design, and performance metrics. Information Systems Research, 13(2), 151-167.

In this study Palmer looks at three different ways to measure web site design, usability, and performance. Rather than testing specific sites or trying out specific design elements, this paper looks at the validity of the measurements themselves. Any metrics must exhibit at least construct validity and reliability, meaning that the metrics must measure what they say they measure, and they must continue to do so in other studies. Constructs measured included download delay, navigability, site content, interactivity, and responsiveness (to user questions). The key measures of the user’s success with the web site included frequency of use, user satisfaction, and intent to return. Three different methods were used: a jury, third-party rankings (via Alexa), and a software agent (WebL). The paper examines the results of three studies, one in 1997, one in 1999, and one in 2000, involving corporate web sites. The measures were found to be reliable, meaning jurors could answer a question the same way each time, and valid, in that different jurors and methods agreed on the answers to questions. In addition, the measures were found to be significant predictors of success.
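One common way to quantify the kind of inter-juror agreement Palmer relies on is Cohen’s kappa, which corrects raw agreement between two raters for the agreement expected by chance. A sketch of that statistic (not Palmer’s actual procedure, and the answer labels below are made up):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical answers."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the two raters' marginal answer frequencies.
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical juror answers to "is this site easy to navigate?"
kappa = cohens_kappa(["yes", "yes", "no", "no"], ["yes", "yes", "no", "yes"])
```

A kappa near 1 supports the reliability claim; a kappa near 0 would mean the jurors agree no more often than chance, however high their raw agreement looks.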

This is an interesting article because in my experience, usability studies are often all over the place, with everything from cognitive psychology and physical ergonomics to studies of server logs to formal usability testing to “top ten usability tips” lists. Some of this can be attributed to the fact that it is a young field, and some of it is due to the different motives fueling research (commercial versus academic). One thing in the article I worry about, however, is any measure of “interactivity” as a whole. Interactivity is not a simple concept to control, and adding more interactivity is not always a good idea. Imagine a user trying to find the menu on a restaurant’s web site: do they want to be personally guided through it via an interactive Flash cartoon of the chef, or do they want to just see the menu? Palmer links interactivity to the theory of media richness, which has a whole body of research behind it that I am no expert on. But I would word my jury questionnaires to reflect a rating of appropriate interactivity.

The most important impact of this study is that it helps put usability studies on a more academically sound footing. It is very important to have evidence that you are measuring what you think you are measuring. It would be interesting to see if other studies have adopted these particular metrics because of the strong statistical evidence in this study.

The most straightforward metric, download delay, is also one that has been discounted lately. The thought is that with so many users switching to broadband access, download speed is no longer the issue it used to be. That discounting is premature, especially for sites with information-seeking interfaces, which are often very dynamic and rely on database access. No amount of bandwidth will help if your site’s database server is overloaded.

Information visualizations and spatial maps on the web – Usability concerns

Visualizing the web

Although web technologies are constantly changing, most users still browse the web the same way they did back in 1995: typing keywords into search boxes, clicking from home page to section to subsection on a navigation bar, or following link after link after link. The fact that it is called a “web” suggests that there should be other ways of navigating websites, and there are a number of projects attempting to employ information visualizations and spatial maps to do so.

All web pages organize information visually, but “information visualization centers around helping people explore or explain data that is not inherently spatial, such as that from the domains of bioinformatics, data mining and databases, finance and commerce, telecommunications and networking, information retrieval from large text corpora, software, and computer-supported cooperative work” (“InfoVis 2003 Symposium”). Spatial metaphors are used to communicate different levels of information. A simple, static example would be a personal homepage built to look like the designer’s home, with links to favorite movies in the living room and recipes in the kitchen. A more advanced example would be a customer relationship management system for a large company that, instead of presenting a list of technical support problems and solutions, displays an interactive map of problems, with more common problems in a larger font size and recent problems in red. In both cases, users get an immediate grasp of complex information.
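The support-map example boils down to a styling rule: map each problem’s report count to a font size and its recency to a color. A minimal sketch, with the scale bounds and the log scaling chosen here purely for illustration:

```python
import math

def style_for(report_count, days_since_last_report,
              min_px=12, max_px=36, max_count=1000):
    """Map a problem's frequency and recency to (font_size_px, colour).

    Log scaling keeps rarely reported problems readable instead of
    letting one runaway problem dwarf everything else.
    """
    scale = math.log1p(report_count) / math.log1p(max_count)
    size = round(min_px + (max_px - min_px) * min(scale, 1.0))
    colour = "red" if days_since_last_report <= 7 else "black"
    return size, colour
```

The same two-channel encoding (size for magnitude, color for recency) is what lets a user “get an immediate grasp” of the problem space without reading a list.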

Such visualizations are intended to help solve two current web usability problems: the lack of a wide view of web structure, and the lack of query refinement based on relationships among retrieved pages (Ohwada 548). But they must do so without creating additional usability barriers. This paper will describe three current information visualization projects and discuss some of the usability issues these sorts of projects face.
