Tests designed to measure usability are in some ways more complex than those required for traditional software testing. In usability tests there are no simple input/output combinations of the kind that concern the traditional tester. Instead, the test design calls for the tester to present the user with a set of tasks to be performed, so knowledge of typical usage patterns for the software is necessary to plan the tests. Tasks that constitute a series of usability tests should be prioritized by frequency, criticality, and vulnerability (those tasks suspected before testing to be difficult or poorly designed).
For example, a usability test for a word processing program might consist of tasks such as the following (a prioritization sketch appears after the list):
# Open an existing document;
# Add text to the document;
# Modify the old text;
# Change the margins in selected sections;
# Change the font size in selected sections;
# Print the document;
# Save the document.
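Before running the session, each task can be given explicit frequency, criticality, and vulnerability ratings so the script is ordered by priority. The following is a minimal sketch, assuming hypothetical 1-5 ratings for each factor and a simple additive priority score:

```python
from dataclasses import dataclass

@dataclass
class UsabilityTask:
    """One task in the usability test script (names are illustrative)."""
    name: str
    frequency: int      # how often users perform the task (1 = rare, 5 = constant)
    criticality: int    # impact if the task fails (1 = minor, 5 = severe)
    vulnerability: int  # suspected difficulty or design weakness (1 = low, 5 = high)

    @property
    def priority(self) -> int:
        # A simple additive score; a real test plan might weight the factors.
        return self.frequency + self.criticality + self.vulnerability

tasks = [
    UsabilityTask("Open an existing document", frequency=5, criticality=4, vulnerability=1),
    UsabilityTask("Change the margins in selected sections", frequency=2, criticality=3, vulnerability=4),
    UsabilityTask("Save the document", frequency=5, criticality=5, vulnerability=1),
]

# Schedule the highest-priority tasks first.
for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
    print(task.priority, task.name)
```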
As the user performs these tasks, testers and video cameras keep a close watch on the process. Task completion times and system performance are observed and recorded, errors made by the user are noted, and the time needed to recover from errors is noted as well. Users' comments as they work may be solicited and recorded, and the video cameras can capture facial expressions and spoken remarks, which may be very useful for evaluating the system. These observations, comments, and recordings are the outputs of the usability tests.
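Each task attempt can be captured as a structured record so these observations are analyzable later. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass, field

@dataclass
class TaskObservation:
    """One user's attempt at one task (field names are illustrative)."""
    task_name: str
    completion_seconds: float               # time to complete the task
    errors: int = 0                         # errors made during the attempt
    recovery_seconds: float = 0.0           # time spent recovering from errors
    completed_correctly: bool = True
    comments: list[str] = field(default_factory=list)  # solicited user remarks

obs = TaskObservation("Print the document", completion_seconds=42.5, errors=1,
                      recovery_seconds=8.0,
                      comments=["Could not find the print preview"])
print(obs)
```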
Many of the usability test results will be recorded as subjective evaluations of the software. Users will be asked to complete questionnaires that express preferences and rankings with respect to features such as the following (a rating-scale sketch appears after the two lists):
# Usefulness of the software;
# How well it met expectations;
# Ease of use;
# Ease of learning;
# Usefulness and availability of help facilities.
In comparison testing, participants may also be asked to rank:
# One prototype over another;
# The current software system versus a competitor's;
# A new version of the software over the previous version.
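Responses to such questionnaires are commonly collected on a fixed rating scale. A minimal sketch, assuming a hypothetical 1-5 Likert scale (5 = strongly agree) and one list of ratings per user:

```python
# Hypothetical questionnaire items, each rated on a 1-5 scale.
QUESTIONS = [
    "The software was useful for my work.",
    "The software met my expectations.",
    "The software was easy to use.",
    "The software was easy to learn.",
    "The help facilities were useful and available when needed.",
]

def average_ratings(responses):
    """responses: one list of ratings per user, ordered like QUESTIONS."""
    per_question = zip(*responses)          # transpose to per-question tuples
    return [sum(r) / len(r) for r in per_question]

# Two illustrative respondents.
for question, avg in zip(QUESTIONS, average_ratings([[4, 3, 5, 4, 2], [5, 4, 4, 3, 3]])):
    print(f"{avg:.1f}  {question}")
```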
Usability testers also collect quantitative measures such as the following (one derived measure is sketched after the list):
# Time to complete each task;
# Time to access information in the user manual;
# Time to access information from on-line help;
# Number and percentage of tasks completed correctly;
# Number and percentage of tasks completed incorrectly;
# Time spent communicating with the help desk.
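Several of these measures are simple aggregates over per-task results. A sketch of computing the number and percentage of tasks completed correctly, using illustrative session data:

```python
# (task name, completed correctly) pairs from one session (illustrative data).
results = [
    ("Open an existing document", True),
    ("Add text to the document", True),
    ("Change the margins in selected sections", False),
    ("Print the document", True),
]

completed = sum(1 for _, ok in results if ok)
print(f"{completed} of {len(results)} tasks "
      f"({100 * completed / len(results):.0f}%) completed correctly")
```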
Testers can also count the number of (a tallying sketch follows the list):
# Errors made;
# Incorrect menu choices;
# User manual accesses;
# Help accesses;
# Time units spent using help;
# Incorrect selections;
# Negative comments or gestures as captured by video.
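Counts like these can be tallied directly from a coded log of the session. A minimal sketch using Python's collections.Counter, with hypothetical event codes:

```python
from collections import Counter

# Hypothetical event stream transcribed from a session recording.
events = [
    "incorrect_menu_choice", "help_access", "error", "manual_access",
    "incorrect_selection", "error", "negative_comment", "help_access",
]

counts = Counter(events)
print("errors:       ", counts["error"])
print("help accesses:", counts["help_access"])
```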
These measures, in conjunction with subjective data from user questionnaires, can be used to evaluate the software with respect to the four usability sub-factors: understandability, ease of learning, operability, and ease of communication. For example, the time to complete each task, the number of user manual accesses, and the time to access information in the user manual can be used to evaluate the understandability and ease-of-learning sub-factors. The number of incorrect selections and the number of negative comments or gestures can be used to evaluate the operability and ease of communication of the software.
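To keep the analysis consistent, the pairing of measures with sub-factors can be recorded explicitly. A small sketch with the groupings taken from the text; the measure names are illustrative:

```python
# Usability sub-factors mapped to the measures used to evaluate them.
SUBFACTOR_MEASURES = {
    "understandability":     ["time_per_task", "manual_accesses", "manual_lookup_time"],
    "ease_of_learning":      ["time_per_task", "manual_accesses", "manual_lookup_time"],
    "operability":           ["incorrect_selections", "negative_comments"],
    "ease_of_communication": ["incorrect_selections", "negative_comments"],
}

for subfactor, measures in SUBFACTOR_MEASURES.items():
    print(f"{subfactor}: {', '.join(measures)}")
```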
The raw data should be summarized and then analyzed. For performance data such as task timings, common descriptive statistics should be calculated, for example the average, median, and standard deviation. Usability testers need to identify and focus on the tasks that did not meet usability goals and those that presented difficulties to users. Problem areas are then prioritized so that the development team can work first on those deemed the most critical.
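These descriptive statistics are available in Python's standard statistics module. A minimal sketch over illustrative completion times for one task:

```python
import statistics

# Illustrative completion times (seconds) for one task across six participants.
timings = [38.2, 41.0, 55.7, 39.9, 62.3, 44.1]

print("mean:  ", statistics.mean(timings))
print("median:", statistics.median(timings))
print("stdev: ", statistics.stdev(timings))  # sample standard deviation
```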
As a result of the usability tests, all the analyzed data should be used to make recommendations for action. In this phase of usability testing, designers with knowledge of user-centered design and human factors staff with knowledge of human-computer interaction can work as part of the recommendation team. A final report is developed and distributed to management and to the technical staff involved in the project; in some cases the usability testing team may also give a presentation. When the project is complete and the usability requirements are satisfied, the usability data should be stored and used as benchmark data for subsequent releases and similar projects.
Usability testing is an important aspect of quality control. It is one of the procedures we can use as testers to evaluate our product and ensure that it meets user requirements on a fundamental level. Setting up a usability program to implement all the types of usability tests is costly. To support usability testing, an organization must also be committed to including usability requirements in the requirements specification, which is not always the case.
There are other, less expensive approaches to usability evaluation, such as preparing simple prototypes and questionnaires early in the life cycle for volunteer users, and instrumenting the source code to collect usage information. Finally, a software product can be equipped with a "complaint" facility that allows users to provide feedback to developers about problems that occur. None of these approaches works as well as full usability testing; in many cases the data are collected only after the software has been in operation, when it is no longer possible to make changes or improve quality.
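A complaint facility, for instance, can be as simple as an in-application hook that appends user feedback to a log that developers review later. A minimal sketch; the function name and log format are assumptions:

```python
import json
import time

def record_complaint(user_id: str, message: str,
                     path: str = "complaints.log") -> None:
    """Append one user-reported problem to a JSON-lines log (hypothetical format)."""
    entry = {"time": time.time(), "user": user_id, "message": message}
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

record_complaint("user42", "Margin settings were lost after saving.")
```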