An Insight into Usability Tests: Among the Most Indispensable in Software Testing
The true worth of a software product lies in its capability to provide not merely satisfaction but a sense of delight to its users. Hence the usability attributes of any software product are reflected in its understandability, ease of learning, operability & ease of communication as practically experienced by the users.
The approach to usability testing uses methodologies for gathering data from groups of end users who exercise the software product by performing certain tasks that represent its ultimate usage. Participants in the usability tests are carefully shortlisted so that they are true representatives of the final group of users.
Six Elements of Usability Testing:
Usability testing has different elements depending upon the interests of designers, developers, & software testing engineers.
1) Development of a test objective: Applicable to designers & software testing engineers;
2) Using representative sample of end users: Applicable to software testing engineers;
3) Building environment for the test representing the actual work environment: Applicable to designers & software testing engineers;
4) Documentation of observations of the users who either review or use a representation of the product: Applicable to developers & software testing engineers;
5) Collection, analysis, & summarization of qualitative & quantitative performance & preference measurement data: Applicable to designers, developers & software testing engineers;
6) Making recommendations for improvement in the software product: Applicable to designers & developers;
There are four types of usability tests, as described in the following table. These tests encompass the entire usability testing that can take place all across the software life cycle.
Sr. | Type of Usability Test | Stage at which Applicable
1 | Exploratory Tests | Carried out early, between the requirements & detailed design stages.
2 | Assessment Tests | Carried out after the high-level design stage.
3 | Validation Tests | Carried out when the code / test is quite close to release.
4 | Comparison Tests | Applicable at almost every stage; done along with the other types of usability tests.
1) Exploratory Tests under Usability Testing:
Usability testing spreads across the entire life cycle of the software product. Exploratory tests are carried out early in the life cycle, between requirements & detailed design. A user profile & usage model should be developed in parallel with this activity. The objective of exploratory usability testing is to examine a high-level representation of the user interface to see if it characterizes the user’s mental model of the software. The results of these types of tests are of particular importance to designers, who get early feedback on the appropriateness of the preliminary user interface design. More than one design approach can be presented via paper screens, early versions of the user manual, &/or prototypes with limited functionality.
Users may be asked to attempt to perform some simple tasks, or, if it is too early in the prototyping or development process, the users can “walk through” or review the product & answer questions about it in the presence of a tester. The users & testers interact. They may explore the product together; the user may be asked to evaluate the product. Users are usually asked for their input on how weak, unclear, & confusing areas can be improved. The data collected in this phase is more qualitative than quantitative.
2) Assessment Tests under Usability Testing:
Assessment tests are usually conducted after a high-level design for the software has been developed. Findings from the exploratory tests are expanded upon; details are filled in. For these types of tests a functioning prototype should be available, & testers should be able to evaluate how well a user is able to actually perform realistic tasks. More quantitative data is collected in this phase of testing than in the previous phase. Examples of useful quantitative data that can be collected are listed below (a small sketch of how such measures might be recorded follows the list):
# Number of tasks correctly completed/unit time;
# Number of help references/unit time;
# Number of errors (& error type);
# Error recovery time;
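As a rough illustration (not part of any standard), the following Python sketch shows one way a test team might record these measures for a single participant session; every field name & number here is a hypothetical example.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentSessionRecord:
    """Quantitative data captured for one participant in an assessment test.

    All field names are illustrative; adapt them to the metrics your
    usability requirements actually specify.
    """
    participant_id: str
    session_minutes: float                      # length of the observed session
    tasks_completed_correctly: int = 0          # tasks finished without error
    help_references: int = 0                    # times the user consulted help
    errors: dict = field(default_factory=dict)  # error type -> occurrence count
    error_recovery_seconds: list = field(default_factory=list)  # one entry per error

    def tasks_per_hour(self) -> float:
        """Correctly completed tasks per unit time (here, per hour)."""
        return self.tasks_completed_correctly / (self.session_minutes / 60.0)

# Example usage with made-up numbers:
record = AssessmentSessionRecord(
    participant_id="P01",
    session_minutes=45,
    tasks_completed_correctly=6,
    help_references=3,
    errors={"wrong menu choice": 2, "data entry": 1},
    error_recovery_seconds=[20.0, 35.0, 12.0],
)
print(f"{record.tasks_per_hour():.1f} tasks completed correctly per hour")
```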
Using this type of data, as well as questionnaire responses from the users, testers & designers gain insights into how well the usability goals as specified in the requirements have been addressed. The data can be used to identify weak areas in the design, & help designers to correct problems before major portions of the system are implemented.
3) Validation Tests under Usability Testing:
This type of usability testing usually occurs later in the software life cycle, close to release time, & is intended to certify the software’s usability. The key objective of validation usability testing is to compare the product against some defined standards or benchmarks. Testers want to determine whether the software meets the standards prior to release; if it does not, the reasons for this need to be established.
Having a standard is a precondition for usability validation testing. The usability standards may come from internal sources. These are usually based on usability experiences with previous products. External standards may come from competitors’ products. Usability requirements should be set for each project, & these may be based on previous products, marketing surveys, &/or interviews with potential users. Usability requirements are usually expressed in quantitative terms as performance criteria. The performance criteria usually relate to how well, & how fast, a user can perform tasks using the software.
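To illustrate the comparison against benchmarks, here is a minimal Python sketch that flags tasks whose measured performance misses a predefined criterion; the task names, benchmark values, & measured timings are assumed purely for demonstration.

```python
# Hypothetical usability benchmarks (e.g. derived from a previous release):
# maximum acceptable average completion time per task, in seconds.
benchmarks = {
    "open_document": 15.0,
    "add_text": 30.0,
    "print_document": 20.0,
}

# Average timings measured during validation testing (made-up numbers).
measured = {
    "open_document": 12.4,
    "add_text": 41.7,
    "print_document": 18.9,
}

# Flag every task whose measured performance misses the benchmark,
# so the reasons can be investigated before release.
failures = {
    task: (measured[task], limit)
    for task, limit in benchmarks.items()
    if measured[task] > limit
}

for task, (actual, limit) in failures.items():
    print(f"{task}: {actual:.1f}s exceeds the benchmark of {limit:.1f}s")
```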
4) Comparison Tests under Usability Testing:
Comparison tests may be conducted in any phase of the software life cycle in conjunction with other types of usability tests. This type of test uses a side-by-side approach, & can be used to compare two or more alternative interface designs, interface styles, & user manual designs. Early in the software life cycle, comparison testing is very useful for comparing various user interface design approaches to determine which will work best for the user population. Later it can be used at a more fine-grained level, for example, to determine which color combinations work best for interface components. Finally, comparison testing can be used to evaluate how the organization’s software product compares to a competing system on the market.
Methodology of Designing Usability Tests and their Performance Measurements:
Tests designed to measure usability are in some ways more complex than those required for traditional software testing. With regard to usability tests, there are no simple input/output combinations that are of concern to the traditional tester. The approach to test design calls for the tester to present the user with a set of tasks to be performed. Therefore, knowledge of typical usage patterns for the software is necessary to plan the tests. Tasks that constitute a series of usability tests should be prioritized by frequency, criticality, & vulnerability (those tasks suspected before testing to be difficult, or poorly designed).
For example, a usability test for a word processing program might consist of tasks such as the following (a rough sketch of one way to prioritize such tasks appears after the list):
# Open an existing document;
# Add text to the document;
# Modify the old text;
# Change the margins in selected sections;
# Change the font size in selected sections;
# Print the document;
# Save the document.
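Purely as an illustration of prioritizing tasks by frequency, criticality, & vulnerability, the Python sketch below assigns each task a simple additive score; the 1–5 scales, the scores, & the weighting are assumptions, not a prescribed formula.

```python
# Each task is scored 1-5 on how often users perform it (frequency),
# how serious a failure would be (criticality), and how likely it is
# to be difficult or poorly designed (vulnerability).
tasks = [
    {"task": "Open an existing document",              "frequency": 5, "criticality": 4, "vulnerability": 1},
    {"task": "Change the margins in selected sections", "frequency": 2, "criticality": 3, "vulnerability": 4},
    {"task": "Print the document",                      "frequency": 4, "criticality": 5, "vulnerability": 2},
]

# A simple additive priority score; any weighting scheme agreed by the
# test team could be substituted here.
for t in tasks:
    t["priority"] = t["frequency"] + t["criticality"] + t["vulnerability"]

# Highest-priority tasks are listed first for inclusion in the test plan.
for t in sorted(tasks, key=lambda t: t["priority"], reverse=True):
    print(f'{t["priority"]:2d}  {t["task"]}')
```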
As the user performs these tasks, testers & video cameras keep a close watch on the process. Time periods for task completion & the performance of the system will be observed & recorded. Any errors made by the user will be noted. Time to recover from errors will be noted. Users’ comments as they work may be solicited & recorded. In addition, the video cameras can be used to record facial expressions & spoken comments, which may be very useful for evaluating the system. These observations, comments, & recordings are the outputs/results of the usability tests.
Many of the usability test results will be recorded as subjective evaluations of the software. Users will be asked to complete questionnaires that state preferences & rankings with respect to features such as:
# Usefulness of the software;
# How well it met expectations;
# Ease of use;
# Ease of learning;
# Usefulness & availability of help facilities.
In comparison testing, participants may also be asked to rank:
# One prototype over another;
# The current software system versus a competitor’s;
# A new version of the software over the previous versions.
Usability testers also collect quantitative measures. For example:
# Time to complete each task;
# Time to access information in the user manual;
# Time to access information from on-line help;
# Number & percentage of tasks completed correctly;
# Number & percentage of tasks completed incorrectly;
# Time spent in communicating with the help desk.
Testers can also count the number of:
# Errors made;
# Incorrect menu choices;
# User manual accesses;
# Help accesses;
# Time units spent in using help;
# Incorrect selections;
# Negative comments or gestures as captured by video.
These measures, in conjunction with subjective data from user questionnaires, can be used to evaluate the software with respect to the four usability sub-factors: understandability, ease of learning, operability, & ease of communication. For example, time to complete each task, number of user manual accesses, & time to access information in the user manual can be used to evaluate the sub-factors of understandability & ease of learning. The number of incorrect selections & the number of negative comments or gestures can be used to evaluate the operability & ease of communication of the software.
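One illustrative way to organize this mapping in code is sketched below; the measure names & values are hypothetical, & the grouping simply mirrors the examples given in the paragraph above.

```python
# Grouping of collected measures under the four usability sub-factors,
# following the examples given above.  Each list names the measures a
# tester would aggregate when evaluating that sub-factor.
subfactor_measures = {
    "understandability": ["time_to_complete_task", "user_manual_accesses",
                          "time_to_access_manual_info"],
    "ease_of_learning":  ["time_to_complete_task", "user_manual_accesses",
                          "time_to_access_manual_info"],
    "operability":       ["incorrect_selections", "negative_comments"],
    "ease_of_communication": ["incorrect_selections", "negative_comments"],
}

# Raw measures from one test session (made-up values).
session = {
    "time_to_complete_task": 95.0,   # seconds, averaged over all tasks
    "user_manual_accesses": 4,
    "time_to_access_manual_info": 30.0,
    "incorrect_selections": 2,
    "negative_comments": 1,
}

# Print the measures relevant to each sub-factor for review.
for subfactor, measures in subfactor_measures.items():
    values = {m: session[m] for m in measures}
    print(subfactor, values)
```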
The raw data should be summarized & then analyzed. For performance data, such as task timings, common descriptive statistics should be calculated, for example, average, median, & standard deviation values. Usability testers need to identify & focus on those tasks that did not meet usability goals & those that presented difficulties to users. Problem areas are then prioritized so that the development team can first work on those deemed the most critical.
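A minimal sketch of this summarization step, using Python's standard statistics module with made-up task timings & an assumed per-task goal, might look as follows:

```python
import statistics

# Task completion times in seconds, one list per task (made-up data),
# and an assumed usability goal (maximum acceptable average time).
timings = {
    "Open an existing document": [10.2, 14.8, 9.5, 12.1],
    "Change the font size in selected sections": [55.0, 72.3, 61.8, 80.4],
}
goal_seconds = {
    "Open an existing document": 15.0,
    "Change the font size in selected sections": 45.0,
}

# Calculate common descriptive statistics and flag tasks that
# did not meet their usability goal.
for task, samples in timings.items():
    avg = statistics.mean(samples)
    med = statistics.median(samples)
    sd = statistics.stdev(samples)
    status = "OK" if avg <= goal_seconds[task] else "MISSED GOAL"
    print(f"{task}: mean={avg:.1f}s median={med:.1f}s stdev={sd:.1f}s [{status}]")
```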
As a result of the usability tests, all the analyzed data needs to be used to make recommendations for actions. In this phase of usability testing, designers with a knowledge of user-centered design & human factors staff with knowledge of human-computer interaction can work as part of the recommendation team. A final report is developed & distributed to management & the technical staff involved in the project. In some cases the usability testing team may also make a presentation. When the project is complete & the usability requirements are satisfied, the usability data is to be stored & used as benchmark data for subsequent releases & similar projects.
Usability testing is an important aspect of quality control. It is one of the procedures we can use as testers to evaluate our product to ensure that it meets the user requirements on a fundamental level. Setting up a usability program to implement all the types of usability tests described here is costly. To support usability testing, an organization must also be committed to including usability requirements in the requirements specification, which is not always the case.
There are other approaches to usability evaluation that are less expensive, like preparing simple prototypes & questionnaires early in the life cycle for volunteer users & instrumenting the source code to collect usage information. Finally, each software product can be equipped with a “complaint” facility that allows users to provide feedback to developers about problems that occur. None of these approaches works as well as full usability testing; in many cases the data is collected after the software has been in operation & it is not possible to make changes or improve quality.
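As a rough sketch of the instrumentation & “complaint” ideas (every function name, log format, & file name here is hypothetical), a product could log feature usage & capture free-text feedback like this:

```python
import functools
import logging
import time

# Usage information is appended to a local log file for later analysis.
logging.basicConfig(filename="usage.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def track_usage(func):
    """Record each invocation of a feature and its duration, so usage
    patterns can be analysed without a formal usability lab."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logging.info("feature=%s duration=%.3fs", func.__name__, elapsed)
    return wrapper

def file_complaint(text: str) -> None:
    """A bare-bones 'complaint' facility: append user feedback to the log."""
    logging.info("complaint=%r", text)

@track_usage
def print_document():
    pass  # placeholder for the real feature

print_document()
file_complaint("The print dialog is confusing.")
```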