Design-Based Test Case Design: An Effective Software Testing Technique
Software design errors and faults can be discovered, and software designs validated, by two techniques:
1) Requirements-based test case design, the primary technique
2) Early design-based test case design, another important technique
In design-based test case design, the information for deriving test cases is taken from the software design documentation.
Design-based test cases focus on the data and process paths within the software structures. Internal interfaces, complex paths or processes, worst-case scenarios, design risks, and weak areas are all explored by constructing specialized test cases and analyzing how the design should handle them and whether it deals with them properly. In the software testing effort, requirements-based and design-based test cases provide specific examples that can be used in design reviews or walkthroughs. Together they provide a comprehensive and rich resource for design-based software testing.
Design Testing Metrics:
Formal design reviews are adopting metrics as a means of quantifying test results and clearly defining expected results. The metrics (measures that are presumed to predict an aspect of software quality) vary greatly. Some are developed from scored questionnaires or checklists. For example, one group of questions may relate to design integrity and system security.
Typical integrity questions include the following:
Q.1: Are security features controlled from independent modules?
Q.2: Is an audit trail of accesses maintained for review or investigation?
Q.3: Are passwords and access keywords blanked out?
Q.4: Does it require changes in multiple programs to defeat the access security?
Each reviewer would answer these questions, and their answers would be graded or scored. Over time, minimum scores are established and used as pass/fail criteria for the integrity metric. Designs that score below the minimum are reworked and subjected to additional review testing before being accepted.
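The scoring described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical scheme in which each "yes" answer earns one point, reviewers' scores are averaged, and the minimum passing score is a threshold set from past experience; the names and threshold value are inventions for the example, not part of any standard.

```python
# Hypothetical scored questionnaire for a design integrity metric.
INTEGRITY_QUESTIONS = [
    "Are security features controlled from independent modules?",
    "Is an audit trail of accesses maintained for review or investigation?",
    "Are passwords and access keywords blanked out?",
    "Does it require changes in multiple programs to defeat the access security?",
]

# Assumed pass/fail threshold, established over time from past reviews.
MIN_PASSING_SCORE = 3.0


def integrity_score(reviewer_answers):
    """Average number of 'yes' (True) answers across all reviewers."""
    scores = [sum(1 for answer in answers if answer) for answers in reviewer_answers]
    return sum(scores) / len(scores)


def passes_integrity_metric(reviewer_answers):
    """Apply the minimum score as a pass/fail criterion."""
    return integrity_score(reviewer_answers) >= MIN_PASSING_SCORE


# Three reviewers answer the four questions above (True = yes).
answers = [
    [True, True, True, False],
    [True, True, False, True],
    [True, True, True, True],
]
print(passes_integrity_metric(answers))  # scores 3, 3, 4 -> average 3.33, passes
```

A design scoring below `MIN_PASSING_SCORE` would be reworked and re-reviewed, as the text describes.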
Another example of a metric-based design test that can be used effectively is a test for system maintainability. An important consideration in evaluating the quality of any proposed design is the ease with which it can be maintained or changed once the system becomes operational. Maintainability is largely a function of design. Problems or deficiencies that produce poor maintainability must be discovered during design reviews; it is usually too late to correct them further along in the software life cycle.
To test maintainability, we develop a list of likely or plausible requirements changes (perhaps in conjunction with the requirements review). Essentially, we want to describe in advance which aspects of the system we perceive as most likely to change in the future. During the design review, a sample of these likely changes is selected at random, and the reviewers walk through the system alterations each would require, estimating how many programs, files, or data elements would be affected and how many program statements would have to be added or changed. Metric values for these estimates are again set on the basis of past experience. Passing the test might require that 80 percent of the changes be accomplished by changes to single programs and that the average predicted effort for a change be less than one man-week. Designs that score below these criteria on the simulated changes are returned, reworked, and re-subjected to the maintainability test before being accepted. This is just one example of an entire class of metrics that can be built around what-if questions and used to test any quality attribute of interest while the system is still being designed.
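The two pass criteria above can be sketched as a simple check. This is a minimal illustration, assuming each simulated change is recorded with the number of programs it touches and an estimated effort in man-weeks; the record fields and sample numbers are hypothetical.

```python
# Hypothetical walkthrough estimates for a sample of simulated changes.
simulated_changes = [
    {"programs_affected": 1, "effort_man_weeks": 0.5},
    {"programs_affected": 1, "effort_man_weeks": 0.8},
    {"programs_affected": 2, "effort_man_weeks": 1.5},
    {"programs_affected": 1, "effort_man_weeks": 0.4},
    {"programs_affected": 1, "effort_man_weeks": 0.6},
]


def passes_maintainability_test(changes,
                                single_program_ratio=0.80,
                                max_avg_effort=1.0):
    """Apply the two criteria from the text: at least 80 percent of
    changes confined to a single program, and average predicted effort
    under one man-week."""
    single = sum(1 for c in changes if c["programs_affected"] == 1)
    avg_effort = sum(c["effort_man_weeks"] for c in changes) / len(changes)
    return (single / len(changes) >= single_program_ratio
            and avg_effort < max_avg_effort)


# 4 of 5 changes (80%) touch one program; average effort is 0.76 man-weeks.
print(passes_maintainability_test(simulated_changes))  # -> True
```

A design failing either criterion would be returned for rework, just as with the integrity metric.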
Design for Testing:
In addition to the testing activities we perform to review and test the design, another important consideration is the features in the design that simplify or support testing. Part of good engineering is building something in a way that simplifies the task of verifying that it is built properly. Hardware engineers routinely provide test points or probes to permit electronic circuits to be tested at intermediate stages. In the same way, complex software must be designed with “windows” or hooks to permit the testers to “see” how it operates and verify correct behavior.
Providing such windows and reviewing designs to ensure their testability is part of the overall goal of designing for testability. With complex designs, testing is simply not effective unless the software has been designed for testing. Testers must consider how they are going to test the system and what they will require early enough in the design process so that the test requirements can be met.
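A software "window" of the kind described above can be sketched as an optional probe hook. This is a minimal illustration, assuming a hypothetical two-stage pricing pipeline; the class, stage names, and rates are inventions for the example, and the probe callback plays the role of a hardware test point.

```python
class OrderPipeline:
    """Hypothetical pipeline with a built-in test point ("window")."""

    def __init__(self, probe=None):
        # probe is a test hook called with (stage_name, value) after each
        # intermediate stage; in production it defaults to a no-op.
        self._probe = probe or (lambda stage, value: None)

    def process(self, order_total):
        discounted = round(order_total * 0.9, 2)   # stage 1: apply discount
        self._probe("discounted", discounted)
        with_tax = round(discounted * 1.2, 2)      # stage 2: add tax
        self._probe("with_tax", with_tax)
        return with_tax


# In production the pipeline runs silently; in tests, the probe lets the
# tester verify intermediate values, not just the final result.
observed = []
pipeline = OrderPipeline(probe=lambda stage, value: observed.append((stage, value)))
result = pipeline.process(100.0)
print(result, observed)  # -> 108.0 [('discounted', 90.0), ('with_tax', 108.0)]
```

The design choice is that the hook is specified during design, so the tester's requirement (visibility into each stage) is met without restructuring the software later.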
Design Testing Tools and Aids:
Automated tools and aids to support design testing play an important role in a number of organizations. As in requirements testing, our major testing technique is the formal review; however, there is a greater opportunity to use automated aids in support of design reviews.
Software testing tools in common use include design simulators (such as database and response-time simulators); system charters that diagram or represent system logic; consistency checkers that analyze decision tables representing design logic and determine whether they are complete and consistent; and database dictionaries and analyzers that record data element definitions, analyze each usage of data, and report on where each data element is used and whether the routine inputs, uses, modifies, or outputs it.
None of these software testing tools performs direct testing. Instead, they serve to organize and index information about the system being designed so that it may be reviewed more thoroughly and effectively. In the case of the simulators, they permit simplified models to be represented and experimentation to take place, which may be especially helpful in answering the question of whether the design solution is the right choice. All the tools assist in determining that the design is complete and will fulfill the stated requirements.
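The completeness half of such a decision-table check can be sketched briefly. This is a minimal illustration, assuming a table is represented as a mapping from condition tuples (True/False) to an action; the table contents are hypothetical, and because a dict cannot hold two conflicting rules for the same conditions, the sketch checks completeness only, reporting any uncovered condition combinations.

```python
from itertools import product


def check_decision_table(table, num_conditions):
    """Return the condition combinations the table fails to cover."""
    all_combos = set(product([True, False], repeat=num_conditions))
    return sorted(all_combos - set(table))


# Hypothetical two-condition table that forgets the (False, True) rule.
table = {
    (True, True): "approve",
    (True, False): "review",
    (False, False): "reject",
}
missing = check_decision_table(table, 2)
print(missing)  # -> [(False, True)]
```

A real consistency checker would also flag contradictory or redundant rules; this sketch shows only how mechanically enumerating the condition space exposes gaps a manual review might miss.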
About the Author: An expert on R&D, online training, and publishing; he holds an M.Tech. (Honours) and has been a part of the STG team since its inception.