Approaches to Various Levels of Test Coverage used by Software Testing Practitioners
Coverage is a generic term referring to measurement and completion criteria in software testing. Test coverage is the degree, expressed as a percentage, to which the coverage items have been exercised by a particular test.
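The percentage definition above can be sketched as a small calculation. This is a hypothetical illustration; the requirement IDs and the `coverage_percent` helper are invented for the example, not part of any real tool.

```python
# Hypothetical sketch: coverage = (coverage items exercised / total items) * 100.
def coverage_percent(exercised_items, all_items):
    """Return test coverage as a percentage of coverage items exercised."""
    all_items = set(all_items)
    if not all_items:
        return 0.0
    return 100.0 * len(set(exercised_items) & all_items) / len(all_items)

# Invented example: 4 coverage items, 2 exercised by the test run.
requirements = {"R1", "R2", "R3", "R4"}
exercised = {"R1", "R3"}
print(coverage_percent(exercised, requirements))  # 50.0
```

The same formula applies whatever the coverage item is: requirements, statements, branches, or use cases.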
Various approaches to system-level coverage and code-level coverage are described below.
System Test Coverage Strategies:
Software test engineers usually use the following three approaches:
1) Major features first: Create tests which will exercise all the principal features first, to give maximum coverage. This will probably be the same as the regression test. Then exercise each feature in some depth.
2) Major use cases first: As with major features. This requires that you know both the user profile and the usage profile. The test must be end-to-end, such that some real-world user objective is reached.
3) Major inputs and outputs: If the application is I/O dominated, then identify the most common kinds of inputs and outputs and create tests to exercise them.
These can be followed by:
1) All GUI features
2) All functions with variations
3) All input/output combinations
Remember that various test management tools will give you coverage metrics showing the proportion of requirements "covered" by a test, test cases run, and so on. These figures are of course purely nominal; just because a tester has associated a test case with a requirement doesn't mean that the requirement has been adequately covered. That's one of the things you as test manager must check.
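The caveat above can be made concrete. In this hypothetical sketch (requirement IDs and test-case links are invented), a tool counts a requirement as "covered" the moment any test case is linked to it, which says nothing about depth:

```python
# Hypothetical traceability data: requirement -> linked test cases.
links = {
    "REQ-1": ["TC-10", "TC-11"],
    "REQ-2": ["TC-12"],   # one linked test -- "covered", but how well?
    "REQ-3": [],          # no test linked at all
}

# A typical tool metric: a requirement is "covered" if any test is linked.
tool_covered = sorted(req for req, tcs in links.items() if tcs)
print(tool_covered)                      # ['REQ-1', 'REQ-2']
print(100 * len(tool_covered) // len(links))  # 66 -- percent "covered"
```

The tool would report 66% requirements coverage here, yet the single test against REQ-2 may exercise only one scenario; adequacy still has to be judged by a person.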
Strategies for Code Level Test Coverage Estimation
At the code level, test coverage is intimately related to test type, and there is no independent criterion against which the various types of test and hence the various types of coverage can be measured. In the test plan you need to establish the types of test and the coverage to be obtained therefrom. Criteria range from the common but essential all-statements-executed to the near-impossible all-paths-executed.
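The gap between those two criteria is easy to see with a toy model. Assuming a unit containing n sequential, independent if-statements (an invented illustration, not a real program), a single test that makes every condition true executes every statement, yet the unit has 2**n distinct execution paths:

```python
# Illustrative sketch: n independent sequential if-statements give 2**n
# execution paths, while one all-conditions-true test can already achieve
# all-statements coverage.
def count_paths(n_branches):
    """Number of distinct paths through n sequential independent branches."""
    return 2 ** n_branches

for n in (1, 10, 30):
    print(n, count_paths(n))  # 30 branches already yield over a billion paths
```

This is why all-statements-executed is a routine minimum while all-paths-executed is near-impossible for anything but trivial code.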
Before estimating the amount of code-level test coverage to aim for, it is better to settle the following questions:
1) Do any parts of the system have high criticality? Remember that this excludes safety-critical parts, all of which must receive the highest possible level of coverage.
2) Which features and code areas are likely to be the most-highly used?
3) Which units are the most complex?
4) Which units have been changed the most often since being submitted to configuration management?
5) Which units have generated the most bugs so far? This will clearly change; if the test plan is being written at the high-level design stage, then, presumably, no units will have been coded so far; however, as the test plan is revised later in the project cycle, experience will suggest that some units are bug-prone.
6) What sort of inputs should the system withstand?
7) What does the history of similar systems suggest? This can be derived from bug reports; if they can be related either to features or to particular parts of the code, it will be possible to build a profile of the more problematic areas and use it as a guide to the type and quantity of tests to be run.
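The factors above can be combined into a simple risk score per unit to decide where coverage should be deepest. This is a hypothetical sketch: the unit names, metric values, and weights are all invented; in practice the weights would come from project experience and the metrics from your configuration management and bug-tracking systems.

```python
# Hypothetical risk scoring: weight the questions above per unit.
# All names, metrics (0-9 scales), and weights are invented for illustration.
WEIGHTS = {"criticality": 3, "usage": 2, "complexity": 2, "changes": 1, "bugs": 2}

units = {
    "payment":   {"criticality": 5, "usage": 5, "complexity": 4, "changes": 7, "bugs": 9},
    "reporting": {"criticality": 2, "usage": 3, "complexity": 2, "changes": 1, "bugs": 2},
}

def risk_score(metrics):
    """Weighted sum of the risk factors for one unit."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Units needing the most thorough coverage come first.
ranked = sorted(units, key=lambda u: risk_score(units[u]), reverse=True)
print(ranked)  # ['payment', 'reporting']
```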
Once these issues have been settled, it will be easier to relate the test types to the particular software bugs they are meant to find.