An Insight to Metrics Used during Software Validation Testing
Measures related to the software validation testing activity are essential for improving the effectiveness of this activity. A good estimate of the software validation-testing task increases the likelihood that the company's product will be released on time and with an acceptable level of quality.
Software validation testing activities are critical to the successful launch of a new product. An effective software validation effort can lead to lower support costs, more satisfied customers, and more efficient use of scarce software engineering resources (fewer bugs mean less time is required for bug fixing; therefore, more time is available to work on the next product).
On the other hand, an unsuccessful software validation testing effort can result in the release of a product with a significant number of bugs. Customers will be dissatisfied with the product (especially if a competitor's product has significantly fewer bugs), and scarce software engineering and customer support resources will spend most of their time fixing bugs and dealing with irritated customers.
Where companies employ ISTQB-certified experts such as Technical Test Analysts, these experts focus their attention on planning and managing the software validation testing process to ensure that it remains as effective as possible. In any software development project, they seek answers to the following eight questions about validation testing:
Q-1: How many tests do we need?
Q-2: What resources are needed to develop the required tests?
Q-3: How much time is required to execute the tests?
Q-4: How much time do we need to find bugs, fix them, and verify that they are successfully fixed?
Q-5: How much time have we actually spent testing the product?
Q-6: How much of the code is actually being exercised?
Q-7: Are we testing all features of the product?
Q-8: How many defects can we find in each software baseline?
How many tests do we need?
This question can have a significant impact on the cost and schedule of any project. Experts observe that, "While there is no magic way to select a sufficient set of practical tests in a software testing effort, the objective is to test reasonably completely all valid classes for normal operation and to exhaustively test unusual behavior and illegal conditions".
The question of how many tests are needed must be answered early on so adequate resources (people and equipment) can be arranged and accurate and realistic schedules developed.
The test estimate measure reflects the number of tests needed based on factors such as the following:
a) Features and functions defined in the SRS and related documents;
b) Act-Like-A-Customer (ALAC) testing;
c) Achieving a test coverage goal;
d) Achieving a software reliability goal.
Test estimates should be based on and tied to specific sections of the SRS and other related documents. Starting with the SRS, review each requirement and, based on past experience, estimate the number of tests needed to determine whether the software has met the requirement. In addition to tests that are tied directly to the SRS, we should also develop a reasonable number of ALAC tests that are representative of customer use of the product.
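As a minimal sketch of how such a requirement-driven tally might be kept, assuming hypothetical SRS section IDs and per-requirement counts (none of these figures come from the article):

```python
# Hypothetical per-requirement test estimates from an SRS review.
# Section IDs and counts are illustrative assumptions only.
srs_estimates = {
    "SRS-3.1 login": 6,
    "SRS-3.2 report export": 9,
    "SRS-3.3 user admin": 5,
}

# ALAC tests are estimated separately, on top of requirement-driven tests.
alac_tests = 8

total_tests = sum(srs_estimates.values()) + alac_tests
print(f"Estimated tests: {total_tests}")  # Estimated tests: 28
```

Keeping the estimate keyed to SRS sections makes the postmortem comparison of estimated versus actual tests straightforward later in the project.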
Act-like-a-customer (ALAC) software testing is a method in which tests are developed based on knowledge of how customers use the software product. ALAC tests are based on the principle that complex software products have many bugs. ALAC tests allow us to focus on finding those bugs that customers are most likely to find. Acting like a customer also means developing tests that:
a) Do it wrong;
b) Use wrong or illegal combinations of input;
c) Do not do enough;
d) Do nothing;
e) Do too much.
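The five ALAC categories above can be illustrated with a small sketch. The function under test, its name, and its 1..100 business rule are hypothetical assumptions introduced only for this example:

```python
def parse_quantity(text):
    """Hypothetical function under test: parse an order quantity (1..100)."""
    value = int(text)          # raises ValueError on non-numeric input
    if not 1 <= value <= 100:  # assumed business rule: 1..100 items
        raise ValueError("quantity out of range")
    return value

def expect_error(fn, *args):
    """Return True if the call raises, mirroring an ALAC misuse check."""
    try:
        fn(*args)
        return False
    except Exception:
        return True

# One ALAC-style case per category above:
assert expect_error(parse_quantity, "abc")   # do it wrong
assert expect_error(parse_quantity, "-5")    # wrong/illegal input
assert expect_error(parse_quantity, "")      # do nothing / not enough
assert expect_error(parse_quantity, "999")   # do too much
assert parse_quantity("42") == 42            # normal customer use still works
```

The point of such tests is not coverage of the specification but coverage of plausible customer misuse.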
The ALAC software testing method is generally used for validation testing, regression testing, and functional acceptance testing to verify that the software meets the SRS.
The test estimate should also reflect the complexity of tests as well as manual versus automated tests.
Developing tests should be viewed as an investment. The time and effort required to identify, write, and debug a test can be more than recouped based on the costs required to find and fix bugs once a product has been released. Building up a large suite of good regression tests is like having money in the bank.
Like most estimating tasks, the first time we make a test estimate we may find that our estimate and the actual number of tests developed are very different. At the end of a project, we must do a postmortem and understand why there was a discrepancy. We must try to learn from past experience, and our estimates will continually get better.
The test estimate metric is measured in units that are the number of tests to be written. To improve our ability to accurately estimate the magnitude of the software validation-testing task, we use this measure to compare the estimated number of tests to the actual number written.
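One simple way to track this comparison at the project postmortem is a percentage deviation between estimated and actual test counts; the figures below are illustrative, not from the article:

```python
# Illustrative postmortem figures for the test estimate metric.
estimated_tests = 120
actual_tests = 150

# Positive means we under-estimated; negative means we over-estimated.
deviation_pct = (actual_tests - estimated_tests) / estimated_tests * 100
print(f"Estimate deviation: {deviation_pct:+.0f}%")  # Estimate deviation: +25%
```

Recording this deviation project after project is what makes the estimates "continually get better".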
How much time should we spend on test development?
Once we have made an estimate of the number of tests required, the next question arises: "How much effort is needed to develop the required tests?"
The test development time in our software testing effort includes the time required to develop a first draft of a test, to debug the test, and to revise the test.
The test development time metric should reflect the relative complexity of tests as well as manual versus automated tests.
The units of the test development time metric are person-hours per test. To improve our ability to accurately estimate the magnitude of the validation testing task, we use this measure to compare the estimated time required to develop, debug, and revise tests with the actual time required.
Once we have estimated the number of tests required and the test development time, we can then develop a realistic schedule for the software validation test development activity.
How much time should we spend on test execution?
The test execution time metric is an estimate of the time required to execute the tests. The test execution time should reflect the complexity of the tests as well as manual versus automated tests. We can develop an average execution time for automated tests, manual tests, complex tests, and simple tests. We use these averages to determine the amount of time required to execute all the tests.
This estimate does not include time for regression testing required to verify bug fixes made during software validation testing. As a rule of thumb, we allow an additional 25% to 50% of the total test execution time for regression testing, depending on factors such as inspections held, amount of new code versus reused or modified code, amount of unit and integration testing performed, and so on.
The units of the test execution time metric are person-hours per test. To improve our ability to accurately estimate the magnitude of the validation-testing task, we use this measure to compare the estimated time required to execute the tests with the actual time required.
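A rough sketch of this calculation, using assumed per-category average execution times and test counts, and applying the 25% to 50% regression rule of thumb from above:

```python
# Assumed average execution times (person-hours per test) and test counts.
avg_hours = {"automated": 0.25, "manual": 1.5}
test_counts = {"automated": 80, "manual": 40}

base_hours = sum(avg_hours[k] * test_counts[k] for k in test_counts)  # 80.0

# Rule of thumb: allow an extra 25%-50% for regression testing.
low_total = base_hours * 1.25
high_total = base_hours * 1.50
print(f"Execution estimate: {base_hours:.0f} h base, "
      f"{low_total:.0f}-{high_total:.0f} h with regression allowance")
```

Dividing the resulting range by the number of available testers gives a first cut at the calendar schedule.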
We use the test execution time estimate, together with the test estimate and the number of resources available (people and equipment), to develop the software validation testing schedule. The most important point to remember is that we must allow adequate time for regression testing in our overall plan for software testing.
How much time is required to find, fix, and verify bugs?
The find & fix cycle time measure includes the time required to perform the following activities:
1) Find a potential bug by executing a test;
2) Submit a problem report to the software-engineering group;
3) Investigate the problem report;
4) Determine corrective action;
5) Perform root cause analysis;
6) Test the correction locally;
7) Conduct a mini-code inspection on changed modules;
8) Incorporate corrective action into a new baseline;
9) Release the new baseline to the software validation team;
10) Perform regression testing to verify that the reported problem has been fixed and that the fix has not introduced new problems.
The units of the find/fix cycle time metric are person-hours per software problem report (SPR). We use this measure to help justify increasing the amount of effort spent on prevention and detection activities and to compute the cost of quality. This measure represents activities that fall into the nonconformance category.
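To make the nonconformance cost concrete, the activities in the cycle above can be summed per SPR and scaled by the SPR count for a release. Every number below is an illustrative assumption:

```python
# Assumed person-hours for each activity in the find/fix cycle.
cycle_hours = {
    "find via test execution": 2.0,
    "submit problem report": 0.5,
    "investigate report": 3.0,
    "determine corrective action": 4.0,
    "root cause analysis": 1.5,
    "test correction locally": 1.0,
    "mini-code inspection": 1.0,
    "build and release new baseline": 0.5,
    "regression verification": 2.5,
}

hours_per_spr = sum(cycle_hours.values())   # 16.0 person-hours per SPR
sprs_this_release = 40                      # assumed SPR count for a release
nonconformance_hours = hours_per_spr * sprs_this_release
print(f"{hours_per_spr} person-hours/SPR, "
      f"{nonconformance_hours} total nonconformance hours")
```

Numbers like these are what make the case for spending more effort on prevention and early detection.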
How much time has been spent actually using and testing the system?
This metric is a measure of cumulative testing time. Its units of measure are test hours. This measure is used to compute the growth of software reliability.
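The article does not specify which reliability growth model is used, but one common choice is Musa's basic execution-time model, in which failure intensity decays exponentially with cumulative test hours. The parameters below are purely illustrative assumptions:

```python
import math

# Musa's basic execution-time model (one common reliability growth model;
# the article does not name a specific model). Parameters are assumed.
lambda0 = 5.0   # assumed initial failure intensity (failures per test hour)
nu0 = 200.0     # assumed total expected failures over the product's life

def failure_intensity(test_hours):
    """Failure intensity after the given cumulative test hours."""
    return lambda0 * math.exp(-(lambda0 / nu0) * test_hours)

# As cumulative testing time grows, observed failure intensity should fall.
for t in (0, 40, 80, 120):
    print(f"{t:3d} test hours -> {failure_intensity(t):.2f} failures/hour")
```

Plotting actual failures per test hour against this kind of curve is how cumulative testing time is turned into a statement about reliability growth.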