Four Best Policies for Regression Testing as Suggested by Software Testing Experts
A regression testing system is part of an intermediate or expanded testing infrastructure. It consists of a suite of tests, developed gradually as the white-box and black-box tests pass. Once in place, the regression test suites verify that previously implemented functionality continues to work properly after each new code addition to the application.
The following policies are formulated to ensure that the regression system is used effectively to identify code regressions as soon as they are introduced, and that the timely removal of regression defects is properly facilitated.
Policy – 1: The regression system must be configured so that it provides detailed information on results
The regression software testing system should be configured to provide enough information for the developer to review the regression report and investigate the problem without having to execute any further tests. For example, for unit test regressions the results should indicate the type of problem that occurred (unexpected outcome, uncaught runtime exception, etc.), the inputs responsible for uncovering the problem, the unit under test, and the stack trace. The stack trace is especially useful for identifying defects that cause abnormal program termination, as the exact instruction that caused the failure is displayed. In all cases, the regression test results must provide more informative detail than the data tracked by the other components of the infrastructure, and the amount of information should be sufficient to identify the location and the cause of the regression.
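The kind of result record described above can be sketched as follows. This is a minimal illustration, not a real test framework; the `run_regression_case` helper and the `divide` unit are hypothetical names chosen for the example.

```python
# Sketch of a regression runner that records the detail Policy 1 asks for:
# failure type, the inputs that triggered it, the unit under test, and the
# stack trace.
import traceback

def run_regression_case(unit, inputs, expected):
    """Run one regression case and return a detailed result record."""
    record = {"unit": unit.__name__, "inputs": inputs}
    try:
        actual = unit(*inputs)
    except Exception as exc:
        # Uncaught runtime exception: keep the full stack trace so the
        # developer can see the exact instruction that failed.
        record.update(status="exception",
                      error=type(exc).__name__,
                      stack_trace=traceback.format_exc())
    else:
        if actual == expected:
            record.update(status="pass")
        else:
            # Unexpected outcome: record both values for the report.
            record.update(status="unexpected_outcome",
                          expected=expected, actual=actual)
    return record

def divide(a, b):
    return a / b

# This case fails with ZeroDivisionError; the record gives the developer
# everything needed to investigate without rerunning any tests.
result = run_regression_case(divide, (4, 0), None)
print(result["status"], result["error"])
```

A report built from such records satisfies the policy: each entry carries the problem type, the triggering inputs, and the stack trace in one place.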
Policy – 2: Regression Tests must be executed automatically after every build
The earlier that regression problems are detected, the faster, easier, and less costly it is to fix them. Therefore, the regression test execution should be integrated with the automated build system and executed immediately after each build. As a result of this approach, regression defects will be uncovered soon after they are introduced, and they can be fixed before they propagate to other parts of the code.
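The build-then-test integration above can be sketched as a small driver script. The `make` and `pytest` commands here are illustrative assumptions, not a prescription for any particular toolchain; substitute whatever build and test commands the project uses.

```python
# Sketch of Policy 2: run the regression suite immediately after every
# successful build so defects surface as soon as they are introduced.
import subprocess
import sys

def build_and_test(build_cmd=("make",), test_cmd=("pytest", "-q")):
    """Build the project, then run the regression suite.

    Returns True only if both the build and the tests succeed.
    """
    build = subprocess.run(build_cmd)
    if build.returncode != 0:
        print("Build failed; regression tests skipped.")
        return False
    tests = subprocess.run(test_cmd)
    if tests.returncode != 0:
        print("Regression failures detected immediately after this build.")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if build_and_test() else 1)
```

Hooking a script like this into the automated build system means every build produces an up-to-date regression report, so a defect is never more than one build old when it is first noticed.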
Policy – 3: Regression test results must be reviewed at the start of every working day and the test suite should be updated as required
Every failure in the regression test suite should be examined to determine whether the reported problem indicates a defect in the test suite or in the code itself. If the failure is due to a problem in the code, the developers must fix the code immediately, before taking up work on any new functionality.
If the failure indicates a problem with a regression test itself, for example a false positive result, the developers must fix the test immediately so that false positives are not reported in future; in the longer run, recurring false positives desensitize the entire team to all defects that get reported. For example, suppose a test case fails because of an intentional change in the functionality of the code: the expected outcome of the test case should have been modified along with it. The developers responsible for that test case must revise it with the outcome now expected of it.
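The scenario above can be made concrete with a hypothetical example. The `format_price` function and its test are invented for illustration: the function's behaviour was intentionally changed, so the test's expectation must be revised to match the new, intended behaviour rather than left to fail as a false positive.

```python
# Illustration of revising a regression test after an intentional change.
def format_price(value):
    # Intentional functionality change: prices are now returned with a
    # currency prefix (previously the function returned "9.99").
    return f"${value:.2f}"

def test_format_price():
    # Before the change this test expected "9.99"; left unrevised it would
    # fail on every run as a false positive. The revised expectation below
    # matches the new intended behaviour.
    assert format_price(9.99) == "$9.99"

test_format_price()
```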
This review process must take place at the start of every day of software testing activity. Otherwise, team members lose valuable opportunities to learn from their own mistakes. If the team is unable to identify the problem, and the developers and testers do not take immediate action to prevent its recurrence, it is quite possible that developers will repeat similar mistakes in subsequent development. This is because they were not aware of the problem and/or because the process has not yet been improved to prevent this type of mistake.
Policy – 4: Regression test results must be used to assess the deployment readiness of the system
As the project nears release, the project manager and the architect must determine the target ratio of regression test passes to the total number of regression tests executed that must be achieved before the product can be considered ready for deployment.
For example, this ratio could be 95% if the architect has developed a good understanding of the product and the test status, and estimates that approximately 5% of the failures are due to false positive results from the software testing effort.
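The readiness check described above reduces to a simple ratio comparison. A minimal sketch, using the 95% target from the example (the function name is hypothetical):

```python
# Sketch of the Policy 4 deployment-readiness check: compare the regression
# pass ratio against the target agreed by the project manager and architect.
def deployment_ready(passed, executed, target_ratio=0.95):
    """Return True if the pass ratio meets or exceeds the target."""
    if executed == 0:
        return False  # no executed tests means no evidence of readiness
    return passed / executed >= target_ratio

print(deployment_ready(960, 1000))  # 96% >= 95% -> True
print(deployment_ready(940, 1000))  # 94% <  95% -> False
```

In practice the target should be set alongside an accounting of known false positives, so that the gap between the target and 100% genuinely reflects tolerated test-suite noise rather than unexamined code defects.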