During formal validation, the set of tests run during informal validation is run again. No new tests are added at this stage.
Entrance Criteria for Formal Validation Testing:
Software development must be completed by this stage.
The test plan must have been reviewed, approved, and placed under document control.
Necessary requirement inspection has been performed on the SRS.
Design inspections have been performed on the Software Design Description (SDD).
Necessary code inspections have been performed on all "critical modules".
All test scripts have been completed, and the software validation test procedure document has been reviewed, approved, and placed under document control.
Selected test scripts have been reviewed, approved, and placed under document control.
All test scripts have been executed at least once.
CM tools must be in place, and all source code must be under configuration control.
Software problem reporting procedures must be in place.
Validation testing completion criteria have been developed, reviewed, and approved.
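A checklist like the entrance criteria above can be verified mechanically before formal validation begins. The following is a minimal sketch in Python; the criterion names are illustrative, not taken from any standard:

```python
# Entrance-criteria checklist for formal validation (illustrative names).
ENTRANCE_CRITERIA = [
    "development_complete",
    "test_plan_approved",
    "srs_inspected",
    "sdd_inspected",
    "critical_modules_inspected",
    "test_procedure_approved",
    "scripts_executed_once",
    "source_under_cm",
    "spr_process_in_place",
    "completion_criteria_approved",
]

def ready_for_formal_validation(status: dict) -> list:
    """Return the list of criteria not yet satisfied (empty list = ready)."""
    return [c for c in ENTRANCE_CRITERIA if not status.get(c, False)]

status = {c: True for c in ENTRANCE_CRITERIA}
status["source_under_cm"] = False          # e.g., code not yet under CM
print(ready_for_formal_validation(status))  # ['source_under_cm']
```

Any non-empty result identifies exactly which criteria still block the start of formal validation.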
Process of Formal Validation:
The same tests that were run during informal validation are executed again and the results are recorded. A Software Problem Report (SPR) is documented for each test that fails, and SPR tracking is performed. The status of every SPR is recorded, e.g., Open, Fixed, Verified, Deferred, or Not a Bug. For each bug fixed, the SPR identifies the modules that were changed to fix it.
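The SPR statuses described above (Open, Fixed, Verified, Deferred, Not a Bug) lend themselves to a small tracking record. A hypothetical sketch; the field names are assumptions:

```python
from dataclasses import dataclass, field

# Statuses named in the text; any other value is rejected.
VALID_STATUSES = {"Open", "Fixed", "Verified", "Deferred", "Not a Bug"}

@dataclass
class SPR:
    spr_id: int
    description: str
    status: str = "Open"
    # Modules changed to fix the bug, recorded per the process above.
    changed_modules: list = field(default_factory=list)

    def set_status(self, new_status: str) -> None:
        if new_status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status

spr = SPR(101, "Crash on empty input file")
spr.set_status("Fixed")
spr.changed_modules.append("parser.c")
print(spr.status)  # Fixed
```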
Baseline change assessment is used to ensure only modules that should have changed have changed and no new features have slipped in. Informal code reviews are selectively conducted on changed modules to ensure that new bugs are not being introduced. Time required to find and fix bugs is tracked.
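Baseline change assessment can be sketched by comparing content hashes of the modules in two baselines: anything that changed outside the expected set, or any module that appeared out of nowhere, gets flagged. The function and module names here are illustrative:

```python
import hashlib

def module_hashes(files: dict) -> dict:
    """Map module name -> SHA-256 of its contents (contents given as bytes)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def assess_baseline(old: dict, new: dict, expected_changes: set) -> dict:
    """Flag modules that changed unexpectedly, and modules new to the baseline."""
    changed = {m for m in old if m in new and old[m] != new[m]}
    added = set(new) - set(old)
    return {
        "unexpected_changes": changed - expected_changes,
        "new_modules": added,  # possible sign of features slipping in
    }

old = module_hashes({"a.c": b"v1", "b.c": b"v1"})
new = module_hashes({"a.c": b"v2", "b.c": b"v1", "c.c": b"v1"})
print(assess_baseline(old, new, expected_changes={"a.c"}))
# {'unexpected_changes': set(), 'new_modules': {'c.c'}}
```

Here only `a.c` was supposed to change for a bug fix, so the unexplained new module `c.c` is exactly what this check is meant to surface.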
Regression testing is performed with the help of guidelines such as:
1) Use complexity measures to help determine which modules may need additional testing.
2) Use judgment to decide which tests should be run again.
3) Base decisions on knowledge of the software design and its past history.
4) Track test status, i.e., passed, failed, or not run.
5) Record cumulative hours of actual testing time to track the reliability growth of the software.
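Guidelines 1) through 4) above can be combined into a simple test-selection rule: rerun any test that touches a high-complexity module or that failed last time. A hedged sketch; the threshold and field names are assumptions:

```python
def select_regression_tests(tests: list, complexity: dict, threshold: int = 10) -> list:
    """Pick tests that touch high-complexity modules or that failed previously."""
    selected = []
    for t in tests:
        risky = any(complexity.get(m, 0) > threshold for m in t["modules"])
        if risky or t["last_status"] == "failed":
            selected.append(t["name"])
    return selected

# Hypothetical cyclomatic-complexity figures and test records.
complexity = {"parser": 15, "ui": 4}
tests = [
    {"name": "t_parse", "modules": ["parser"], "last_status": "passed"},
    {"name": "t_ui",    "modules": ["ui"],     "last_status": "passed"},
    {"name": "t_save",  "modules": ["io"],     "last_status": "failed"},
]
print(select_regression_tests(tests, complexity))  # ['t_parse', 't_save']
```

In practice the judgment called for in guideline 2) still applies; a rule like this only narrows the candidate list.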
Exit Criteria for Validation Testing:
All test scripts must have been executed for this stage to be complete.
All SPRs must have been satisfactorily resolved, either by fixing the bugs or by deferring them to a later release. There should be unanimous agreement among all concerned on the resolution of each SPR. This criterion can be further refined to state that all high-priority bugs must be fixed, while lower-priority bugs can be handled on a case-by-case basis.
All changes made as a result of SPRs must have been tested.
All related documentation, such as the SRS, SDD, and test documents, must have been updated to reflect changes made during validation testing.
All test reports must have been reviewed and approved.
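The exit criteria above, including the high-priority refinement, can be expressed as a single boolean check. The statuses and field names below are illustrative:

```python
def exit_criteria_met(scripts: list, sprs: list) -> bool:
    """Sketch of the exit check: every script executed, every SPR resolved,
    and every high-priority SPR actually fixed (not merely deferred)."""
    resolved = {"Fixed", "Verified", "Deferred", "Not a Bug"}
    all_run = all(s["executed"] for s in scripts)
    all_resolved = all(p["status"] in resolved for p in sprs)
    high_fixed = all(p["status"] in {"Fixed", "Verified"}
                     for p in sprs if p["priority"] == "high")
    return all_run and all_resolved and high_fixed

scripts = [{"name": "TS-1", "executed": True},
           {"name": "TS-2", "executed": True}]
sprs = [{"id": 7, "priority": "high", "status": "Verified"},
        {"id": 8, "priority": "low",  "status": "Deferred"}]
print(exit_criteria_met(scripts, sprs))  # True
```

A high-priority SPR left in Deferred status would make the check fail, matching the rule that only lower-priority bugs may be handled case by case.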
Test Planning: This activity includes three elements:
1) Test Plan - defines the scope of the work to be performed. It provides detailed information such as:
# How many tests are needed?
# How long will it take to develop such tests?
# How long will it take to execute such tests?
The most important topics addressed by a test plan are:
# Test estimation
# Test development and informal validation
# Validation readiness review and formal validation
# Test completion criteria
2) Test Procedure - a document containing all of the individual test scripts to be executed. The expected results are an integral part of every test script. The Test Procedure document should contain an unexecuted, clean copy of every test so that the tests can be more easily reused.
3) Test Report - a document created as a result of running the test scripts. It is a completed copy of each test script with full documentary evidence that the test was executed. The test report contains a copy of each SPR indicating its resolution, along with a list of open or unresolved SPRs. It also contains details of the regression tests executed for each software baseline.
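The summary data a test report carries (script outcomes plus open SPRs) can be aggregated mechanically. A minimal sketch with invented field names:

```python
from collections import Counter

def summarize_report(script_results: list, sprs: list) -> dict:
    """Summarize a test report: count of script outcomes plus open SPR ids."""
    return {
        "scripts": dict(Counter(r["status"] for r in script_results)),
        "open_sprs": [s["id"] for s in sprs if s["status"] == "Open"],
    }

results = [{"status": "passed"}, {"status": "passed"}, {"status": "failed"}]
sprs = [{"id": 7, "status": "Open"}, {"id": 8, "status": "Fixed"}]
print(summarize_report(results, sprs))
# {'scripts': {'passed': 2, 'failed': 1}, 'open_sprs': [7]}
```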
How do we decide the required number of test cases?
This activity is based upon:
1) Testing all functions and features in the SRS
2) Performing an adequate number of customer-like tests, such as:
# Do incorrectly
# Feed illegal combinations of inputs
# Don't do enough
# Do nothing
# Do too much
3) Achieving some test coverage goal
4) Achieving a software reliability goal
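The customer-like tests in item 2 (doing things incorrectly, feeding illegal input, doing nothing) can be demonstrated against a small hypothetical `parse_age()` function; both the function and its limits are invented for illustration:

```python
def parse_age(text: str) -> int:
    """Hypothetical input handler: parse an age field from user input."""
    age = int(text)              # raises ValueError on non-numeric input
    if not 0 <= age <= 150:      # illustrative legal range
        raise ValueError("age out of range")
    return age

def test_negative_cases() -> None:
    # "Do nothing", "do incorrectly", and illegal values must all be rejected.
    for bad in ["", "abc", "-5", "999"]:
        try:
            parse_age(bad)
        except ValueError:
            continue
        raise AssertionError(f"accepted illegal input: {bad!r}")

test_negative_cases()
print("all negative cases rejected")
```

Each deliberately wrong input must produce a controlled error rather than a crash or a silently wrong answer; that is what these customer-like tests are checking.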
Important considerations when deciding the required number of test cases:
1) Based upon test complexity - it is better to have many small tests than a few large ones.
2) Based upon different platforms - due consideration is given to whether testing must be modified for different platforms or operating systems.
3) Based upon the type of tests, automated or manual - do we have to develop automated tests? It is borne in mind that automated tests take more time to create initially, but once created properly they require very little manual intervention to execute.