An Insight into the Concurrent Development-Validation Test Model

The Concurrent Development approach refers to performing informal validation while development is still in progress. This approach provides an opportunity for validation tests to be developed and debugged early in the software development process, and it gives early feedback to the software engineers. The key benefit of this approach is that formal validation becomes less eventful, since most of the problems have already been found and fixed.

Formal Validation Testing Review:

  • During informal validation, developers are free to make any changes deemed necessary to comply with the requirements of the SRS.
  • During informal validation, QA runs the tests and revises them as needed to comply with the requirements of the SRS.
  • During formal validation, the only changes that can be made are bug fixes in response to bugs reported during formal validation testing. No new features can be added during this stage.

During formal validation, the set of tests run during informal validation is run again. No new tests are added during this stage.

Entrance Criteria for Formal Validation Testing:

  • Software development must be complete by this stage.
  • The test plan must have been reviewed, approved, and placed under document control.
  • Requirement inspections must have been performed on the SRS.
  • Design inspections must have been performed on the Software Design Description (SDD).
  • Code inspections must have been performed on all critical modules.
  • All test scripts must have been completed, and the software validation test procedure document must have been reviewed, approved, and placed under document control.
  • Selected test scripts must have been reviewed, approved, and placed under document control.
  • All test scripts must have been executed at least once.
  • Configuration management (CM) tools must be in place, and all source code must be under configuration control.
  • Software problem reporting procedures must be in place.
  • Validation testing completion criteria must have been developed, reviewed, and approved.

Process of Formal Validation:

The same tests that were run during informal validation are executed again and the results are recorded. A Software Problem Report (SPR) is documented for every test that failed, and SPR tracking is performed: the status of each SPR is recorded as Open, Fixed, Verified, Deferred, or Not a Bug. For each bug fixed, the SPR identifies the modules that were changed to fix it.

Baseline change assessment is used to ensure that only the modules that should have changed have changed, and that no new features have slipped in. Informal code reviews are selectively conducted on changed modules to ensure that new bugs are not being introduced. The time required to find and fix bugs is tracked.
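The SPR bookkeeping and baseline change assessment described above can be sketched as follows. This is a minimal illustration, not the article's actual tooling; the data model, statuses, and module names are assumptions for the example.

```python
# Hypothetical sketch: every module changed in a new baseline must be
# accounted for by some fixed SPR, otherwise a feature may have slipped in.

SPR_STATUSES = {"Open", "Fixed", "Verified", "Deferred", "Not a Bug"}

# Example SPR records (illustrative data, not from the article).
sprs = [
    {"id": "SPR-101", "status": "Fixed", "modules_changed": {"billing.c"}},
    {"id": "SPR-102", "status": "Open", "modules_changed": set()},
]

def unexplained_changes(baseline_diff, sprs):
    """Return modules that changed but are not tied to any fixed SPR."""
    explained = set()
    for spr in sprs:
        if spr["status"] in {"Fixed", "Verified"}:
            explained |= spr["modules_changed"]
    return baseline_diff - explained

# billing.c is explained by SPR-101; report.c is a suspect change.
print(unexplained_changes({"billing.c", "report.c"}, sprs))  # {'report.c'}
```

Any module reported by `unexplained_changes` would trigger the informal code review mentioned above.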

Regression testing is performed with the help of guidelines such as:

1) Use complexity measures to help determine which modules may need additional testing.

2) Use judgment to decide which tests to run again.

3) Base decisions on knowledge of the software design and its past history.

4) Track test status, i.e., passed, failed, or not run.

5) Record cumulative hours of actual testing time to track the reliability growth of the software.
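The bookkeeping in guidelines 1, 4, and 5 can be sketched as below. The complexity numbers, threshold, and test IDs are illustrative assumptions, not values from the article.

```python
# Hypothetical sketch of regression-testing bookkeeping: per-test status
# (passed / failed / not run), cumulative testing hours, and flagging
# high-complexity modules for additional regression tests.

tests = {
    "TC-001": {"status": "passed", "hours": 1.5},
    "TC-002": {"status": "failed", "hours": 0.5},
    "TC-003": {"status": "not run", "hours": 0.0},
}

# Cyclomatic complexity per module (illustrative numbers).
complexity = {"parser": 4, "scheduler": 17, "ui": 9}
COMPLEXITY_THRESHOLD = 10  # modules above this get additional tests

needs_extra_testing = [m for m, c in complexity.items() if c > COMPLEXITY_THRESHOLD]
total_hours = sum(t["hours"] for t in tests.values())
not_run = [tid for tid, t in tests.items() if t["status"] == "not run"]

print(needs_extra_testing)  # ['scheduler']
print(total_hours)          # 2.0
print(not_run)              # ['TC-003']
```

The cumulative-hours figure is what feeds the reliability-growth tracking mentioned in guideline 5.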


Exit Criteria for Validation Testing:

  • All test scripts must have been executed for completion of this stage.
  • All SPRs must have been satisfactorily resolved, either by fixing the bugs or by deferring some of them until a later release. There should be unanimous agreement among all concerned on the resolution of the SPRs. This criterion can be further refined to state that all high-priority bugs must be fixed, while lower-priority bugs may be handled on a case-by-case basis.
  • All changes made as a result of SPRs must have been tested.
  • All related documentation, such as the SRS, SDD, and test documents, must have been updated to reflect changes made during validation testing.
  • All test reports must have been reviewed and approved.

Test Planning: includes three elements:

1) Test Plan – defines the scope of the work to be performed. It answers questions such as:

# How many tests are needed?

# How long will it take to develop such tests?

# How long will it take to execute such tests?


The most important topics addressed by a test plan are:

# Test estimation

# Test development and informal validation

# Validation readiness review and formal validation

# Test completion criteria


2) Test Procedure – a document containing all of the individual test scripts to be executed. The expected results are an integral part of every test script. The Test Procedure document should contain an unexecuted, clean copy of every test so that the tests can be more easily reused.


3) Test Report – a document created as a result of running the test scripts. It is a completed copy of each test script, with full documentary evidence that the test was executed. The test report contains a copy of each SPR indicating its resolution, along with a list of open or unresolved SPRs, and details of the regression tests executed for each software baseline.


How do we decide the required number of test cases?

This activity is based upon:

1) Testing all functions and features in the SRS

2) Performing an adequate number of customer-like tests, for example:

# Do things incorrectly

# Feed in illegal combinations of inputs

# Don't do enough

# Do nothing

# Do too much

3) Achieving some test coverage goal

4) Achieving a software reliability goal
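A test coverage goal (point 3 above) can be checked with a simple requirements-coverage ratio: the fraction of SRS requirements exercised by at least one test case. The requirement IDs, traceability mapping, and target below are illustrative assumptions, not from the article.

```python
# Hypothetical sketch: compute requirements coverage from a
# test-to-requirements traceability mapping.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

test_to_reqs = {
    "TC-001": {"REQ-1", "REQ-2"},
    "TC-002": {"REQ-2", "REQ-3"},
}

# Union of all requirements touched by any test.
covered = set().union(*test_to_reqs.values())
coverage = len(covered & requirements) / len(requirements)

print(f"{coverage:.0%}")  # 75%
```

If the computed ratio falls short of the agreed goal, more test cases are written for the uncovered requirements (here, REQ-4).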


Important Considerations while deciding required number of test cases:

1) Based upon test complexity – it is better to have many small tests than a few large ones.

2) Based upon different platforms – due consideration is given to whether the testing needs to be modified for different platforms or operating systems.

3) Based upon the type of tests, automated or manual – do we need to develop automated tests? While deciding on the type of testing, bear in mind that automated tests take more time to create initially, but once created properly they require much less manual intervention to execute.
