Selection of Suitable Software Testing Technique after a Good Test Case Design
After spending several hours perfecting our test case designs, a big question arises: "Which testing technique should we use?" The best answer is: it depends!
There is no established consensus on which technique is the most effective. The choice depends on the circumstances, including the testers' experience and the nature of the object under test.
With regard to the software testing engineer's experience, it is evident that a test case design technique that we know well and have used many times on similar occasions is a good choice. All things being equal, there is no need to throw old techniques overboard.
Despite the general feeling that everything is changing fast, techniques do not usually change overnight. On the other hand, we need to be aware of new research and of new techniques, in both development and testing, becoming available from time to time.
A little further from the software testing engineer's direct choice is the choice guided by risk analysis. Certain techniques are sufficient for low-risk products, whereas other techniques should be used for products or areas with higher risk exposure. This is especially the case when we are selecting between structural or white-box techniques.

Even further away from the software testing engineer, the choice of test techniques may be dictated by customer requirements, typically formulated in the contract. Such constraints tend to be included in contracts for high-risk products. The same may apply to development projects contracted between organizations with a higher level of maturity. When test case design techniques are stipulated in a contract, the person responsible for testing should have had the opportunity to suggest and accept the choices. Finally, the choice of test case design techniques can be guided, or even dictated, by applicable regulatory standards.
Expert’s Advice on Choosing the Best Testing Techniques:
No firm research conclusions exist about the rank in effectiveness of the functional or black-box software testing techniques. Most of the research that has been performed is very academic and not terribly useful in “the real testing world.” One conclusion that seems to have been reached is: There is no “best” technique. The “best” depends on the nature of the product. We do, however, know with certainty that the usage of some technique is better than none, and that a combination of techniques is better than just one technique.
We also know that the use of techniques supports systematic and meticulous work and that techniques are good for finding possible failures. Using test case design techniques means that they may be repeated by others with approximately the same result, and that we are able to explain our test cases.
There is no excuse for not using some techniques.
The well-known testing guru G. J. Myers provides the following strategies for applying these techniques:
# Begin with the cause-and-effect graphing technique when the specification contains combinations of input conditions
# Apply boundary value analysis to both inputs and outputs
# For both inputs and outputs, define valid and invalid equivalence classes
# Round off using error guessing
# Add sufficient test cases using white-box techniques if the completion criteria have not yet been reached (provided this is possible)
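To make the second and third steps above concrete, here is a minimal sketch of equivalence partitioning and boundary value analysis in Python. The function `accept_age` and its valid range of 18–65 are hypothetical, invented purely for illustration; the technique, not the function, is the point.

```python
def accept_age(age: int) -> bool:
    """Hypothetical system under test: valid ages are 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per class.
# Classes: below the valid range, inside it, and above it.
equivalence_cases = [(10, False), (40, True), (70, False)]

# Boundary value analysis: values at and immediately adjacent to
# each boundary of the valid range.
boundary_cases = [(17, False), (18, True), (19, True),
                  (64, True), (65, True), (66, False)]

for value, expected in equivalence_cases + boundary_cases:
    assert accept_age(value) == expected, f"unexpected result for age {value}"
```

Note how few test cases the two techniques require: three representatives cover all equivalence classes, and six values exercise both boundaries, instead of testing every possible age.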
Research is being conducted into the effectiveness of different software testing techniques. Another software testing expert, Reid, has studied the techniques of equivalence partitioning, boundary value analysis, and random testing on real avionics code. Based on all inputs for the techniques, he concludes that boundary value analysis is the most effective, at 79% effectiveness, whereas equivalence partitioning reached only 33%. Reid also concludes that some faults are difficult to find even with these techniques.