No firm research conclusions exist about the relative effectiveness of the functional, or black-box, software testing techniques. Most of the research that has been performed is academic and of limited use in "the real testing world." One conclusion that does seem to have been reached is: there is no single "best" technique. The "best" depends on the nature of the product. We do, however, know with certainty that using some technique is better than using none, and that a combination of techniques is better than any single technique.
We also know that techniques support systematic and meticulous work, and that they are good at exposing potential failures. Using test case design techniques means that our tests can be repeated by others with approximately the same results, and that we can explain how our test cases were derived.
There is no excuse for not using some techniques.
Well-known testing guru Glenford J. Myers suggests the following strategy for applying the techniques:
# Begin with the cause-effect graphing technique when the specification contains combinations of input conditions (see the decision-table sketch after this list)
# Apply boundary value analysis to both inputs and outputs (see the second sketch after this list)
# Identify valid and invalid equivalence classes for both inputs and outputs
# Round off with error guessing
# If the completion criteria have not yet been met, add further test cases using white-box techniques (where possible)
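To make step 1 concrete, here is a minimal sketch in Python, assuming a hypothetical discount_applies rule invented purely for illustration. Cause-effect graphing ultimately yields a decision table: every combination of causes (input conditions) is enumerated and the expected effect recorded, and each row of the table becomes a test case.

```python
from itertools import product

# Hypothetical business rule, used only for illustration: a discount
# applies when the customer is a member AND the total exceeds 100,
# unless the item is already on sale.
def discount_applies(member: bool, total_over_100: bool, on_sale: bool) -> bool:
    return member and total_over_100 and not on_sale

# Cause-effect graphing ends in a decision table: enumerate every
# combination of causes and record the expected effect. Each printed
# row corresponds to one test case.
for member, over_100, on_sale in product([False, True], repeat=3):
    effect = discount_applies(member, over_100, on_sale)
    print(f"member={member}, over_100={over_100}, on_sale={on_sale} "
          f"-> discount={effect}")
```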
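Steps 2 and 3 can be sketched the same way. Assuming a hypothetical accept_month validator (again, not from the original text), equivalence partitioning supplies one representative value per class, and boundary value analysis adds the values on and immediately beyond each boundary:

```python
# Hypothetical validator: accepts a month number, valid range 1..12.
def accept_month(month: int) -> bool:
    return 1 <= month <= 12

# Equivalence partitioning: one representative value per class.
equivalence_cases = [
    (6, True),    # representative of the valid partition 1..12
    (-5, False),  # representative of the invalid "too small" partition
    (40, False),  # representative of the invalid "too large" partition
]

# Boundary value analysis: values on and just beyond each boundary.
boundary_cases = [
    (0, False), (1, True),    # lower boundary and its invalid neighbour
    (12, True), (13, False),  # upper boundary and its invalid neighbour
]

if __name__ == "__main__":
    for value, expected in equivalence_cases + boundary_cases:
        assert accept_month(value) == expected, f"failed for {value}"
    print("all equivalence and boundary cases passed")
```

Note how the boundary cases (0, 1, 12, 13) would catch an off-by-one error such as writing 1 < month, which the equivalence representatives alone could miss.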
Research into the effectiveness of different software testing techniques continues. Another software testing expert, Stuart Reid, studied equivalence partitioning, boundary value analysis, and random testing on real avionics code. Based on all the inputs to the techniques, he concluded that boundary value analysis was the most effective, at 79% effectiveness, whereas equivalence partitioning reached only 33%. Reid also concluded that some faults are difficult to find even with these techniques.