Understanding the Elements of Software Testing
What is Software Testing all about?
# “Testing is the process of executing a program with the sole motive of finding errors in it.”
# “Testing can show the presence of bugs but never their absence.”
What are the best practices of Software Testing?
# Prepare Good Test Cases: A good test case is one that has a high probability of detecting an undiscovered defect, not one that shows that the program works correctly.
# Avoid Testing Your Own Program: It is very difficult for a programmer to test his or her own program objectively; the author is psychologically invested in showing that the code works.
# Describe your Expectations Clearly in Test Case:
A necessary part of every test case is a description of the expected result.
# Avoid on-the-fly testing: Avoid any type of testing that cannot be reproduced.
# Write test cases for extreme Conditions: Good practice is to write test cases for valid as well as invalid input conditions.
# Rigorous inspection of Test results: Thoroughly inspect the results of each test.
# Be vigilant for further defects: As the number of detected defects in a piece of software increases, the probability of the existence of more undetected defects also increases.
# Let the best people do the testing: Assign your best people to testing.
# Design the code with a motto of good testability: Ensure that testability is a key objective in your software design.
# Never tamper with code for easy testability: Never alter the program merely to make testing easier.
# Define test objectives beforehand: Testing, like almost every other activity, must start with objectives.
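Two of these practices, describing the expected result explicitly and covering both valid and invalid inputs, can be sketched in Python (the `parse_age` function and its 0-130 range are hypothetical, chosen purely for illustration):

```python
def parse_age(text):
    """Hypothetical function under test: parse an age in the range 0-130."""
    value = int(text)              # raises ValueError for non-numeric input
    if not 0 <= value <= 130:
        raise ValueError(f"age out of range: {value}")
    return value

# Each test case states its expected result explicitly.
valid_cases = [("0", 0), ("130", 130), ("42", 42)]   # includes both boundaries
for text, expected in valid_cases:
    assert parse_age(text) == expected, f"{text!r}: expected {expected}"

# Invalid and extreme inputs must be rejected, not silently accepted.
for bad in ["-1", "131", "abc", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass                       # expected outcome for invalid input
    else:
        raise AssertionError(f"invalid input accepted: {bad!r}")
```

Note that the invalid cases sit just outside the boundaries ("-1", "131"): a test that only exercises comfortable mid-range values has a low probability of detecting an undiscovered defect.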
What are the levels of Software Testing?
1) Unit Testing.
2) Integration Testing.
3) Validation Testing, which further comprises:
# Regression Testing.
# Alpha testing.
# Beta Testing.
4) Acceptance Testing.
What are the areas of focus of Unit Testing?
# Algorithms and logic.
# Data structures – both global and local.
# Independent paths.
# Boundary conditions.
# Error handling.
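A single unit test often touches several of these areas at once: independent paths, boundary conditions, and error handling. A minimal sketch, assuming a hypothetical `clamp` function as the unit under test:

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: restrict value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")   # error handling
    if value < low:                                    # independent path 1
        return low
    if value > high:                                   # independent path 2
        return high
    return value                                       # independent path 3

class ClampTest(unittest.TestCase):
    def test_boundary_conditions(self):
        # Values exactly at and just beyond each boundary.
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)
        self.assertEqual(clamp(-1, 0, 10), 0)
        self.assertEqual(clamp(11, 0, 10), 10)

    def test_error_handling(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)        # inverted range must be rejected

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The three `return` statements give three independent paths, and the boundary cases deliberately probe one step either side of each limit.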
Why is Integration Testing so significant?
1) One module can have an adverse effect on another.
2) Sub-functions, when combined, may not produce the desired major function.
3) Individually acceptable inaccuracies in calculations may be magnified to unacceptable levels.
4) Interfacing errors not detected in unit testing may surface.
5) Resource contention problems are not detectable by unit testing.
6) Timing Problems in real-time systems are not detectable in unit testing.
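Point 3 in particular is easy to demonstrate: an inaccuracy that passes unit testing in one module can be magnified when another module applies it at scale. A sketch, with entirely hypothetical modules and rates:

```python
TRUE_RATE = 1.0871     # hypothetical exact exchange rate

# Module A: uses a rate rounded to two decimal places, an inaccuracy of
# under three cents per line, individually acceptable in unit testing.
def convert(amount):
    return amount * 1.09

# Module B: applies module A across every line of a large invoice.
def invoice_total(line_amounts):
    return sum(convert(a) for a in line_amounts)

lines = [9.99] * 10_000
error = invoice_total(lines) - sum(a * TRUE_RATE for a in lines)
# Combined, the per-line inaccuracy grows to roughly 290 units: the kind
# of defect that only testing the modules together reveals.
assert 280 < error < 300
```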
What is Top Down Integration in Testing?
1) The main control module is used as a driver, and stubs are substituted for all modules directly subordinate to the main module.
2) Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced by actual modules one at a time.
3) Tests are run as each individual module is integrated.
4) On the successful completion of a set of tests, another stub is replaced with a real module.
5) Regression testing is performed to ensure that errors have not developed as a result of integrating new modules.
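The stub-replacement cycle above can be sketched as follows. The `compute_total` control module and the tax-rate lookup are hypothetical, and the dependency is passed as a parameter so the stub can later be swapped for the real module:

```python
# Step 1: a stub is substituted for a subordinate module that is not yet
# integrated; it returns a canned answer so the main module can be exercised.
def fetch_tax_rate_stub(region):
    return 0.10                    # fixed response, no real lookup

# Main control module, driving the integration test.
def compute_total(subtotal, region, fetch_tax_rate=fetch_tax_rate_stub):
    return subtotal * (1 + fetch_tax_rate(region))

# Step 3: tests are run with the stub in place.
assert abs(compute_total(100.0, "EU") - 110.0) < 1e-9

# Step 4: after the tests pass, the stub is replaced by the real module.
def fetch_tax_rate(region):
    rates = {"EU": 0.20, "US": 0.08}   # hypothetical real lookup table
    return rates[region]

# Step 5: the tests are re-run with the real module integrated.
assert abs(compute_total(100.0, "EU", fetch_tax_rate) - 120.0) < 1e-9
```

Note how the expected result changes once the real module is in place (the stub's canned 10% rate versus the real 20% EU rate): this is exactly why regression testing after each replacement matters.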
What are the problems in Top-down Integration Testing?
1) Many times, calculations are performed in the modules at the bottom of the hierarchy.
2) Stubs usually do not pass data up to the higher modules.
3) Delaying testing until lower-level modules are ready usually results in integrating many modules at the same time rather than one at a time.
4) Developing stubs that would pass data up involves almost the same amount of work as developing the actual modules.
What is Bottom-up Integration in Testing?
1) Integration begins with the lowest-level modules, which are combined into clusters, or builds, that perform a specific software sub-function.
2) Drivers (throwaway control programs written for testing) are written to coordinate test-case input and output.
3) The cluster is tested.
4) Drivers are removed and clusters are combined moving upward in the program structure.
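A driver for such a cluster might be sketched like this; the `normalize`/`tokenize` cluster and its behaviour are hypothetical:

```python
# Low-level cluster already implemented: two modules that together
# perform a software sub-function.
def normalize(text):
    return text.strip().lower()

def tokenize(text):
    return normalize(text).split()

# Driver: a throwaway control program that feeds test inputs to the
# cluster and checks its outputs, standing in for the higher-level
# modules that have not been written yet.
def driver():
    cases = [("  Hello World ", ["hello", "world"]),
             ("ONE", ["one"]),
             ("", [])]
    for text, expected in cases:
        assert tokenize(text) == expected, f"cluster failed on {text!r}"
    return "cluster OK"

driver()    # step 3: the cluster is tested via the driver
```

Once higher-level modules exist, the driver is discarded and those modules call the cluster directly (step 4).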
What are the problems in Bottom-up Integration Testing?
1) The whole program does not exist until the last module is integrated.
2) Timing and resource contention problems are not found until late in the process.
What are the objectives of Validation Testing?
1) Determine if the software meets all of the requirements defined in the SRS.
2) Having written requirements is essential.
3) Regression testing is performed to determine if the software still meets all of its requirements in light of changes and modifications to the software.
4) Regression testing involves selectively repeating existing validation tests, not developing new tests.
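A minimal sketch of that idea, assuming a hypothetical `slugify` requirement: the existing validation tests are kept as a suite and selectively re-run after each change, and no new tests are written for regression.

```python
# Function under validation (hypothetical requirement: lowercase,
# hyphen-separated slugs).
def slugify(title):
    return "-".join(title.lower().split())

# Existing validation tests, kept as named checks so subsets can be re-run.
validation_suite = {
    "basic":   lambda: slugify("Hello World") == "hello-world",
    "single":  lambda: slugify("Python") == "python",
    "spacing": lambda: slugify("  a   b ") == "a-b",
}

def run_regression(selected=None):
    """Re-run existing validation tests (all, or a selected subset)."""
    names = selected or validation_suite
    return {name: validation_suite[name]() for name in names}

results = run_regression()        # after a modification, re-run everything
assert all(results.values())
```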
What is the aim of Alpha & Beta Testing?
1) It's best to provide customers with an outline of the things you would like them to focus on, along with specific test scenarios for them to execute.
2) Work with customers who are actively involved, and commit to fixing the defects they discover.
What is the aim of Acceptance Testing?
1) Similar to validation testing except that customers are present or directly involved.
2) Usually the tests are developed by the customer.
What are the various Test Methods?
1) White box or glass box testing.
2) Black box testing.
3) Top-down and bottom-up for performing incremental integration.
4) Most Natural – Act like a customer.
What are the various Testing Types?
1) Functional testing.
2) Algorithmic testing.
3) Positive testing.
4) Negative testing.
5) Usability testing.
6) Boundary testing.
7) Startup / shutdown testing.
8) Platform testing.
9) Load / stress testing.
An expert on R&D, Online Training and Publishing. He holds an M.Tech. (Honours) and has been part of the STG team since inception.