Types of Software Testing
Software Testing Classification
The development process involves various types of testing. Each test type addresses a specific testing requirement. The most fundamental types of testing involved in the development process are:
- Unit Testing
- System Testing
- Integration Testing
- Functional Testing
- Performance Testing
- Beta Testing
- Acceptance Testing
Industry experts have categorized many types of software testing based on specific requirements. The following list presents a brief introduction to these types.
Acceptance Testing:
Is the final testing, based on specifications provided by the end-user or customer, or based on use by end-users/customers over some limited period of time, and is considered an industry best practice. In theory, when all the acceptance tests pass, the project can be said to be done.
Ad-hoc Testing:
Is a commonly used term for software testing performed without planning or documentation. It is a part of exploratory testing and the least formal of test methods. It is often criticized for its lack of structure: the tester seeks to find bugs quickly by any means that seem appropriate. For ad-hoc testing, the testers should possess a significant understanding of the software before testing it.
Alpha Testing:
Is simulated or actual operational testing by potential users / customers or an independent test team at the developers’ site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing. It is usually done when the development of the software product is nearing completion; minor design changes may still be made as a result of such testing.
Beta Testing:
Comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users. Thus beta testing is done by end-users or others, & not by the programmers or testers.
Black Box Testing:
Involves tests based upon specifications, requirements, and functionality. For Black Box testing, the software tester need not have any knowledge of the internal design or code of the software being tested. For this reason, the tester and the programmer can be independent of each other, avoiding programmer bias toward his own work.
During Black Box testing, the tester only knows the legal inputs and what the expected outputs should be, not how the program actually arrives at those outputs.
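As a minimal sketch, the test below exercises a hypothetical is_leap_year function purely through its documented inputs and expected outputs, without referring to its implementation. The function name and its specification are assumptions made for illustration only.

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Hypothetical function under test: leap years per the Gregorian rules."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class BlackBoxLeapYearTest(unittest.TestCase):
    """Tests derived only from the specification: legal inputs -> expected outputs."""

    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_quad_century_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```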
Build Verification Testing or BVT:
Build Verification Testing, also known as Build Acceptance Testing, is a set of tests run on each new build of a product to verify that the build is testable before it is released to the test team. The build acceptance test is generally a short set of tests that exercises the mainstream functionality of the application. Any build that fails the build verification test is rejected, and testing continues on the previous build (provided there has been at least one build that passed the acceptance test). BVT is important because it lets developers know right away if there is a serious problem with the build, and it saves the test team time and frustration by avoiding testing of an unstable build.
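A minimal BVT sketch is shown below. It assumes a hypothetical myapp package with load_config and start entry points; a real BVT would exercise whatever mainstream functionality the product actually defines, and a CI job would use the exit code to accept or reject the build.

```python
"""Minimal build verification (BVT) sketch.

Runs a short set of checks against the mainstream functionality of a
hypothetical 'myapp' package and exits non-zero on failure, so an automated
build pipeline can reject the build and keep the test team on the last
known-good build.
"""
import sys

def run_bvt() -> int:
    failures = []
    try:
        import myapp                       # hypothetical product package
        config = myapp.load_config()       # assumed entry point
        if not myapp.start(config):        # assumed mainstream operation
            failures.append("application failed to start")
    except Exception as exc:
        failures.append(f"build not testable: {exc}")

    if failures:
        print("BUILD REJECTED:", "; ".join(failures))
        return 1
    print("BUILD ACCEPTED: handing over to the test team")
    return 0

if __name__ == "__main__":
    sys.exit(run_bvt())
```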
Client Server Testing:
Client/Server testing involves increased size, scope, and duration of the test effort itself. The necessary test phases include build acceptance testing, prototype testing, system reliability testing, multiple phases of regression testing, and beta, pilot, and field testing.
Comparison Testing:
Comparing the software's strengths and weaknesses with those of similar or competing products.
Compatibility Testing:
Is used to determine how well the software performs in different environments, with varied hardware, software, operating systems, networks, etc. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.
Configuration Testing:
Testing to determine how well the product works with a broad range of hardware / peripheral equipment configurations as well as on different operating systems and software.
Conformance Testing:
Is the verification of an implementation's conformance to industry standards. It involves producing tests for the behavior of an implementation to make sure it provides the portability, interoperability, and/or compatibility that a standard defines.
Concurrent Testing:
Is multi-user testing used to determine the effects of multiple users accessing the same application code, module, or database records at the same time. It identifies and measures the level of locking, deadlocking, and the use of single-threaded code, locking semaphores, etc.
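The sketch below illustrates the idea with a tiny in-memory "record" (an assumption for illustration): several threads update the same field concurrently, and the test checks that correct locking prevents lost updates.

```python
import threading

# A shared "record" that several simulated users update concurrently.
record = {"balance": 0}
lock = threading.Lock()

def deposit(amount: int, times: int) -> None:
    for _ in range(times):
        # Without this lock, the read-modify-write below could interleave
        # between threads and some updates could be lost.
        with lock:
            record["balance"] += amount

def test_concurrent_deposits() -> None:
    threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # With correct locking, no update is lost.
    assert record["balance"] == 5 * 10_000, record["balance"]
    print("concurrent update test passed, final balance:", record["balance"])

if __name__ == "__main__":
    test_concurrent_deposits()
```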
Cross Browser Testing:
Is used to test an application with different browsers, possibly on different operating systems, for usability and compatibility.
Dynamic Testing:
Is used to describe the testing of the dynamic behavior of software code. It involves actual compilation and running of the software, giving it input values and checking whether the output is as expected. It is the validation portion of Verification and Validation.
End-To-End Testing:
Similar to system testing, but involves testing of the application in an environment that simulates real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems where appropriate. Even the transactions performed simulate end users' usage of the application.
Exploratory Testing:
Is a creative and informal approach to software testing, aimed at finding faults and defects by challenging assumptions. It is not based on formal test plans or test cases; it is an approach with simultaneous learning, test design, and test execution. While the software is being tested, the tester learns things that, combined with experience and creativity, generate new good tests to run.
Functional Testing:
Validates that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests that perform a feature-by-feature validation of behavior, using a wide range of normal and erroneous input data. It can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies, and is usually done by the testers.
Incremental Integration Testing:
Involves continuous testing of an application while new functionality is simultaneously added. It requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This testing is done by programmers or by testers.
Install / Uninstall Testing:
Involves testing of full, partial, or upgrade install / uninstall processes.
Integration Testing:
Testing of the application after combining/integrating its various parts to find out whether all parts function together correctly. The parts can be code modules, individual applications, client and server applications on a network, etc. It begins after two or more programs or application components have been successfully unit tested. This type of testing is especially relevant to client/server and distributed systems.
Life Cycle Testing or V – Testing:
Life cycle testing, or V testing, aims at catching defects as early as possible, which reduces the cost of fixing them. It achieves this by continuously testing the system during all phases of the development process rather than limiting testing to the last phase. A separate test team is formed at the beginning of the project. When the project starts, the development team begins the system development process while the test team begins planning the system test process; both teams start at the same point using the same information.
Load Testing:
Load testing is a generic term covering performance testing and stress testing. It is a test performed with the objective of determining the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer excessive delay.
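As a minimal sketch, the script below ramps up the number of concurrent users against a single operation and reports the average time per request at each step. The handle_request function is a placeholder assumption; in practice it would be an HTTP call or other real operation, and dedicated load-testing tools would normally be used.

```python
"""Minimal load-test sketch: increase concurrency step by step and watch
how the average time per request changes."""
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> None:
    # Placeholder for the operation under test (e.g. an HTTP call).
    time.sleep(0.01)

def measure(concurrent_users: int, requests_per_user: int = 20) -> float:
    total_requests = concurrent_users * requests_per_user
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request) for _ in range(total_requests)]
        for f in futures:
            f.result()
    elapsed = time.perf_counter() - start
    # Wall-clock seconds per request (inverse throughput) at this load level.
    return elapsed / total_requests

if __name__ == "__main__":
    # Vary the load from light to heavy.
    for users in (1, 5, 10, 25, 50):
        avg = measure(users)
        print(f"{users:3d} users -> {avg * 1000:6.2f} ms per request")
```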
Monkey Testing:
Is a type of unit testing that runs with no specific test in mind. Here the monkey is the producer of any input data (either file data or input-device data). For example, a monkey test can enter random strings into text boxes to ensure handling of all possible user input, or provide garbage files to check for loading routines that have blind faith in their data. While doing a monkey test, we can press keys randomly and check whether the software fails or not.
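The sketch below feeds random printable strings into a hypothetical input-handling routine (parse_phone_number is an assumption for illustration) and only checks that it never raises an unhandled exception.

```python
import random
import string

def parse_phone_number(text: str):
    """Hypothetical routine under test: keep digits, return None otherwise."""
    digits = "".join(ch for ch in text if ch.isdigit())
    return digits if 7 <= len(digits) <= 15 else None

def monkey_test(iterations: int = 10_000) -> None:
    alphabet = string.printable  # letters, digits, punctuation, whitespace
    for _ in range(iterations):
        junk = "".join(random.choice(alphabet)
                       for _ in range(random.randint(0, 40)))
        try:
            parse_phone_number(junk)   # we only care that it never blows up
        except Exception as exc:
            raise AssertionError(f"crashed on input {junk!r}") from exc
    print("monkey test survived", iterations, "random inputs")

if __name__ == "__main__":
    monkey_test()
```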
Mutation Testing:
Is a method of finding out whether a set of test data or test cases is useful, by deliberately introducing various code changes (bugs) and re-testing with the original test data/test cases to determine whether the bugs get detected.
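A hand-rolled illustration of the idea is sketched below: a deliberate bug (a "mutant") is introduced and the same test data is re-run to see whether it notices. The functions and test data are made up for illustration; real projects usually rely on a mutation-testing tool.

```python
def max_of(a, b):
    return a if a > b else b          # original code

def max_of_mutant(a, b):
    return a if a < b else b          # mutant: '>' deliberately changed to '<'

def run_tests(func, test_cases) -> bool:
    """Return True if every test case passes against 'func'."""
    return all(func(*args) == expected for args, expected in test_cases)

# A weak data set that cannot distinguish the mutant, and a stronger one that can.
weak_cases   = [((4, 4), 4)]
strong_cases = [((4, 4), 4), ((3, 5), 5), ((9, 2), 9)]

if __name__ == "__main__":
    for name, cases in (("weak", weak_cases), ("strong", strong_cases)):
        assert run_tests(max_of, cases)       # the original code always passes
        verdict = "survived" if run_tests(max_of_mutant, cases) else "killed"
        print(f"{name} test data: mutant {verdict}")
```

The weak data set lets the mutant survive, showing the test data is not useful enough; the strong data set kills it.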
Negative Testing:
Testing the application for failure-like conditions. It involves testing the tool with improper inputs, for example entering special characters in place of a phone number.
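A minimal sketch of the phone-number example follows; the validate_phone_number function and its 10-digit rule are assumptions for illustration. Each test supplies deliberately improper input and checks that it is rejected cleanly rather than accepted or crashing.

```python
import re
import unittest

def validate_phone_number(text: str) -> str:
    """Hypothetical validator: accept exactly 10 digits, optionally with dashes."""
    digits = text.replace("-", "")
    if not re.fullmatch(r"\d{10}", digits):
        raise ValueError(f"invalid phone number: {text!r}")
    return digits

class NegativePhoneNumberTests(unittest.TestCase):
    """Deliberately improper inputs: the application must reject them cleanly."""

    def test_special_characters_rejected(self):
        with self.assertRaises(ValueError):
            validate_phone_number("!@#$%^&*()")

    def test_letters_rejected(self):
        with self.assertRaises(ValueError):
            validate_phone_number("CALL-ME-NOW")

    def test_empty_input_rejected(self):
        with self.assertRaises(ValueError):
            validate_phone_number("")

if __name__ == "__main__":
    unittest.main()
```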
Performance Testing:
Performance testing can be applied to understand the scalability of an application or Web site, or to benchmark the performance of third-party products such as servers and middleware being considered for purchase. This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions. It validates that both the online response times and batch run times meet the defined performance requirements.
Pilot Testing:
Testing that involves the users just before the actual release, to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a move-to-production activity for ERP releases or a beta test for commercial products. It typically involves many users, is conducted over a short period of time, and is tightly controlled.
Recovery Testing:
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Regression Testing:
Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. It involves repetition of testing on a previously verified program or application, and is aimed at ensuring that reported product defects have been corrected in each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually, an automated test suite is often used to reduce the time and resources needed to perform the required testing.
Sanity Testing:
Is typically an initial testing effort to find out whether the new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging them down to a crawl, or destroying databases, then the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Scalability Testing:
Involves tests designed to prove that both the functionality and the performance of a system will be capable of scaling up to meet specified future requirements. It is part of a series of non-functional tests. It is the testing of a software application to measure its capability to scale up or scale out in terms of any of its non-functional capabilities: the user load supported, the number of transactions, the data volume, etc. Scalability testing can be performed as a series of load tests with different hardware or software configurations, keeping the other settings of the testing environment unchanged.
Security Testing:
Is used to determine how well the system protects against unauthorized internal or external access, willful damage, etc., and may require sophisticated testing techniques.
Smoke Testing:
Is quick-and-dirty, non-exhaustive software testing that ascertains that the most crucial functions of the program work, without being concerned with its finer details. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire. The general term of smoke testing has also come from leakage testing of sewers and drain lines, which involves blowing smoke into various parts of the sewer and drain lines to detect sources of unwanted leaks and sewer odors.
Static Testing:
Involves test activities performed without actually running the software. It includes code inspections, walkthroughs, and desk checks etc.
Stress Testing:
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Graceful degradation under load leading to non-catastrophic failure is the desired result. It involves subjecting a system to an unreasonable load while denying it the resources needed to process that load; the resources can be RAM, disk space, CPU capacity, interrupts, etc. The idea is to stress the system to the breaking point in order to find bugs that would make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to fail in a decent manner (e.g., without corrupting or losing data). In stress testing, the load (the incoming transaction stream) is often deliberately distorted so as to force the system into resource depletion. Stress testing is often performed using the same process as performance testing but employing a very high level of simulated load.
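The sketch below illustrates the "fail in a decent manner" idea using a tiny in-memory service with a deliberately small capacity (an assumption for illustration): it is flooded with far more work than it can hold, and the test checks that overflow requests are rejected cleanly while nothing that was accepted is lost.

```python
"""Stress-test sketch: overload a small service and verify graceful degradation."""
import queue

class TinyService:
    def __init__(self, capacity: int):
        self._inbox = queue.Queue(maxsize=capacity)
        self.processed = []

    def submit(self, item: int) -> bool:
        try:
            self._inbox.put_nowait(item)   # reject instead of blocking or crashing
            return True
        except queue.Full:
            return False                   # graceful failure under overload

    def drain(self) -> None:
        while not self._inbox.empty():
            self.processed.append(self._inbox.get_nowait())

def stress_test() -> None:
    service = TinyService(capacity=100)
    # Deliberately distorted load: 100x more requests than the service can hold.
    accepted = [i for i in range(10_000) if service.submit(i)]
    service.drain()
    # Graceful degradation: some requests are rejected, but none of the
    # accepted ones are lost or corrupted.
    assert len(accepted) < 10_000
    assert service.processed == accepted
    print(f"accepted {len(accepted)} of 10000 requests, no accepted data lost")

if __name__ == "__main__":
    stress_test()
```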
System Testing:
Falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic. It is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" (the interfaces between integrated components) and within the system as a whole.
Unit Testing:
A unit is the smallest compilable component of the software, and is typically the work of one programmer. The unit is tested in isolation with the help of stubs or drivers. Unit testing is functional and reliability testing in an engineering environment, producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration. It is typically done by the programmers and not by the testers.
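The sketch below shows a unit tested in isolation: a hypothetical order_total function depends on an external tax service, which is replaced by a stub so that only the unit itself is exercised. The function and its collaborator are assumptions for illustration.

```python
import unittest
from unittest import mock

def order_total(prices, tax_service):
    """Hypothetical unit under test: subtotal plus tax from an external service."""
    subtotal = sum(prices)
    return subtotal + tax_service.tax_for(subtotal)

class OrderTotalUnitTest(unittest.TestCase):
    def test_total_includes_tax(self):
        # Stub replaces the real tax service so the unit is tested in isolation.
        tax_stub = mock.Mock()
        tax_stub.tax_for.return_value = 2.0
        self.assertEqual(order_total([10.0, 5.0], tax_stub), 17.0)
        tax_stub.tax_for.assert_called_once_with(15.0)

if __name__ == "__main__":
    unittest.main()
```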
Usability Testing:
Involves testing the software for its user-friendliness. This is highly subjective, and will depend on the targeted end-user or the customer. User interviews, surveys, video recording of user sessions, and other techniques can be used for usability testing. Programmers and testers are usually not appropriate as usability testers.
White Box Testing:
Involves tests based upon coverage of code statements, branches, paths, and conditions. For White Box testing, it is essential that the software tester have in-depth knowledge of the internal logic of the application's code. The tester uses explicit knowledge of the internal workings of the item being tested to select the test data, and uses specific knowledge of the programming code to examine the outputs.
The white box test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.
In White Box Testing the tester is expected to check every line of the code. Software bugs are a fact of life: no matter how hard we try, even the best programmers cannot write error-free code all the time.
For a complete software examination, it is essential that both white box and black box testing are carried out.
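As a minimal sketch of branch-driven test selection, the tests below are derived by reading the internal structure of a hypothetical classify_triangle function, with one test per branch; the function is an assumption for illustration.

```python
import unittest

def classify_triangle(a: int, b: int, c: int) -> str:
    """Hypothetical code under test; the tests below mirror its branches."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class WhiteBoxTriangleTests(unittest.TestCase):
    """One test per branch, chosen from the code's internal structure."""

    def test_invalid_branch(self):
        self.assertEqual(classify_triangle(0, 1, 1), "invalid")

    def test_equilateral_branch(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_isosceles_branch(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()
```

In practice, the branch coverage achieved by such tests can be measured with a coverage tool, for example coverage.py run as `coverage run -m unittest`.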