Glossary of Terms beginning with Alphabet 'E'
Efficiency is a quality attribute describing the amount of computing resources and code required by a program to perform a particular function. It is the capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions.
Efficiency Testing is the process of testing to determine the efficiency of a software product.
Elementary Comparison Testing:
Elementary Comparison Testing is a black box test design technique in which test cases
are designed to execute combinations of inputs using the concept of condition determination coverage.
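As a minimal sketch of this idea, consider a hypothetical discount rule (the function name and business rule below are invented for illustration). Test cases are chosen so that each atomic condition independently determines the outcome at least once, which is the essence of condition determination coverage:

```python
# Hypothetical decision under test (illustrative rule, not from the glossary):
# a discount applies when the buyer is a member AND the total qualifies,
# OR when the buyer holds a coupon.
def gets_discount(member, total_ok, has_coupon):
    return (member and total_ok) or has_coupon

# Each case pairs with another case differing in exactly one condition,
# and that single condition flips the outcome:
cases = [
    # (member, total_ok, has_coupon) -> expected
    ((True,  True,  False), True),   # vs. case 2: 'member' alone flips outcome
    ((False, True,  False), False),
    ((True,  False, False), False),  # vs. case 1: 'total_ok' alone flips outcome
    ((False, False, True),  True),   # vs. case 4 below: 'has_coupon' alone flips it
]
for args, expected in cases:
    assert gets_discount(*args) == expected
```

Four test cases cover a three-condition decision, rather than the eight needed for every input combination.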
Emulator is a device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
End-To-End Testing or E2E Testing:
End-To-End Testing or E2E Testing is quite similar to system testing but involves testing the application in an environment that simulates real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Even the transactions performed simulate the end user's usage of the application.
Endurance Testing checks for memory leaks or other problems that may occur with prolonged execution.
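A minimal endurance-test sketch, assuming a deliberately leaky component (the cache and operation below are invented for illustration): run an operation many times and watch memory with the standard-library tracemalloc module to spot growth during prolonged execution.

```python
import tracemalloc

leaky_cache = []  # simulated defect: the component never evicts entries

def operation():
    # Each call retains roughly 1 KiB that is never released.
    leaky_cache.append("x" * 1024)

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(1000):
    operation()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Steadily growing memory use across iterations suggests a leak.
print(after > before)
```

In a real endurance test the loop would run for hours or days and the trend of memory use, not a single comparison, would be analyzed.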
Entrance Criteria refers to the desired conditions and standards for work product quality, which must be present or met for entry into the next stage of the software development process.
Entry Criteria is the set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
Entry Point is the first executable statement within a component.
Equivalence Class is a portion of a component’s input or output domains for which the component’s behaviour is assumed, based on the component’s specification, to be the same.
Equivalence Partition is a portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
Equivalence Partition Coverage:
Equivalence Partition Coverage refers to the percentage of equivalence partitions that have been exercised by a test suite.
Equivalence Partitioning is a software testing technique with two prime goals: 1) to reduce the number of test cases to a necessary minimum, and 2) to select the right test cases to cover all possible scenarios. It is typically applied to the inputs of a tested component, although in rare cases it is also applied to outputs. The technique uses a subset of data that is representative of a larger class, and is carried out as a substitute for exhaustive testing of each value of data in that larger class.
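A minimal sketch of partitioning in practice, assuming a hypothetical validator for an "age" field that accepts integers from 18 to 65 (the function and ranges are invented for illustration):

```python
# Hypothetical input: an "age" field accepting integers 18..65.
def accept_age(value):
    try:
        age = int(value)
    except (TypeError, ValueError):
        return False
    return 18 <= age <= 65

# Partitions: below range (invalid), in range (valid), above range (invalid),
# and non-numeric input (invalid). One representative value per partition
# stands in for every value in that partition:
representatives = {
    "below range": (10, False),
    "in range":    (30, True),
    "above range": (70, False),
    "non-numeric": ("abc", False),
}
for value, expected in representatives.values():
    assert accept_age(value) == expected
```

Four test cases replace testing every possible age, on the assumption that all values within a partition are handled identically.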
Error or Defect:
Error or Defect is a discrepancy between a computed, observed or measured value or condition and the true, specified or theoretically correct value or condition. It can also refer to a human action that resulted in software containing a fault (e.g. omission or misinterpretation of user requirements in the software specification, or incorrect translation or omission of a requirement in the design specification).
Error Guessing is a software test design technique based on the ability of the tester to draw on past experience, knowledge and intuition to predict where bugs will be found in the software under test. Some areas to guess at are: empty or null strings, zero instances or occurrences, blank or null characters in strings, and negative numbers. The tester uses judgement to select test data, picking the values that seem likely to cause defects.
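As a sketch of error guessing against a deliberately naive utility (the function below is invented for illustration), a tester feeds in the classic "guessed" values and one of them exposes an unhandled case:

```python
# A hypothetical string utility; deliberately naive, with no guard for None.
def first_word(s):
    words = s.split()
    return words[0] if words else ""

# Values a tester might guess as likely to expose defects:
guessed_inputs = ["hello world", "", "   ", "\t\n", None]
for value in guessed_inputs:
    try:
        first_word(value)
    except AttributeError:
        # None has no .split() method: error guessing found the defect.
        print("defect exposed by input:", value)
```

Empty and whitespace-only strings happen to be handled, but the guessed None input is not, which is exactly the kind of gap this technique aims to find.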
Error Seeding is the process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects.
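The estimation part can be sketched with the classic seeding calculation (often attributed to Mills' fault-seeding model): assuming seeded and real defects are found at the same rate, the ratio of seeded defects found scales up the real defects found. The function name and figures below are illustrative.

```python
# Minimal sketch of the seeding estimate, assuming seeded and real defects
# are equally detectable:
#   total_real ≈ real_found * seeded_total / seeded_found
#   remaining  ≈ total_real - real_found
def estimate_remaining(seeded_total, seeded_found, real_found):
    if seeded_found == 0:
        raise ValueError("no seeded defects found; estimate is undefined")
    total_real = real_found * seeded_total / seeded_found
    return total_real - real_found

# Example: 20 defects seeded, 16 of them found, 40 real defects found.
# Estimated total real defects = 40 * 20 / 16 = 50, so about 10 remain.
print(estimate_remaining(20, 16, 40))  # -> 10.0
```

The equal-detectability assumption is the weak point of the method: seeded defects that are easier (or harder) to find than real ones bias the estimate.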
Error Tolerance is the ability of a system or component to continue normal operation despite the presence of erroneous inputs.
Exception Handling is the behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
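A minimal sketch of exception handling at a component boundary, assuming a hypothetical quantity parser (the function and messages are invented for illustration): erroneous input from a caller is converted into a controlled response instead of an uncaught crash.

```python
# Hypothetical component boundary: parse a quantity received from a caller.
def parse_quantity(raw):
    try:
        qty = int(raw)
    except (TypeError, ValueError):
        # Erroneous input is reported, not allowed to propagate as a crash.
        return {"ok": False, "error": "quantity must be an integer"}
    if qty < 0:
        return {"ok": False, "error": "quantity must be non-negative"}
    return {"ok": True, "value": qty}

print(parse_quantity("7"))
print(parse_quantity("abc"))
print(parse_quantity(-3))
```

Testing exception handling means deliberately supplying the erroneous inputs and checking that each produces the specified controlled behaviour.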
Executable Statement is a statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
Exercised:
A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
Exhaustive Testing refers to executing the program through all possible combinations of input values and preconditions for an element of the software under test.
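A quick sketch of why exhaustive testing is rarely feasible, using a hypothetical component with just three independent inputs (the inputs below are invented for illustration):

```python
import itertools

# Hypothetical component inputs: a flag, a mode, and a single byte value.
flags = [True, False]
modes = ["read", "write", "append"]
byte_values = range(256)

# Exhaustive testing means one test case per combination:
all_combinations = list(itertools.product(flags, modes, byte_values))
print(len(all_combinations))  # 2 * 3 * 256 = 1536 test cases for one tiny unit
```

Even this toy example needs 1536 test cases; realistic inputs (strings, integers, timing) make the combination count astronomically large, which is why techniques such as equivalence partitioning are used instead.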
Exit Criteria refers to standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process. Exit Criteria comprise a set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used in testing to report against and to plan when to stop testing.
Exit Point refers to the last executable statement within a component.
Expected Result refers to the behavior predicted by the specification, or another source, of the component or system under specified conditions.
Exploratory Testing is an approach to software testing that involves simultaneous learning, test design and test execution. It is a type of “Ad-hoc Testing”; the only difference is that here the tester does not have much idea about the application, and explores the system in an attempt to learn it and test it at the same time. It is creative, informal software testing aimed at finding faults and defects by challenging assumptions, and is not based on formal test plans or test cases. The tester actively controls the design of the tests as they are performed and uses information gained while testing to design new and better tests.