Glossary of Terms Beginning with M
Maintenance is the modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
Maintenance Testing refers to the testing of changes to an operational system or the impact of a changed environment to an operational system.
Maintainability is a quality attribute reflecting the effort required to locate and fix an error in an operational program.
Maintainability Testing is the process of testing to determine the maintainability of a software product.
Management Review refers to a systematic evaluation of a software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management. It monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.
Measure of Completeness:
In software testing there are two common measures of completeness: code coverage and path coverage. Code coverage is a white-box testing technique that determines how much of a program's source code has been exercised by tests; it can be measured at several levels, such as statements, branches, and conditions. Code coverage provides a final layer of testing because it reveals code that was missed by the other test cases. Path coverage establishes whether every potential route through a segment of code has been executed and tested.
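The idea of coverage can be sketched by hand: record which branches a test suite actually executes and compare against the full set. The function and branch labels below are hypothetical, purely for illustration; real projects use a coverage tool rather than manual bookkeeping.

```python
# Illustrative sketch of branch coverage measured by hand.
# `covered` records which branch labels the tests actually executed.

covered = set()

def classify(n):
    if n < 0:
        covered.add("negative")   # branch 1
        return "negative"
    if n == 0:
        covered.add("zero")       # branch 2
        return "zero"
    covered.add("positive")       # branch 3
    return "positive"

# A test suite that exercises only two of the three branches:
classify(5)
classify(-3)

all_branches = {"negative", "zero", "positive"}
coverage = len(covered) / len(all_branches)
print(f"branch coverage: {coverage:.0%}")   # 67% -- the 'zero' path was never tested
```

The uncovered branch is exactly the kind of gap coverage measurement exposes: no existing test would catch a bug in the `n == 0` path.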
Memory Leak refers to a defect in a program's dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
A Metric is a number that quantifies a relationship between two variables, while a Software Metric is a measure that quantifies some property of a piece of software or its specifications, such as status or results. Since quantitative methods have proved so powerful in the other sciences, computer science practitioners and theoreticians have brought similar approaches to software development. Common software metrics are: source lines of code, cyclomatic complexity, function point analysis, bugs per line of code, code coverage, number of lines of customer requirements, number of classes and interfaces, cohesion, and coupling.
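One of the metrics listed, cyclomatic complexity, can be approximated as one plus the number of decision points in a piece of code. The sketch below uses Python's standard `ast` module and a simplified count (if/for/while statements and boolean operators); production tools handle more node types.

```python
import ast

# Simplified McCabe-style cyclomatic complexity:
# 1 + the number of decision points found in the parsed source.

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While)):
            decisions += 1                      # each branch/loop adds a path
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1   # each and/or adds a sub-decision
    return 1 + decisions

code = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # 3: one base path plus two if-decisions
```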
Milestones are the intermediate points on the timeline in a project at which defined deliverables and results must be ready.
Moderator refers to the leader and main person responsible for an inspection or other review process.
Monitor is a software tool or hardware device that runs concurrently with the component or system under test and supervises, records, and/or analyses the behavior of the component or system.
Monkey Testing is a form of testing that runs with no specific test in mind. It involves testing an application on the fly, i.e. just a few tests here and there to ensure that the application does not crash.
Here the monkey is the producer of any input data (either file data or input-device data). For example, a monkey test can enter random strings into text boxes to exercise handling of arbitrary user input, or provide garbage files to check loading routines that have blind faith in their data. While doing a monkey test we can press some keys randomly and check whether the software fails or not.
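The "random strings into a text box" idea translates directly into code: feed randomly generated input to a routine and check only that it never crashes with an unexpected exception. `parse_age` below is a hypothetical input handler standing in for the application under test.

```python
import random
import string

# Minimal monkey-test sketch: throw random strings at a parsing routine.
# The only assertion is survival -- rejecting bad input with ValueError
# is fine; any other exception is a monkey-test failure.

def parse_age(text):
    text = text.strip()
    if not text.isdigit():
        raise ValueError("not a number")
    return int(text)

random.seed(42)   # fixed seed so a failure is reproducible
for _ in range(1000):
    length = random.randint(0, 20)
    garbage = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_age(garbage)
    except ValueError:
        pass      # expected rejection of malformed input
    # An IndexError, TypeError, etc. would propagate here as a failure.

print("monkey test survived 1000 random inputs")
```

Seeding the generator is a common refinement: when a random input does crash the program, the same run can be replayed to debug it.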
Multiple Condition Coverage:
Multiple Condition Coverage refers to the percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. Multiple condition coverage reports whether every possible combination of Boolean sub-expressions has occurred. 100% multiple condition coverage implies 100% condition determination coverage.
Multiple Condition Testing:
Multiple Condition Testing refers to a white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
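The two definitions above can be made concrete: for a decision with n atomic conditions, multiple condition testing generates all 2^n combinations of their outcomes. The guard function below is hypothetical; the enumeration via `itertools.product` is the technique itself.

```python
from itertools import product

# Multiple condition testing sketch: a decision with two atomic
# conditions needs 2^2 = 4 test cases for 100% multiple condition
# coverage of that one statement.

def can_withdraw(balance_ok, card_valid):
    return balance_ok and card_valid

cases = list(product([True, False], repeat=2))      # all outcome combinations
results = {combo: can_withdraw(*combo) for combo in cases}

for combo, outcome in results.items():
    print(combo, "->", outcome)
```

The cost grows exponentially with the number of conditions, which is why weaker criteria such as condition determination (MC/DC) coverage are often used instead for complex decisions.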
Mutation Testing is a method for assessing whether a set of test data or test cases is effective, by deliberately introducing small code changes (bugs, or "mutants") and re-running the original test data / test cases to determine whether the bugs get detected.