Software Quality Metrics Used by Expert Test Managers
Article by: Kushal Kar & Swastika Nandi – guest authors.
A software metric is a generic name for any quantitative measure of the quality of a software product. A metric can reflect the status of a phase of the software development cycle or its results, such as defect counts or effort figures.
A good project manager is one who applies the principles of metrics to plan, organize, and control project deliverables in quantifiable, measurable terms.
Some of the software metrics extensively used by ISTQB-certified expert test managers are described below; a short worked example in Python follows the table.
Sr. | Description of Metric | How to Measure (Formulae) |
1 | Test Coverage | Number of units (KLOC/FP) tested / total size of the system |
2 | Quality of Testing | {No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery)} * 100 |
3 | Effort Variance | {(Actual Effort – Estimated Effort) / Estimated Effort} * 100 |
4 | Schedule Variance | {(Actual Duration – Estimated Duration) / Estimated Duration} * 100 |
5 | Test Effectiveness | t / (t + UAT)
Explanation: Here "t" is the total number of defects reported during testing, and "UAT" is the total number of defects reported during user acceptance testing. |
6 | Defect Density | No. of defects / Size (FP or KLOC)
Explanation: Here FP = Function Points and KLOC = thousands of lines of code. |
7 | Weighted Defect Density | (5 * Count of fatal defects) + (3 * Count of major defects) + (1 * Count of minor defects)
Explanation: Here the weights 5, 3, and 1 correspond to the severity of the defect. |
8 | Schedule Slippage | {(Actual End Date – Planned End Date) / (Planned End Date – Planned Start Date)} * 100 |
9 | Rework Effort Ratio | (Actual rework effort spent in that phase / Total actual effort spent in that phase) * 100 |
10 | Requirement Stability Index | {1 – (Total number of changes / Number of initial requirements)} |
11 | Requirement Creep | (Total Number of requirements added/Number of initial requirements) * 100 |
12 | Correctness | Defects / KLOC or Defects / Function Points |
13 | Maintainability | MTTC (Mean Time to Change): once an error is found, the time it takes to fix it in production. |
14 | Integrity | Integrity = Summation [(1 – threat) × (1 – security)]
Explanation: Here threat is the probability that an attack of a specific type will occur within a given time, and security is the probability that such an attack will be repelled. |
15 | Usability | Results of a user questionnaire survey give an indication of usability.
Comment: How easy it is for users to use the system and how quickly they can learn to operate it. |
16 | CSAT – Customer Satisfaction Index | Indicators include the call volume to the customer service hotline and availability (percentage of time the system is available versus the time it is needed to be available). |
17 | Reliability | # Mean Time Between Failures (MTBF): total operating time divided by the number of failures; MTBF is the inverse of the failure rate. # Mean Time To Repair (MTTR): total elapsed time from the initial failure to the reinstatement of system status; Mean Time To Restore includes Mean Time To Repair. # Reliability Ratio = MTBF / MTTR
Explanation: Reliability is the probability that an item will perform a required function under stated conditions for a stated period of time. The probability of survival, R(t), plus the probability of failure, F(t), is always unity: F(t) + R(t) = 1, or F(t) = 1 – R(t). |
18 | Defect Ratios | # Defects found after product delivery per function point # Defects found after product delivery per LOC # Pre-delivery defects to annual post-delivery defects # Defects per function point of the system modifications |
19 | Number of Tests per Unit Size | Number of test cases per KLOC/FP |
20 | Acceptance Criteria Tested | Acceptance criteria tested / total acceptance criteria |
21 | Defects Per Size | Defects detected / system size |
22 | Testing Cost | (Cost of testing / Total cost) * 100 |
23 | Cost to Locate Defect | Cost of testing / Number of defects located |
24 | Achieving Budget | Actual cost of testing / Budgeted cost of testing |
25 | Defects detected in testing | Defects detected in testing / total system defects |
26 | Defects detected in production | Defects detected in production/system size |
27 | Effectiveness of Testing to Business | Loss due to problems / Total resources processed by the system |
28 | System Complaints | Number of third-party complaints / Number of transactions processed |
29 | Scale of Ten | Assessment of testing by assigning a rating on a scale of 1 to 10 |
30 | Source Code Analysis | Number of source code statements changed / Total number of tests |
31 | Test Planning Productivity | No. of test cases designed / Actual effort for design and documentation |
32 | Test Execution Productivity | No. of test cycles executed / Actual effort for testing |
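To make the arithmetic concrete, here is a minimal Python sketch (not from the original article) that computes a handful of the metrics above. All function names and sample figures are illustrative assumptions, not part of the source.

```python
# A minimal sketch showing how a few of the metrics above could be
# computed. All names and sample values below are illustrative
# assumptions chosen only to demonstrate the arithmetic.

def effort_variance(actual: float, estimated: float) -> float:
    """Metric 3: {(Actual Effort - Estimated Effort) / Estimated Effort} * 100."""
    return (actual - estimated) / estimated * 100

def test_effectiveness(t: int, uat: int) -> float:
    """Metric 5: t / (t + UAT), where t = defects found during testing
    and UAT = defects found during user acceptance testing."""
    return t / (t + uat)

def defect_density(defects: int, size: float) -> float:
    """Metric 6: No. of defects / Size (FP or KLOC)."""
    return defects / size

def weighted_defect_density(fatal: int, major: int, minor: int) -> int:
    """Metric 7: (5 * fatal) + (3 * major) + (1 * minor)."""
    return 5 * fatal + 3 * major + 1 * minor

def requirement_stability_index(changes: int, initial: int) -> float:
    """Metric 10: 1 - (total number of changes / initial requirements)."""
    return 1 - changes / initial

def reliability_ratio(mtbf: float, mttr: float) -> float:
    """Metric 17: MTBF / MTTR."""
    return mtbf / mttr

if __name__ == "__main__":
    # Hypothetical project figures.
    print(f"Effort variance:         {effort_variance(1150, 1000):.1f}%")        # 15.0%
    print(f"Test effectiveness:      {test_effectiveness(180, 20):.2f}")         # 0.90
    print(f"Defect density:          {defect_density(200, 50):.1f} per KLOC")    # 4.0
    print(f"Weighted defect density: {weighted_defect_density(2, 10, 30):d}")    # 70
    print(f"Req. stability index:    {requirement_stability_index(12, 120):.2f}")  # 0.90
    print(f"Reliability ratio:       {reliability_ratio(400, 8):.1f}")           # 50.0
```

Running the script prints each metric for the sample figures; in practice the inputs would come from the project's test management and defect tracking records.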
About the Authors: Kushal Kar & Swastika Nandi, QA Analysts, are the guest authors of this article and are solely responsible for the ownership of its contents.