Metrics are a useful tool for evaluating the health, quality, and progress of an automated software testing effort. Metrics can also be used to examine past performance, current status, and future trends. Without metrics, it would be almost impossible to quantify, explain or demonstrate quality.
Metrics also help demonstrate return on investment (ROI). As management expert Peter F. Drucker famously observed, in words that apply equally well to software testing:
“What’s measured improves.” – Peter F. Drucker
In previous blogs, we have discussed several types of software testing metrics specific to automation. Some examples include: Percent Automatable, Automation Progress, and Percent of Automatable Test Coverage. There are also a few more common test metrics that do not necessarily apply to just automation, but are often associated with software testing in general. However, since they are useful in the broader spectrum of software testing, it is worth mentioning them here. The following general software testing metrics are divided into three categories:
- Coverage refers to meaningful parameters for measuring test scope and success.
- Progress refers to parameters that help identify test progress to be matched against success criteria. Progress metrics are collected iteratively over time. They can be used to graph the process itself (e.g., time to fix defects, time to test, etc.).
- Quality refers to meaningful measures of excellence, worth, value, etc. of the testing product. It is difficult to measure quality directly; however, measuring the effects of quality is easier and possible.
The table below lists several additional common software testing metrics useful for the overall testing program. The table divides them into the categories listed above and provides a high-level description.
Common Software Testing Metrics*

| Metric | Description | Category |
| --- | --- | --- |
| Test Coverage | Total number of test procedures / total number of test requirements. Indicates planned test coverage. | Coverage |
| System Coverage Analysis | Measures the amount of coverage at the system interface level. | Coverage |
| Test Procedure Execution Status | Number of test procedures executed / total number of test procedures. Indicates the extent of the testing effort still outstanding. | Progress |
| Error Discovery Rate | Total number of defects found / number of test procedures executed. Uses the same calculation as the defect density metric; used to analyze and support a rational product release decision. | Progress |
| Defect Aging | Date defect was opened versus date defect was fixed. Provides an indication of defect turnaround time. | Progress |
| Defect Fix Retest | Date defect was fixed and released in a new build versus date defect was retested. Indicates whether the testing team is retesting fixes fast enough to yield an accurate progress metric. | Progress |
| Current Quality Ratio | Number of test procedures executed successfully (without defects) versus total number of test procedures. Indicates how much functionality has been successfully demonstrated. | Quality |
| Quality of Fixes | Total number of defects reopened / total number of defects fixed. Provides indications of development issues. | Quality |
| Quality of Fixes | Ratio of previously working functionality versus new errors introduced. Tracks how often previously working functionality was adversely affected by software fixes. | Quality |
| Problem Reports | Number of software problem reports broken down by priority. Counts the number of software problems reported, listed by priority. | Quality |
| Test Effectiveness | Assessed statistically to determine how well the test data has exposed defects contained in the product. | Quality |
| Test Efficiency | Number of tests required / number of system errors. | Quality |

*Adapted from Dustin, et al., Automated Software Testing, Addison-Wesley, 1999.
To assure the success of an automated software testing program, the goals need to be not only defined, but constantly tracked. Good software testing metrics are important to that effort. They provide objective, measurable, meaningful, simple, and easily obtainable data. Carefully defined metrics can aid in improving your organization’s automated testing process and tracking its status.
For more information on using metrics as a measure of quality during software testing, contact Innovative Defense Technologies (IDT), consult our previous blogs on this topic, or read the complete article, Useful Automated Software Testing Metrics.
Some information taken from: Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Upper Saddle River, NJ: Addison-Wesley, 2009. This book was authored by three current IDT employees and is a comprehensive resource on AST. Blog content may also reflect interviews with the authors.