Metrics can aid in improving your organization’s automated software testing processes. If you can measure something, you can quantify it. If you can quantify it, you can explain it in more detail and understand it better. And if you can explain it, you have a better chance of improving it, and so on.

This benefit is typically realized over multiple test cycles and project cycles. Automated testing metrics can aid in assessing whether progress, productivity, and quality goals are being met.

What is a Metric?

In its most basic definition, a metric is a standard of measurement. It can also be described as a system of related measures that facilitates the quantification of some particular characteristic. For our purposes, a metric is a measure that can be used to display past and present performance and/or to predict future performance.

What Are Automated Testing Metrics?

Automated testing metrics are metrics used to measure the performance (past, present, and projected) of the implemented automated software testing process.

What Makes A Good Automated Testing Metric?

As with any metrics, automated software testing metrics should be tied to clearly defined goals for the automation effort. It serves no purpose to measure something for the sake of measuring. To be meaningful, a metric should relate directly to the performance of the effort.

Prior to defining the automated testing metrics, there are some metric-setting fundamentals worth reviewing. Before measuring anything, set goals: what are you trying to accomplish? Goals are important; if you do not have goals, what is it that you are measuring? It is also important to track and measure continuously, on an ongoing basis. Based on the metrics' outcomes, you can then decide whether deadlines, feature lists, process strategies, and so on need to be adjusted. As a step toward goal setting, ask questions about the current state of affairs, and decide which questions will tell you whether you are tracking toward the defined goals. For example:

  • How much time does it take to run the test plan?
  • How is test coverage defined (KLOC, function points, etc.)?
  • How much time does it take to do data analysis?
  • How long does it take to build a scenario/driver?
  • How often do we run the test(s) selected?
  • How many permutations of the test(s) selected do we run?
  • How many people do we require to run the test(s) selected?
  • How much system time/lab time is required to run the test(s) selected?
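Answers to questions like these can be captured as simple, repeatable measurements and compared from cycle to cycle. As a minimal sketch (the field names and figures below are hypothetical illustrations, not from any particular project), a per-cycle record of test effort might look like:

```python
from dataclasses import dataclass

@dataclass
class TestCycleMetrics:
    """Raw measurements for one test cycle (hypothetical fields)."""
    run_time_minutes: float       # time to run the selected tests
    analysis_time_minutes: float  # time to do data analysis on the results
    people_required: int          # people needed to run the tests
    runs_per_cycle: int           # how often the tests are run in the cycle

    def person_minutes_per_cycle(self) -> float:
        """Total human effort spent running and analyzing tests per cycle."""
        per_run = (self.run_time_minutes + self.analysis_time_minutes) * self.people_required
        return per_run * self.runs_per_cycle

# Tracking the same measurement across cycles shows whether the
# automation effort is actually reducing effort over time.
before_automation = TestCycleMetrics(120, 60, 2, 5)
after_automation = TestCycleMetrics(45, 30, 1, 5)
print(before_automation.person_minutes_per_cycle())  # 1800.0
print(after_automation.person_minutes_per_cycle())   # 375.0
```

The point is not this particular formula but the practice: pick measurements that answer your goal-setting questions, record them the same way each cycle, and compare trends.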

In essence, a good automated testing metric has the following characteristics:

  • is objective
  • is measurable
  • is meaningful
  • has data that is easily gathered
  • can help identify areas of test automation improvement
  • is simple

A good metric is clear and objective, not subjective, and it adds meaning to the project. Gathering the data for a good metric should not take enormous effort or resources. Lastly, it should be simple to understand. More to come on metrics in upcoming blog posts.