It has been said that quality is never an accident. We agree. Quality is the result of intelligent planning, hard work, and thorough execution, and in no area is this more true than in software development.

Producing software that runs perfectly is nearly impossible. But when it comes to building mission-critical systems, such as those aboard military vessels, there is no room for error, and a commitment to quality is essential. How is quality measured, and what tools can help achieve it?

In brief, high-quality software must meet its expected functionality and performance objectives efficiently. These objectives are measured through a variety of metrics and their associated tests. Some examples include:

  • Reliability models
  • Defect density rate tests
  • Mean time to failure (MTTF) metrics
  • Mean time to critical failure (MTTCF) metrics
  • Customer satisfaction indexes

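To make a couple of these concrete, the sketch below computes defect density, MTTF, and MTTCF from invented test data. The formulas are the standard definitions; the numbers are ours, for illustration only.

```python
# Illustration only: defect density, MTTF, and MTTCF computed
# from invented test data using the standard definitions.

defects_found = 42     # defects discovered during a test cycle
size_ksloc = 120.0     # program size in thousands of source lines of code

# Defect density: defects found per KSLOC
defect_density = defects_found / size_ksloc

# Operating hours observed between successive failures during test
hours_between_failures = [310.0, 142.5, 96.0, 480.0, 221.0]
# Intervals (in hours) between failures classified as critical
hours_between_critical_failures = [752.5, 497.0]

# MTTF: mean operating time between any two failures
mttf = sum(hours_between_failures) / len(hours_between_failures)
# MTTCF: the same calculation, restricted to critical failures
mttcf = sum(hours_between_critical_failures) / len(hours_between_critical_failures)

print(f"Defect density: {defect_density:.2f} defects/KSLOC")
print(f"MTTF:           {mttf:.1f} hours")
print(f"MTTCF:          {mttcf:.1f} hours")
```
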
The use of Automated Software Testing (AST) enables more testing to be completed and gives software developers and testers a powerful addition to their testing toolbox. Automated tests offer the following benefits:

  • Increased test coverage (more data variations and test scenarios; see the sketch after this list)
  • More efficient use of test time (automated tests can run during off hours and are often faster than manual tests; testers can focus on other tasks)
  • More consistent tests
  • More easily replicated tests
  • Coverage in areas that are almost impossible to test manually (performance tests, memory-leak detection, concurrency tests, etc.)

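As a minimal sketch of the "more data variations" point, here is a data-driven test written with Python's standard unittest module. The parse_speed function under test and its cases are hypothetical, invented for this example.

```python
import unittest

def parse_speed(text):
    """Hypothetical function under test: parse a speed reading in knots."""
    value = float(text)
    if not 0.0 <= value <= 60.0:
        raise ValueError(f"speed out of range: {value}")
    return value

class ParseSpeedTest(unittest.TestCase):
    def test_valid_inputs(self):
        # One test body, many input variations: adding a case is one
        # line, so data coverage grows cheaply once the test is automated.
        cases = [("0", 0.0), ("12.5", 12.5), ("60", 60.0)]
        for text, expected in cases:
            with self.subTest(text=text):
                self.assertEqual(parse_speed(text), expected)

    def test_invalid_inputs(self):
        # Out-of-range and malformed readings must be rejected
        for text in ["-1", "61", "fast"]:
            with self.subTest(text=text):
                with self.assertRaises(ValueError):
                    parse_speed(text)

if __name__ == "__main__":
    unittest.main()
```

Because a suite like this is just code, it can run unattended during off hours and be replayed identically on every build.
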
When testing is more efficient, it either saves time or allows for more testing to be conducted in the same amount of time, enabling more software defects to be discovered and addressed. The question all developers and testers eventually have to ask and answer is, “How do we know when we are done testing?” Software reliability models provide a projection of how many defects remain undiscovered and can serve as a basis for answering this question. But the reality is that testing often stops when products need to be delivered to the customer.

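One widely used reliability growth model is the Goel-Okumoto model, in which the expected number of defects found by time t is mu(t) = a(1 - e^(-bt)); fitting a (total expected defects) and b (detection rate) to observed counts yields a projection of the defects still undiscovered. The sketch below is our illustration of the idea, with invented weekly counts, not code from any particular project.

```python
# Sketch: fit the Goel-Okumoto model mu(t) = a * (1 - exp(-b * t))
# to cumulative defect counts, then project the defects remaining.
# The weekly counts are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative defects discovered by time t."""
    return a * (1.0 - np.exp(-b * t))

# Test week and cumulative defects discovered by the end of that week
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_defects = np.array([12, 22, 30, 36, 41, 44, 46, 48], dtype=float)

# Fit a (total expected defects) and b (per-week detection rate)
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_defects, p0=(60.0, 0.3))

remaining = a_hat - cum_defects[-1]
print(f"Estimated total defects:     {a_hat:.1f}")
print(f"Estimated defects remaining: {remaining:.1f}")
```
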
Even without using reliability models, testers can use results from past deliveries to begin analyzing where problems are likely to be found. Were any problems reported regarding functionality? Did any problems occur related to sequences of operator actions? Did any issues arise due to incompatibility with configurations?

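One lightweight way to start that review is to tally past trouble reports by category and see where they cluster. A sketch with invented report data follows; the categories mirror the questions above.

```python
# Sketch: tally past trouble reports by category to see where new
# testing effort is likely to pay off. The data is invented.
from collections import Counter

past_reports = [
    ("TR-101", "functionality"),
    ("TR-102", "operator-sequence"),
    ("TR-103", "configuration"),
    ("TR-104", "functionality"),
    ("TR-105", "configuration"),
    ("TR-106", "configuration"),
]

by_category = Counter(category for _, category in past_reports)
for category, count in by_category.most_common():
    print(f"{category:18s} {count}")
```
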
Both the use of metrics and the review of past results can provide insight into the potential impact that additional testing and the incorporation of Automated Software Testing would have on the final product. Taking all of these factors into consideration leads to wiser choices about software testing and, ultimately, to higher quality. It is worth repeating: quality is never an accident.

Some information taken from: Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Upper Saddle River, NJ: Addison-Wesley, 2009. This book was authored by three current IDT employees and is a comprehensive resource on AST. Blog content may also reflect interviews with the authors.