Software testing is a dynamic process. On large, highly complex software systems especially, the volume of testing that could be conducted is seemingly endless. For testing to be productive, a test plan that includes quality guidelines must be defined. The plan should specify a measurable way to know when testing is complete, also known as exit criteria. Automation facilitates several steps of this process.

Automated software testing tools generally have mechanisms that generate and maintain reports or logs of test pass/fail results each time a test is run. Since testing resources are finite, the test team must establish test completion criteria, which will define when the software has been adequately tested. If exit criteria are ambiguous, the test team will not know when the test effort is finished. Verification of the exit criteria can also be automated.

An example of exit criteria might include a statement that all defined test procedures based on requirements must be executed without any significant discrepancies and that any major defects must be fixed before release. This can be further qualified by test level (unit, integration, system) and by requirements verification coverage.
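An exit-criteria check along these lines can be automated against the pass/fail reports that testing tools produce. The sketch below is illustrative only; the data layout and field names ("status", "severity", "state") are assumptions for the example, not the format of any particular tool.

```python
# Hypothetical sketch of an automated exit-criteria check.
# Assumed inputs: a list of test-procedure results and a list of
# defect records, as simple dictionaries.

def exit_criteria_met(test_results, defects):
    """Return True when every requirement-based test procedure passed
    and no major defects remain open."""
    all_tests_passed = all(t["status"] == "pass" for t in test_results)
    no_open_major_defects = not any(
        d["severity"] == "major" and d["state"] == "open" for d in defects
    )
    return all_tests_passed and no_open_major_defects

# Example run against a small, made-up report:
results = [{"id": "TP-1", "status": "pass"}, {"id": "TP-2", "status": "pass"}]
defects = [{"id": "D-7", "severity": "major", "state": "fixed"}]
print(exit_criteria_met(results, defects))  # True
```

A real implementation would read these records from the test tool's logs or defect tracker rather than hard-coding them.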

Test results analysis can help to identify the defects that need to be fixed before production, versus those that can be addressed later in a release or patch. Additional metrics should be evaluated as part of the exit criteria. For example:

  • What is the rate of defects uncovered during regression testing following fixes?
  • How often do defect corrections fail the retest?
  • What is the average newly opened defect rate? Is it declining?
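Metrics like these are straightforward to compute from test-cycle data. The sketch below uses made-up numbers and an assumed data layout purely to show the calculations.

```python
# Illustrative exit-criteria metric calculations; all figures and the
# data layout are hypothetical assumptions for this sketch.

regression_runs = {"tests_executed": 200, "new_defects_found": 6}
retests = {"attempted": 40, "failed": 3}
new_defects_per_week = [12, 9, 7, 4]  # newly opened defects, by week

# Rate of defects uncovered during regression testing following fixes
regression_defect_rate = (
    regression_runs["new_defects_found"] / regression_runs["tests_executed"]
)

# How often defect corrections fail the retest
retest_failure_rate = retests["failed"] / retests["attempted"]

# Is the newly opened defect rate declining week over week?
defect_rate_declining = all(
    later <= earlier
    for earlier, later in zip(new_defects_per_week, new_defects_per_week[1:])
)

print(f"{regression_defect_rate:.1%}")  # 3.0%
print(f"{retest_failure_rate:.1%}")     # 7.5%
print(defect_rate_declining)            # True
```

Tracking these values per build lets the team see trends rather than single data points, which is what the exit criteria really hinge on.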

Developers should be made aware of all the testing criteria established by the test team. Software quality benchmarks need not be unique to one project or system; they can be standardized within an organization, based on criteria established over the course of several projects.

Once the software build has met all of the exit criteria, user acceptance testing should be conducted; after all, “software will only be as successful as it is useful to the customers” (Dustin, 2009, p. 35).

Testing and analysis can consume up to half of a software development schedule. Meeting all the exit criteria, passing regression tests and retests after fixes, and conducting user acceptance testing are time-consuming but essential. Automated software testing can speed these processes, enabling testers to complete more tests and thereby increase software quality. Products like ATRT: Test Manager and ATRT: Analysis Manager empower testers with the latest testing technology and lead to better testing results. Contact IDT for more information on our patented ATRT technology and solutions.

Some information taken from: Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Upper Saddle River, NJ: Addison-Wesley, 2009. This book was authored by three current IDT employees and is a comprehensive resource on AST. Blog content may also reflect interviews with the authors.