Automated Software Testing: Pitfalls to Avoid

While most software developers and testers understand the value of Automated Software Testing, not all have been able to implement it successfully. Several years ago, IDT conducted a survey of test engineers and received over 700 responses.[i] When asked why, in their experience, automated software testing was not being used, the largest percentage stated that it was due to a lack of resources. The survey also asked engineers why they thought automation sometimes fails. The most frequent response was a lack of time.

So while many agreed that, in general, Automated Software Testing was the best way to approach testing, several factors stood in the way of executing it successfully, including lack of budget, time, and experience. Additional reasons given for why automation fails included:

  • A general lack of focus on software testing within R&D
  • Persistent misinterpretations and myths about AST
  • A lack of automated software testing processes
  • A lack of consideration of automation by the software developers
  • A lack of Automated Software Testing standards

In this and the next few blog posts, we will discuss each of these “pitfalls” further.

R&D Does Not Generally Focus on Testing Efforts

Research and development (R&D) has been fueling high tech innovation for some time. The investment in new technologies has been a boon for the industry. However, testing has not always been part of the equation. Given the pace of development, testing must keep up with technology, or success is not guaranteed…even for the most innovative ideas. Consider the following:

1. Software development and testing are driving the business. If a new technology meets a business need but isn’t tested thoroughly and implemented smoothly, it may be surpassed by another product. Being first to market with high quality is key.

2. Consider perceived versus actual quality. Defects that occur frequently and impact critical functionality are perceived by a customer as poor quality, even if additional less obvious defects also exist. Usage-based testing addresses the functionality issue and can improve perceived quality, resulting in more satisfied customers. Of course, in the end, both perceived and actual quality are significant.

3. Testing gets some of the blame. Testing often gets blamed for missed deadlines, over-budget projects, and defects uncovered in production. In our experience, however, the root causes are more often poor development practices and unrealistic deadlines, among other factors. Testing alone is not to blame.

4. Lack of system testing. Many engineers follow the best available software development processes and conduct thorough unit testing; however, integration and system testing are still often neglected.
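To make that last distinction concrete, here is a minimal sketch in Python. All names in it (parse_order, OrderStore, process_order) are hypothetical, invented for illustration: a unit test exercises one function in isolation, while an integration test checks that components work correctly together, which is the coverage that is often missing.

```python
def parse_order(line):
    """Parse a line like 'item, qty' into an (item, qty) tuple."""
    item, qty = line.split(",")
    return item.strip(), int(qty)

class OrderStore:
    """Trivial in-memory store standing in for a real database."""
    def __init__(self):
        self.orders = []

    def add(self, order):
        self.orders.append(order)

def process_order(line, store):
    """Parse a line and persist it -- the integration point."""
    store.add(parse_order(line))

# Unit test: parse_order alone, with no collaborators involved.
def test_parse_order():
    assert parse_order("widget, 3") == ("widget", 3)

# Integration test: parsing and storage exercised together.
def test_process_order():
    store = OrderStore()
    process_order("gadget, 2", store)
    assert store.orders == [("gadget", 2)]

if __name__ == "__main__":
    test_parse_order()
    test_process_order()
    print("all tests passed")
```

The unit test would still pass even if process_order were broken; only the integration test catches defects in how the pieces are wired together, and a system test would go further still, exercising the full deployed stack.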

A renewed focus on testing during the R&D phase can empower developers with the resources to implement high-quality software that meets customer expectations and satisfies the end-user. We believe best practices in software development include Automated Software Testing. For more information on leading-edge software testing solutions, contact Innovative Defense Technologies (IDT).

Some information taken from: Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Upper Saddle River, NJ: Addison-Wesley, 2009. This book was authored by three IDT employees and is a comprehensive resource on AST. Blog content may also reflect interviews with the authors.

[i] Implementing Automated Software Testing, pp. 69-70.