A lot is demanded of technology and automation. When an organization evaluates Automated Software Testing (AST) for its test effort, it often has to work through the myths that surround it. Otherwise, stakeholders may hold unrealistic expectations, and test engineers may give up on automation before it succeeds. Some people assume an automated test tool should accomplish everything from test planning to test execution with little manual intervention. During the transition to automation, it is important to manage expectations. Several common myths are outlined below.
Myth #1: Immediate Test Effort Reduction
Although a reduction in test effort is one of the realistic goals of automation, the result will not be immediate. Effective implementation of automation on a new project usually comes with a learning curve: if the team has not used automation before, it introduces a new approach to the test program, and it may take a while for testers to become efficient with a new tool. Test engineers and stakeholders should plan for this ramp-up period.
Myth #2: Immediate Reduction in Schedule
Another automated testing misconception is the expectation that introducing automation will immediately shorten the test schedule. As described above, the testing effort temporarily increases when automation is first introduced, so the anticipated schedule reduction will not be instant. Once an automated testing process has been established and effectively implemented, however, the project can expect a dramatic increase in productivity.
Myth #3: No Training Required
A variety of test tools are marketed with a pitch that indicates they do not require training. While it is certainly true that the usability factor varies widely from tool to tool, testers should plan on some training and a bit of a learning curve, regardless of the new testing product being used. (At IDT, we believe our software testing solutions are extremely efficient and easy to use, but we would not deny a learning curve, even for our own suite of ATRT technologies.)
Myth #4: Universal Application of AST
While automation is a boon to a software testing program, some tests are not good candidates for automation. When an automated GUI test tool is first introduced, it is beneficial to run compatibility tests against the application under test (AUT) to confirm that all objects and third-party controls are recognized. A simple example of a test that may not be a candidate for automation is verifying a document printout. The goal for test procedure coverage is for each test to exercise multiple items without duplicating other tests. The test team, possibly with the help of a software testing partner, should carefully analyze the application to decide which tests warrant automation and which should still be performed manually.
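To make the distinction concrete, here is a minimal, hypothetical sketch (the `tax_total` function and the checklist item are illustrative, not from any real project): a deterministic, repeatable business-rule check is a strong candidate for automation, while a physical printout inspection remains a manual step.

```python
def tax_total(subtotal, rate):
    """Deterministic business rule -- a good candidate for automation."""
    return round(subtotal * (1 + rate), 2)

# Automated checks: run identically on every build, no human needed.
assert tax_total(100.00, 0.07) == 107.00
assert tax_total(19.99, 0.07) == 21.39

# Manual checks: a printed document's layout still needs human eyes,
# so it stays on a checklist rather than in the automated suite.
MANUAL_CHECKLIST = ["Inspect printed invoice layout and margins"]
```

The dividing line in this sketch is repeatability and machine-verifiable output; anything requiring human judgment of a physical artifact stays manual.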
We believe best practices in software development include Automated Software Testing. For more information and realistic guidance on leading-edge software testing solutions, contact Innovative Defense Technologies (IDT).
Some information taken from: Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Upper Saddle River, NJ: Addison-Wesley, 2009. This book was authored by three IDT employees and is a comprehensive resource on AST. Blog content may also reflect interviews with the authors.