
Dispelling Automated Testing Myths

By Vinny Vallarine—It seems those of us in the software community fall into one of two camps with regard to automated testing. On one side are those of us who are fully engaged: we’ve embraced the concept of test automation, truly believe in it, and have successfully integrated its methodologies into our organizations. I like to say we have hit the believe button! On the other side are those who simply don’t believe automated testing can apply to their product or domain. There are a variety of reasons for this mindset, but overall, I’ve noticed that the non-believers among us feel this way because of several persistent automated testing myths. Let’s address a few of them here.

Myth #1:  My system is much too complex for automation.

A common misconception is that high system complexity precludes effective automated software testing. While a system’s degree of complexity can certainly pose challenges to the automation approach, complexity in and of itself is not a significant barrier. In fact, in our experience, as system complexity increases, the need for test automation increases with it.

In fact, as system complexity increases, test automation tends to offer a higher return on investment (ROI) than it would on simpler systems. This is true for several reasons, not least because manual testing becomes less and less efficient as systems become more and more complex. Testing complex systems does, however, require automated solutions capable of thriving in the realm of the complicated and intricate.

Myth #2: Automated tests aren’t robust enough to test my system.

This concern derives from the idea that automated testing consists of basic test steps stimulating the system in a very limited, linear, and sequential fashion. Good software practices have long included a stage in which developers write ‘drivers’ to stimulate their software and ‘stubs’ to receive, or ‘listen’ for, their program’s response. Dedicated test automation efforts are often perceived as comprising only these simpler artifacts, and thus become falsely associated with this much smaller-scoped solution. It’s this misconception that paints an inaccurate picture of what the field of test automation really is.
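For readers less familiar with the driver/stub pattern mentioned above, here is a minimal sketch in Python. All of the names (process_order, PaymentStub, run_driver) are hypothetical, invented purely for illustration; they are not part of any IDT product.

```python
class PaymentStub:
    """Stub: stands in for a real payment service and records what it receives."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        # Record the call and return a canned response instead of real behavior.
        self.charges.append(amount)
        return True


def process_order(amount, payment_service):
    """The code under test: validates an amount and charges a payment service."""
    if amount <= 0:
        return False
    return payment_service.charge(amount)


def run_driver():
    """Driver: stimulates the code under test, then inspects the stub's record."""
    stub = PaymentStub()
    ok = process_order(25.0, stub)
    return ok, stub.charges  # → (True, [25.0])
```

The driver pushes inputs in; the stub listens on the other side. The myth is in assuming that dedicated test automation stops at this level of sophistication.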

Test automation has evolved into a discipline in its own right, with its own methodologies, processes, and standards. Software test automation is not simply a series of basic test steps but rather a complex, layered software solution that both stimulates the system under test and perceives an enormous quantity of system data. We have successfully applied this approach to some of the DoD’s most complex systems.

Myth #3: We already test enough!

What ‘enough’ means can certainly be debated, but it’s fair to say that an organization can always improve some element of its testing. Whether it’s increasing requirements coverage, testing earlier in the development process, or increasing the amount of test data ‘driving’ the system under test, there are always areas to improve within one’s test program.

When an organization moves into the realm of true test automation, its former idea of what is enough suddenly seems enormously insufficient. The fact of the matter is that automation brings opportunities that are simply not possible when testing manually. Think millions of test scenario combinations instead of dozens; think continuous testing 24 hours a day instead of test shifts; think complete repeatability instead of trying to remember what was done to expose the bug.
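To see how automation reaches scenario counts a manual team never could, consider exhaustively combining just a few input parameters. The parameters and values below are hypothetical, chosen only to illustrate the combinatorics:

```python
from itertools import product

# Hypothetical system inputs -- three small parameters, for illustration only.
speeds = range(100)                     # 100 values
headings = range(360)                   # 360 values
modes = ["search", "track", "engage"]   # 3 values


def generate_scenarios():
    """Yield every combination of the parameters above as a test scenario."""
    for speed, heading, mode in product(speeds, headings, modes):
        yield {"speed": speed, "heading": heading, "mode": mode}


# 100 * 360 * 3 = 108,000 scenarios from just three parameters; add one more
# parameter with a few dozen values and the count climbs into the millions.
total = 100 * 360 * 3
```

A manual tester might exercise a few dozen of these combinations; an automated harness can run through all of them, repeatably, overnight.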

Utilizing its collection of robust, modular, and configurable software testing technologies, Innovative Defense Technologies (IDT) has been continually effective in unseating the preceding myths. Let us help you hit the believe button!

Vinny Vallarine is a Senior Software Engineer and Technical Lead at Innovative Defense Technologies (IDT) in Arlington, Virginia. He has contributed to the development of ATRT: Test Manager since its inception.