
Refactor Less with Instant Feedback & Automated Testing

By Vinny Vallarine

The further into production a software problem is found, the harder and more costly it is to fix, especially when large refactors are required to reach the proper solution. In an ideal world, developers writing small pieces of functional code would get instant feedback, learning immediately whether their code works and making modifications early in the development process. Sound impossible? It's actually quite feasible given the right process and tools. The automated software testing paradigm makes such a process possible.

Consider Combat Systems

Imagine the following scenario in the domain of Navy Combat Systems. You're coding a display module that must display and classify potential targets in a critical region of the battlefield. Your task is to write the algorithms that classify those targets as friendly or hostile and determine the degree of the threat, given each target's speed, direction, and other characteristics. The low-level algorithms you need to implement are likely very complex mathematically, but it's the vast amount of input data required from the rest of the system that may pose the biggest testing challenge. The incoming data can be difficult to simulate given the multitude of permutations, so a unit-test-level effort, while still a vital step in the process, only gets you so far.
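
To make the kind of logic involved concrete, here is a minimal, purely hypothetical sketch of such a classification routine in Python. The Track fields, the function name, and the threat levels are all assumptions made for illustration; this is not code from an actual combat system.

```python
from dataclasses import dataclass

# Hypothetical track report; the fields mirror the kinds of inputs described
# above (speed, direction, plus other characteristics the wider system supplies).
@dataclass
class Track:
    speed_knots: float
    heading_deg: float
    closing: bool                  # True if the track is heading toward the protected region
    emitter_known_friendly: bool   # True if the emitter matches a known friendly profile

def classify_track(track: Track) -> tuple[str, int]:
    """Return (classification, threat_level) for a single track.

    Deliberately simplistic: real classification logic would weigh far more
    inputs and far more permutations of system state.
    """
    if track.emitter_known_friendly:
        return ("friendly", 0)
    # Fast, closing, unidentified tracks rank as the highest threat.
    if track.closing and track.speed_knots > 400:
        return ("hostile", 3)
    if track.closing:
        return ("hostile", 2)
    return ("unknown", 1)
```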

A typical development-test cycle requires developers to have much of a module written before it is ever tested system-wide. While low-level unit tests can prove that discrete pieces of logic work properly, full system-wide execution is often needed to test the module adequately. Only then, when a near-tactical system state has been replicated, is the module truly tested. It follows that engaging in system-wide testing earlier and more often provides better feedback, reduces how often code must be rewritten, and eliminates much of the time spent on costly refactoring.
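
Continuing the hypothetical example, a low-level unit test for the classify_track routine sketched above shows what proving discrete logic in isolation looks like; it says nothing about how the module behaves once real track data flows in from the rest of the system. The threat_classifier module name is an assumption.

```python
import unittest

# Assumes the Track and classify_track definitions from the sketch above live
# in a hypothetical module named threat_classifier.
from threat_classifier import Track, classify_track

class ClassifyTrackTests(unittest.TestCase):
    """Unit-level checks: each test exercises one discrete branch of the logic."""

    def test_known_friendly_is_never_a_threat(self):
        track = Track(speed_knots=250, heading_deg=90,
                      closing=True, emitter_known_friendly=True)
        self.assertEqual(classify_track(track), ("friendly", 0))

    def test_fast_closing_unknown_is_highest_threat(self):
        track = Track(speed_knots=600, heading_deg=180,
                      closing=True, emitter_known_friendly=False)
        self.assertEqual(classify_track(track), ("hostile", 3))

if __name__ == "__main__":
    unittest.main()
```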

While earlier and more frequent system-level testing benefits any product's development, achieving the near-tactical system state required for such tests is often the show-stopper. These systems are frequently so complex that they require multiple knowledgeable personnel actively driving the system to the correct state, along with a multitude of simulators mimicking the other subsystems of the tactical environment. All of this is expensive in both dollars and time, and so it is usually reserved for formal testing farther down the production stream. For this very reason, the system-level testing process must leverage automation technologies.

Automated Software Testing Methodology

Committing to an automated software testing methodology can help your organization address the problem of constant refactoring. At IDT, we have developed Automated Test and Re-Test (ATRT), an innovative testing technology suite. One of the key products in this suite, ATRT: Test Manager, enables the creation and maintenance of complex, system-level automated test cases that drive a system to its near-tactical state, verifying critical behavior along the way. Sets of prebuilt automated test cases could relieve much of the expense of recreating the tactical environment, and, if designed and maintained correctly, they can be executed with a single button click.
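
ATRT's own interface is not shown here, but the "single button click" idea maps naturally onto any mechanism that runs a prebuilt suite of system-level test cases in one shot. The sketch below is purely illustrative and is not ATRT's implementation; the system_tests directory name is an assumption.

```python
# Illustrative only: a trivial runner that executes a directory of prebuilt
# system-level test cases with one command, analogous to the single-button
# execution described above.
import sys
import unittest

def main() -> int:
    # Discover every test module under system_tests/ (hypothetical directory name).
    suite = unittest.defaultTestLoader.discover("system_tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    sys.exit(main())
```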

Interested in learning more about ATRT? Contact us for more information.

Vinny Vallarine is a Senior Software Engineer and Technical Lead at Innovative Defense Technologies (IDT) in Arlington, Virginia. He has been a contributing member to the development of ATRT: Test Manager since its inception.