Automated Software Testing Gears

By Tony Reinert

As software products grow in both size and complexity, so does the effort required to test them. During the product development lifecycle, testing is typically conducted at the unit, class, subsystem, system, and system-of-systems levels. Oftentimes, the responsibility for this testing falls on multiple individuals or teams: developers may conduct the unit and class level tests, an integration team may conduct the subsystem and system level tests, and a multi-product team may test at the system-of-systems level. Great care needs to be taken to coordinate these test efforts and prevent stove-piping of test planning. Automated testing solutions should also be explored.

Scalable Test Design

A scalable test is one that can be reused across multiple levels of testing. To design scalable tests, the test designer first abstracts the system behavior under test. That behavior is then exercised sufficiently for the given level of testing, and the process is repeated for each level of testing to be performed. Once complete, a tester should be able to trace the testing of that behavior through every level of tests being conducted. This process yields several key benefits:

  • Traceability of the tests needed to exercise a system behavior at any level of testing
  • Reuse of test artifacts such as objectives, expected results, test parameters, and input/output verification data
  • Collection of criticality and volatility metrics for a given system behavior (if tests are scaled all the way down to the class or unit level, these metrics can be collected for source code as well)
  • Identification of the regression tests needed after a bug fix or software enhancement
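The artifact reuse described above can be sketched in miniature. In the hypothetical Python example below (all names and values are illustrative, not taken from any particular product), one shared table of test parameters and expected results drives both a unit-level test and a subsystem-level test, giving traceability from the behavior to every level that exercises it.

```python
# Shared test artifact: parameters and expected results for one
# system behavior (a range unit conversion). All names are hypothetical.
RANGE_CONVERSION_CASES = [
    # (meters_in, expected_feet_out)
    (0.0, 0.0),
    (100.0, 328.084),
    (1000.0, 3280.84),
]

def meters_to_feet(meters):
    """Unit under test: converts a sensor range from meters to feet."""
    return meters * 3.28084

def test_unit_level():
    """Unit-level test: exercises the conversion function directly."""
    for meters, expected in RANGE_CONVERSION_CASES:
        assert abs(meters_to_feet(meters) - expected) < 0.01

class RangeDisplaySubsystem:
    """Stand-in subsystem that formats a converted range for display."""
    def report(self, meters):
        return f"{meters_to_feet(meters):.2f} ft"

def test_subsystem_level():
    """Subsystem-level test: the same cases, exercised through the
    subsystem, so a failure at either level traces to one behavior."""
    subsystem = RangeDisplaySubsystem()
    for meters, expected in RANGE_CONVERSION_CASES:
        assert subsystem.report(meters) == f"{expected:.2f} ft"
```

Because both tests consume the same case table, updating the expected results in one place propagates to every level that verifies the behavior.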

Top-down or Bottom-up Approach

There is no single right answer to where scalable test definition should begin. Top-down and bottom-up approaches are discussed here, but they are by no means the only options. A test organization may select a hybrid approach, or even start in the middle by developing subsystem tests and scaling them in both directions.

A top-down approach begins by defining behavior at the system or system-of-systems level. Once a test for that behavior is designed, decompose the behavior into how the individual subsystems would interact to support it, then into how their classes would interact, finally ending with the behavior of the individual classes and units of software. At each step of the decomposition, design a test that exercises the behavior at that level. Top-down approaches are advantageous when a product must meet predefined acceptance tests or in a test-driven development environment.
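A minimal sketch of the top-down flow, assuming a hypothetical position-display behavior (all names and values are illustrative): the system-level test is written first, and the subsystems it touches begin life as stubs that are filled in as the decomposition proceeds.

```python
# Top-down sketch (hypothetical names). The system-level test below is
# written first; the subsystem classes start as stubs that return fixed
# values and are replaced with real implementations as decomposition
# continues toward the class and unit levels.

class NavigationSubsystem:
    """Subsystem stub: decomposed from the system-level behavior."""
    def current_position(self):
        # Fixed stub value until the real subsystem is implemented.
        return (38.88, -77.03)

class DisplaySubsystem:
    """Subsystem stub: formats a position for the operator."""
    def show(self, position):
        lat, lon = position
        return f"LAT {lat:.2f} LON {lon:.2f}"

def test_system_level_position_display():
    """System-level test, written before the subsystems exist:
    the operator sees the vehicle's current position on the display."""
    nav = NavigationSubsystem()
    display = DisplaySubsystem()
    assert display.show(nav.current_position()) == "LAT 38.88 LON -77.03"
```

As each stub is replaced, new subsystem- and class-level tests are derived from the same behavior, keeping every level traceable to the original system-level test.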

Conversely, a bottom-up approach begins with the smallest unit of testing, typically the class or unit level. The behavior exercised by each unit or class is tested first. These tests then serve as building blocks that can be pieced together cohesively into subsystem tests, and the process is repeated for system and system-of-systems level tests. Bottom-up approaches are advantageous for new products where tests are developed as the product matures: initially you may only have class and unit level tests to work with, since the subsystems are still being integrated and a fully operational system is not yet available.
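The bottom-up composition can be sketched as follows (a hypothetical telemetry pipeline; all names are illustrative): unit-level checks are defined first, then reused as building blocks inside a subsystem-level test.

```python
# Bottom-up sketch (hypothetical names). The unit-level checks are
# written first and later composed into a subsystem-level test.

def parse_message(raw):
    """Unit under test: parses a 'key=value' telemetry message."""
    key, _, value = raw.partition("=")
    return key, float(value)

def apply_update(state, key, value):
    """Unit under test: applies a parsed update to subsystem state."""
    new_state = dict(state)
    new_state[key] = value
    return new_state

def check_parse(raw, expected):
    """Unit-level building block: verifies a single parse."""
    assert parse_message(raw) == expected

def check_apply(state, key, value, expected):
    """Unit-level building block: verifies a single state update."""
    assert apply_update(state, key, value) == expected

def test_subsystem_pipeline():
    """Subsystem-level test pieced together from the unit-level
    blocks: a message flows through parsing into the state store."""
    check_parse("altitude=1200.5", ("altitude", 1200.5))
    check_apply({}, "altitude", 1200.5, {"altitude": 1200.5})
    # End-to-end through the subsystem path:
    key, value = parse_message("altitude=1200.5")
    assert apply_update({}, key, value) == {"altitude": 1200.5}
```

The same composition step repeats upward: subsystem tests like this one become the building blocks for system-level tests.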

Applying Automated Testing

Deploying an automated solution across all levels can benefit product testing in numerous ways:

  • Decrease time to market for bug fixes and enhancements (Automated testing can run 24/7 and can complete the appropriate regression testing needed to test the updates to a software product.)
  • Produce consistent test artifacts and test results, enabling easier trend analysis and better test planning inputs
  • Reduce testing costs by requiring fewer testing resources

While various automated testing tools exist on the market, a solution based on IDT’s Automated Test and ReTest (ATRT) technology can deploy automated testing across all levels of testing. It can run NUnit tests and record their results. It can stimulate subsystems through either message injection or user interface manipulation. Finally, it can either simulate an operator testing a system or monitor a tester’s manual testing of the system.

Tony Reinert is a senior software engineer at Innovative Defense Technologies (IDT). He is a contributing member of the ATRT: Test Manager and the ATRT: Analysis Manager development teams.