Building Effective Automated Tests
Limiting the Scope of Automated Test Steps
The most important part of automated test design is limiting the scope of each automated test step. A test step should test only a specific behavior, not an entire area of a system under test (SUT). Test steps should be further broken up into functions that can be reused to test multiple behaviors.
For example, imagine you need to test functionality of a virtual vending machine. A poorly designed test step is below:
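The article's original example is not reproduced here; the following is a minimal sketch consistent with the critique that follows, assuming a hypothetical VendingMachine class and API:

```python
class VendingMachine:
    """Minimal stand-in for the system under test (SUT); the real
    interface is an assumption for illustration."""
    def __init__(self):
        self.balance_cents = 0

    def insert_coin(self, value_cents):
        self.balance_cents += value_cents


machine = VendingMachine()

def poorly_designed_step():
    # Action 1: insert a penny
    machine.insert_coin(1)
    # Action 2: insert a nickel
    machine.insert_coin(5)
    # Action 3: insert a dime
    machine.insert_coin(10)
    # Action 4: verify only the running total. A penny/dime mix-up
    # (10 + 5 + 1) would still sum to 16, so it would pass unnoticed,
    # and the check depends on actions 1-3 and their execution order.
    assert machine.balance_cents == 16
    # No reset: the balance carries over into the next run.

poorly_designed_step()        # passes the first time
try:
    poorly_designed_step()    # balance is now 32, so the check fails
    second_run_passed = True
except AssertionError:
    second_run_passed = False
```

Running the step a second time fails, because the verification in action 4 assumes the machine starts empty.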
This test step tries to accomplish too much and is not easily extended. In this case, the machine could be mixing up pennies and dimes and the operator would never know. In addition, the user would have to change the verification function (action 4) to test additional coin types, and would have to insert the new coins at a specific point in the test step. This is confusing because the verification depends on actions 1, 2, and 3 of the test step, and on the order in which they execute. Finally, the step does not reset the system to an initial state. This is not immediately evident, since the test runs to completion successfully the first time; if the tester runs it again, however, the verification fails.
Developing More Focused Automated Test Steps
A more effectively designed alternative to the test step example above appears below:
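Again, the original example is not shown here; the sketch below illustrates the same redesign under the same assumed VendingMachine API, with one focused step per coin type and a reusable verification function that resets the SUT:

```python
class VendingMachine:
    """Minimal stand-in for the SUT; the interface is an assumption
    for illustration."""
    def __init__(self):
        self.balance_cents = 0

    def insert_coin(self, value_cents):
        self.balance_cents += value_cents

    def reset(self):
        self.balance_cents = 0


machine = VendingMachine()

def verify_coin_registered(value_cents):
    """Reusable function: insert one coin, verify it, reset the SUT."""
    machine.insert_coin(value_cents)
    assert machine.balance_cents == value_cents
    machine.reset()  # restore the initial state for the next step

# One focused step per behavior; supporting a new coin type means
# adding a step, not changing the existing ones.
def test_penny():
    verify_coin_registered(1)

def test_nickel():
    verify_coin_registered(5)

def test_dime():
    verify_coin_registered(10)

# Because each step restores the initial state, the steps can be
# repeated or reordered freely.
for step in (test_dime, test_penny, test_nickel):
    step()
    step()
```

Each step verifies exactly one coin type, so a mixed-up denomination is caught immediately rather than hidden inside an aggregate total.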
This design accomplishes the verifications in a much cleaner way: each section's responsibility is clear, and extending the test is as simple as creating another test step. The modularity encourages the operator to build a good test. Because the system state is reset after each step, the next step always starts from a known initial state, so the user can repeat or reorder steps at any time without a redesign and without risking breaking the other steps.
There are many more nuances to developing focused, maintainable tests, but targeting test steps to specific behaviors is one of the most important concepts in effective automated test design.
Struan Clark is a software engineer at Innovative Defense Technologies (IDT) and is a member of the ATRT development teams.