A significant component of the software testing lifecycle involves the generation and targeted use of test data. Good test data enhances testability: accurate, representative data content improves functional coverage and reduces maintenance effort. Conversely, if test data is poor, both functional testing and performance evaluation can suffer.

A data dictionary and design documentation can be useful sources of sample data for the test effort. Data dictionaries offer data element names, data structures, and usage rules, among other useful information. Database schemas can reveal how the application interacts with data and what relationships exist between data elements.
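To make this concrete, here is a minimal sketch of how a test team might represent a data-dictionary entry in code and derive sample values directly from its usage rules. The element name, its allowed values, and the `sample_values` helper are all hypothetical illustrations, not part of any particular tool:

```python
import copy

# A hypothetical data-dictionary entry for one data element.
DATA_DICTIONARY = {
    "order_status": {
        "type": "string",
        "allowed": ["NEW", "PAID", "SHIPPED", "CANCELLED"],
        "nullable": False,
    },
}

def sample_values(element):
    """Derive candidate test values from an element's usage rules."""
    entry = DATA_DICTIONARY[element]
    values = copy.copy(entry["allowed"])
    if entry["nullable"]:
        values.append(None)  # a null is a legitimate value only if allowed
    return values
```

Driving sample data from the dictionary this way means the test data stays aligned with the documented rules as they change.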

The more of an application’s functional and nonfunctional requirements that are tested, the better. However, due to the vast number of possibilities, testing every variation is nearly impossible. Generating useful test data manually is tedious, time-consuming, and error-prone. Test data generation tools can work in conjunction with an Automated Software Testing effort to lighten the burden. In addition, various test design techniques, like data flow coverage, can help narrow the data down to its most effective portions.
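As a rough illustration of what a generation tool automates, here is a minimal Python sketch of a seeded record generator. The record fields and value pools are hypothetical; the key idea is that a fixed seed makes the generated data reproducible, so a failing test can be re-run against the exact same data:

```python
import random

def generate_customer_records(count, seed=0):
    """Generate deterministic pseudo-random customer records.

    Seeding the generator makes the data set reproducible across
    test runs, which matters when a failure must be re-created.
    """
    rng = random.Random(seed)
    first_names = ["Ana", "Bo", "Chen", "Dee"]
    regions = ["NE", "SE", "MW", "W"]
    records = []
    for i in range(count):
        records.append({
            "id": i + 1,
            "name": rng.choice(first_names),
            "region": rng.choice(regions),
            "balance": round(rng.uniform(0, 10_000), 2),
        })
    return records
```

A real generation tool adds constraint handling, referential integrity, and format awareness on top of this basic pattern.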

Test data requirements are important in ensuring that each system-level requirement is tested and verified. The test team faces a variety of considerations for the data it uses, as listed below. The use of automation simplifies this effort.

Depth: The test team must determine what volume of database records is needed to support the test effort. Early lifecycle tests can run against smaller data sets, but the depth of the database should grow as testing progresses. Testing with larger databases matters most when evaluating for performance or volume-related system issues.
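One simple way to manage depth is to define size tiers and pick a tier per test phase. The tier names and counts below are hypothetical; using a generator expression avoids holding the largest data set in memory all at once:

```python
# Hypothetical size tiers: small for early functional tests,
# large for later performance and volume tests.
SIZE_TIERS = {"smoke": 100, "functional": 10_000, "volume": 1_000_000}

def records_for_phase(phase):
    """Yield placeholder records sized for the given test phase."""
    count = SIZE_TIERS[phase]
    # A generator streams records instead of materializing them all.
    return ({"id": i, "payload": f"row-{i}"} for i in range(count))
```

The same generator then serves every phase; only the tier selection changes as the effort moves from functional to volume testing.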

Breadth: The test team should design tests that incorporate a variety of data values and therefore exercise a wider range of scenarios. All data categories need to be taken into account.
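A common way to get breadth without exploding the data volume is equivalence partitioning plus boundary values: pick one representative per data category and test the edges between categories. Here is a sketch for a hypothetical "age" field whose assumed valid range is 0 through 120:

```python
def age_partitions():
    """Representative values for a hypothetical 'age' input field,
    one or two per equivalence class plus the class boundaries."""
    return {
        "invalid_low": [-1],          # below the valid range
        "boundary_low": [0, 1],       # lower edge of the valid range
        "typical": [35],              # an ordinary valid value
        "boundary_high": [119, 120],  # upper edge (assumed max of 120)
        "invalid_high": [121],        # above the valid range
    }

def is_valid_age(age):
    # Hypothetical rule under test: ages 0..120 inclusive are valid.
    return 0 <= age <= 120
```

A handful of values chosen this way covers the same behavior as thousands of arbitrary ones.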

Scope: The test team needs to investigate the relevance of the data values. The range of the test data determines how completely and accurately the tests reflect the behavior being verified.

Execution: The test team needs to maintain data integrity while performing tests. Data should be segregated, modified, and then restored to its initial state in the database. Assigning different teams to test different functions helps in this effort.
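The snapshot-and-restore discipline can be sketched with a small context manager. The in-memory dictionary standing in for the database is purely illustrative; against a real database the same role is played by transactions or restore scripts:

```python
import copy
from contextlib import contextmanager

@contextmanager
def restored(database):
    """Snapshot a (hypothetical) in-memory database before a test and
    restore it afterward, so every test sees the same initial state."""
    snapshot = copy.deepcopy(database)
    try:
        yield database
    finally:
        database.clear()
        database.update(snapshot)

db = {"accounts": {"1001": 250.0}}
with restored(db) as d:
    d["accounts"]["1001"] -= 100.0   # the test mutates data...
# ...but the initial state is back once the block exits.
```

Because the restore happens in a `finally` clause, the data returns to its initial state even when the test raises an exception.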

Conditions: The test team should create data sets that reflect different conditions within the application under test (AUT). Preparing data in advance for various conditions simplifies testing activities later.
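Preparing condition data in advance can be as simple as keeping named data sets keyed by the condition they exercise. The condition names and record shapes below are hypothetical; the deep copy ensures one test cannot pollute the shared fixture for the next:

```python
import copy

# Hypothetical pre-built data sets, keyed by the AUT condition they exercise.
CONDITION_DATA = {
    "empty_cart":  {"items": [], "coupon": None},
    "single_item": {"items": [{"sku": "A1", "qty": 1}], "coupon": None},
    "with_coupon": {"items": [{"sku": "A1", "qty": 2}], "coupon": "SAVE10"},
}

def load_condition(name):
    """Return a fresh copy so tests cannot mutate the shared fixtures."""
    return copy.deepcopy(CONDITION_DATA[name])
```

When a new condition surfaces during testing, adding one entry to the table makes it available to every test that needs it.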

The test team needs to obtain, and modify as necessary, the test databases required to exercise the software applications and to develop the test environments. Having actual customer data available during testing can be a useful reality check.

An effective software testing strategy includes the careful preparation of test data. Automated testing products, like IDT’s ATRT: Test Manager, can help.

Some information taken from: Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software Testing: How to Save Time and Lower Costs While Raising Quality. Upper Saddle River, NJ: Addison-Wesley, 2009. This book was authored by three current IDT employees and is a comprehensive resource on AST. Blog content may also reflect interviews with the authors.