Maximize the use of your Automated Software Tests with Continuous Integration
Continuous integration (CI) is becoming an industry-adopted software engineering best practice in which any change to the code or environment is built, tested, and reported on in a timely, repeatable manner. In most cases this involves nightly software builds and nightly automated test runs, allowing quick-look reporting on any newly introduced issues. Virtualized test environments play a major role in this practice.
The typical development environment that makes this possible combines virtualized servers with networked workstations and laptops. Here is an example of such a CI implementation:
Figure 1: Continuous Integration Environment Example
1. Developers first review the system-level requirements and create a set of automated tests. Code is edited, compiled, and linked locally, then checked in to a version control repository hosted on a virtualized server, such as SVN. From there, other developers can check out updated code from the ‘trunk’ or from specific branches to support a different build.
2. Upon code check-in, a continuous build server, such as a virtualized Hudson Continuous Build server, is triggered to start a complete build/check/test/report cycle. Hudson will perform the following tasks:
- Update the latest code from SVN
- Compile the code and check for compile errors
- Link the code and check for any link errors
- Perform source code style checks and copyright checks
- Start a series of both internal and external regression tests
Internal regression tests execute automated tests to verify that key use cases produce the expected results; these tests also ensure that updated code has not adversely affected existing functionality.
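An internal regression test of this kind can be sketched with Python's standard unittest framework. The function under test, `compute_total`, is a hypothetical stand-in for application code; each test pins a key use case to a known, expected result, as described above.

```python
# Minimal sketch of an internal regression test. `compute_total` is a
# hypothetical application function, not from the article.
import unittest

def compute_total(prices, tax_rate):
    """Hypothetical application code exercised by the regression suite."""
    return round(sum(prices) * (1 + tax_rate), 2)

class KeyUseCaseTests(unittest.TestCase):
    """Each test verifies a key use case against a known result."""

    def test_basic_order(self):
        self.assertEqual(compute_total([10.0, 5.0], 0.08), 16.2)

    def test_empty_order(self):
        self.assertEqual(compute_total([], 0.08), 0.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Because the expected values are fixed, any code change that alters existing behavior fails the suite immediately, which is exactly the safety net the internal regression stage provides.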
3. External regression testing can then use any automated testing capability on another virtualized node to exercise the system as an end user would (i.e., through a GUI). Each test can analyze hundreds of system-level requirements, and each requirement may itself be verified hundreds to thousands of times. As with internal testing, external regression testing compares its results against a known set of results.
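The comparison against a known set of results can be sketched as a baseline diff. The result format here (one `requirement_id=value` pair per line) is an assumption for illustration, not a format from the article.

```python
# Sketch of comparing external regression output against a known-good
# baseline. The "REQ-nnn=value" line format is an assumed example.

def compare_to_baseline(actual_lines, baseline_lines):
    """Return (requirement, expected, actual) tuples for every mismatch."""
    baseline = dict(line.split("=", 1) for line in baseline_lines)
    mismatches = []
    for line in actual_lines:
        req_id, value = line.split("=", 1)
        expected = baseline.get(req_id)
        if expected != value:
            mismatches.append((req_id, expected, value))
    return mismatches

# Example: one requirement has drifted from its baseline value.
baseline = ["REQ-101=PASS", "REQ-102=PASS"]
actual = ["REQ-101=PASS", "REQ-102=FAIL"]
print(compare_to_baseline(actual, baseline))
```

An empty mismatch list means the run matches the baseline; anything else is reported back to the build server as a regression.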
4. The internal and external testing results are then reported back to the Hudson server. Upon successful completion of both internal and external regression testing, the Hudson server proceeds to build an installer package that will be available to end users at fielded locations. Additionally, key statistics on the entire process are gathered and saved for later retrieval.
5. Finally, Hudson provides the developer with reports on the entire sequence of testing. The developer can then use the results of the testing to make appropriate code changes.
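The whole build/check/test/report cycle above can be sketched as a simple pipeline runner. The stage names mirror the steps in the article; the stage implementations are stubs standing in for the real SVN update, compile, and test commands, and the runner halts at the first failure and collects timing statistics, as a CI server would.

```python
# Minimal sketch of a CI build/check/test/report cycle. Each stage is a
# callable returning True on success; the lambdas are stand-in stubs.
import time

def run_pipeline(stages):
    """Run stages in order, stop at the first failure, return a report."""
    report = []
    for name, stage in stages:
        start = time.time()
        ok = stage()
        report.append({"stage": name, "ok": ok,
                       "seconds": round(time.time() - start, 3)})
        if not ok:
            break  # a failed stage halts the cycle, as a CI server would
    return report

stages = [
    ("update", lambda: True),         # e.g. pull latest code from SVN
    ("compile", lambda: True),        # compile and check for errors
    ("link", lambda: True),           # link and check for errors
    ("style-check", lambda: True),    # style and copyright checks
    ("internal-tests", lambda: True), # internal regression tests
    ("external-tests", lambda: True), # external regression tests
    ("package", lambda: True),        # build the installer package
]
for entry in run_pipeline(stages):
    print(entry["stage"], "OK" if entry["ok"] else "FAILED")
```

The returned report is the raw material for step 5: per-stage pass/fail status plus timing statistics that can be saved for later retrieval.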
If you already have automated tests prepared, make the best use of them by running them as part of your continuous integration. For more detail about virtualization and automated testing, see our article, Efficiencies of Virtualization in Test and Evaluation, published in the 25th anniversary issue of Crosstalk. For help with your automated testing effort, contact IDT.