
Implementing an Automation Culture: Prevention over Detection

By Elfriede Dustin

Many information technology companies have realized that their software research and development (R&D) efforts also need to include software testing. Companies on the leading edge of software development, such as Facebook, Google, or Innovative Defense Technologies (IDT), the company I work for, recognize the need for integrated automated testing approaches.

Facebook is “safely updated with hundreds of changes including bug fixes, new features, and product improvements. Given hundreds of engineers, thousands of changes every week and hundreds of millions of users worldwide…”[i] Facebook relies on an automated testing program that includes unit and Watir (GUI) testing as part of its release efforts. In another example, Google “uses a product team that produces internal and open source productivity tools that are consumed by all walks of engineers across the company. They build and maintain code analyzers, IDEs, test case management systems, automated testing tools, build systems, source control systems, code review schedulers, bug databases….The idea is to make the tools that make engineers more productive. Tools are a very large part of the strategic goal of prevention over detection.”[ii]

Companies that develop automated solutions enable software engineers to be more productive and allow higher-quality products to be released in a shorter timeframe. At IDT we have built an automation culture around Automated Test and ReTest (ATRT), our automated software testing solution platform. Automated testing processes and products make it easier for engineers to embrace that culture and benefit from the prevention-over-detection approach to development.

Our experience has shown that quality software can’t be released without an effective automated testing program in place. At IDT we are immersed in automation and use a continuous integration approach to software development and testing.

The development environment that makes this continuous integration possible is ideally a virtualized environment combined with regular workstations and laptops networked together.

The steps of this typical continuous integration example are:

  1. Updated code is checked into a virtualized version control repository using a source control tool such as Subversion.  From here, other developers can check out either the updated code from the trunk or code from specific branches to support different builds (see the first sketch after this list).
  2. Upon code check-in, the Hudson or Jenkins Continuous Build virtual server is triggered to start a complete build/check/test/report cycle.  Hudson will perform the following tasks:
    1. Update the latest code from SVN
    2. Compile the code and check for compile errors
    3. Link the code and check for any link errors
    4. Perform source code style checks and copyright checks
    5. Start a series of automated regression tests (unit tests):
      1. Unit regression tests check several key use cases to ensure that the updated code has not adversely affected existing behavior
      2. Unit regression tests compare the new results with known-good results and report any differences (see the unit regression sketch after this list)
  3. Automated smoke tests and automated functional tests: Automated functional regression testing uses tools such as ATRT: Test Manager on another virtualized node to perform tests as an end user would (i.e., through the GUI). The run starts with a smoke test; if it passes, the full automated regression test is run (see the gating sketch after this list). Each test analyzes hundreds of system-level requirements against all of the tactical data, and each requirement may itself be verified hundreds to thousands of times. The ability to verify and re-verify the same requirement multiple times using various data sets gives greater fidelity in the final results.
    1. Note: As part of the test automation effort, many tasks can be automated:
      1. Test environment setup and tear-down
      2. Test data generation, as applicable
      3. Results reporting, which includes defect reporting (most fields can be populated, but manual analysis is required to confirm the originality and applicability of a defect)
      4. Requirements traceability and maintenance
  4. The testing results are then reported back to the Hudson server. Upon successful completion of internal and external regression testing, the Hudson server goes on to build an installer package that will be available to end users at fielded locations. Additionally, key statistics are gathered on the entire process and saved for later retrieval.
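
The version control step (step 1) is ordinary Subversion usage. A minimal sketch in Python, assuming the standard svn command-line client and a hypothetical repository URL, might look like the following; it is illustrative only and not IDT's actual tooling:

    import subprocess

    REPO = "https://svn.example.com/repo"  # hypothetical repository URL

    def svn(*args):
        """Run an svn command-line operation and fail loudly on errors."""
        subprocess.run(["svn", *args], check=True)

    # A developer commits updated code; the check-in triggers the Hudson/Jenkins cycle.
    svn("commit", "-m", "Fix correlation defect", "src/")

    # Other developers check out the latest trunk, or a branch for a specific build.
    svn("checkout", REPO + "/trunk", "workspace-trunk")
    svn("checkout", REPO + "/branches/release-2.1", "workspace-release-2.1")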
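
The unit regression comparison in step 2 can be expressed as an ordinary unit test that replays a key use case and diffs the output against a stored known-good result. The sketch below uses Python's unittest module; the tracker module and data files are hypothetical placeholders, not IDT's framework:

    import json
    import unittest

    from tracker import track_targets  # hypothetical module under test

    class TrackingRegressionTest(unittest.TestCase):
        def test_key_use_case_matches_known_good_result(self):
            with open("testdata/scenario_01_input.json") as f:
                scenario = json.load(f)
            with open("testdata/scenario_01_expected.json") as f:
                expected = json.load(f)

            actual = track_targets(scenario)

            # Any difference from the known-good result fails the build and
            # flags that the latest change affected an existing use case.
            self.assertEqual(expected, actual)

    if __name__ == "__main__":
        unittest.main()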
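
The gating logic in step 3 (run the smoke test first and continue to the full regression only if it passes) is simple to script. The following sketch shows only that control flow; run_suite is a hypothetical hook and does not represent ATRT: Test Manager's actual API:

    import sys

    def run_suite(name):
        """Hypothetical hook that launches a GUI-driven test suite and returns pass/fail."""
        raise NotImplementedError("wire this to the GUI test tool in use")

    def nightly_regression():
        # Run the quick smoke test first; only a passing smoke test justifies
        # the hours a full requirements-level regression run can take.
        if not run_suite("smoke"):
            print("Smoke test failed; skipping the full regression run")
            return 1
        # Full regression: each system-level requirement may be verified
        # hundreds of times against different tactical data sets.
        return 0 if run_suite("full_regression") else 1

    if __name__ == "__main__":
        sys.exit(nightly_regression())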

Finally, Hudson or Jenkins will provide the developer with reports on the entire sequence of testing. The developer can then use the results to make appropriate code changes.
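
One common way to feed results back to Hudson or Jenkins is to write them in the JUnit XML format that both servers can publish and trend. The sketch below is a minimal example of producing such a file; the result entries are illustrative, not real test output:

    import xml.etree.ElementTree as ET

    # Illustrative (name, passed, failure message) results from a test run.
    results = [
        ("smoke.login_screen", True, ""),
        ("regression.track_update", False, "expected 42 tracks, got 41"),
    ]

    suite = ET.Element(
        "testsuite",
        name="automated-regression",
        tests=str(len(results)),
        failures=str(sum(1 for _, passed, _ in results if not passed)),
    )
    for name, passed, message in results:
        case = ET.SubElement(suite, "testcase", classname="regression", name=name)
        if not passed:
            ET.SubElement(case, "failure", message=message)

    # Hudson/Jenkins picks up this file after the build and renders the test trend.
    ET.ElementTree(suite).write("results.xml", encoding="utf-8", xml_declaration=True)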

Return on Investment and Reduction of Total Ownership Cost

Implementing an automation culture that includes automated testing tools and solutions in a continuous integration (CI) environment reduces the time and effort required to complete test execution and data analysis. Automated solutions have also demonstrated the ability to increase the thoroughness of system testing. More thorough software testing translates to fewer defects found in the field and a lower total ownership cost.

An automation culture also enables much earlier identification of the integration and interoperability characteristics of tactical software products that must interact to complete complex mission threads. Identifying software-specific integration characteristics in tactical products in step with software development cycles enables quicker recognition of issues, while the bugs are still much cheaper to fix.

Many tasks make up the software development life cycle, and implementing an automation culture throughout will make your team more productive and allow more time to focus on what matters: producing high-quality new features.

Elfriede Dustin is a Technical Director at Innovative Defense Technologies (IDT) and one of the primary engineers involved in the development of ATRT: Test Manager.  She wrote this article in February 2014 and it was initially published in a similar format by the QAI Global Institute for their QUEST 2014 Quality Engineered Software and Testing Conference. 


[i] (2011). Push: Tech Talk. Facebook. Retrieved from https://www.facebook.com/video/video.php?v=10100259101684977&oid=9445547199&comments

[ii] Whittaker, J. (2011). “How Google Tests Software – Part One.” Google Testing Blog. Retrieved from http://googletesting.blogspot.com/2011/01/how-google-tests-software.html