8 Guiding principles for creating good automated integration tests

The nature of the work that my company does (building real-time petroleum drilling data acquisition software) has led us to a situation where one of our teams focuses almost exclusively on delivering non-visual software (mostly services exposed over various and sundry technical mechanisms).

For a while, this team really struggled with how to adequately involve our quality assurance engineers in the verification of deliverables. The problem is less about the availability of tools and more about a gap in skill set. Our QA engineers are not developers, and they were generally uncomfortable consuming the services using the very technically focused tools that are out there.

In addition, many of the test cases as designed required significant sets of preconditions to be met before they could be executed. Our systems are extremely distributed, and meeting those preconditions almost always involved starting other applications, hardware simulators, etc., and configuring them to be in a certain state.

At some point, the team decided the most feasible way to go about testing was to write integration tests using the MSTest unit testing framework, expecting preconditions to be set up manually before kicking off test execution. Sometimes the test developers coded significant waits into the middle of the tests so that a GUI-based app could be used, for one reason or another, to change state during test execution.

As you can imagine, this grew into a practice that made our automated integration tests not so automated, and thus not so useful. We’ve renewed our focus on automation, and I thought I might share the guiding principles we’re using in our group to help anyone else who might be struggling with this problem.

Note: what you read below is biased towards Scrum, .NET, and MSTest.

  1. Align integration tests with acceptance criteria and reproducible steps (for bugs)
    1. There must be at least one integration test that exercises each expectation in the related user story’s acceptance criteria. Test every condition represented and, to the degree possible, exercise scenarios that are implied or indirectly represented by the acceptance criteria. Human language is naturally imprecise; honor the spirit of the acceptance criteria.
    2. Before fixing the root cause of a bug, codify the reproducible steps in the form of an integration test (a sketch follows this list). This ensures that we will catch regressions not only in scope related to user stories but also in scope related to bugs.
  2. Rely on machine verifiable test results to determine success or failure, not on human interpretation.
    1. Use the testing framework’s Assert calls, ExpectedException attributes, etc., to verify that the state after ‘acting’ in each test meets expectations (an example follows the list). Do not rely on visual inspection of test output for verification. A test should be reported as failing by the test execution framework if expectations are not met.
  3. Make tests explicit
    1. Test one thing at a time. Avoid monolithic tests that verify more than one idea.
  4. Give tests meaningful names and appropriate metadata.
    1. Use the following format for test names: [Action]_[Conditions]_[ExpectedResult]
    2. Example: SettingFuse_ThenSendingDetonateCommand_BlowsStuffUp
    3. Prevent the test from running on the build server by assigning an appropriate TestCategory attribute (see the category sketch after the list)
  5. Automate human interaction
    1. Use available automation libraries and resources to automate functions that humans or external actors might normally perform.
    2. Frameworks and tools that automate an application’s UI fit in this category (a UI Automation sketch follows the list).
  6. Isolate test state / Control Variability
    1. Do not allow the state of one test to influence the results of another test. Reliable tests need a deterministic starting point that minimizes the variables involved in the test. MSTest tools that help ensure this include ClassInitialize, TestInitialize, TestCleanup, ClassCleanup, etc. (see the isolation sketch after the list).
    2. Avoid using static class members to hold state for a test.
    3. Use the Arrange, Act, Assert, Tear Down pattern when creating tests. If the setup or tear down is specific to a single test, it may need to be done within the body of the test itself.
  7. Keep tests DRY
    1. Factor out repeated code into test utility classes that can be reused (see the DRY sketch after the list).
    2. Be aware of the scope of reuse, and put the code in the appropriate place.
    3. Ex: if the code is reusable across test classes, include it in an external utility class.
    4. Ex: if the code is reusable only within one set of tests, include it in a function within the test class.
  8. Maintenance is just as important as creation
    1. When tests are reported or discovered to be broken, resolve them immediately by fixing the failing test, NOT by commenting it out.
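
The sketches below illustrate several of these principles using MSTest. First, principle 1.2: codifying a bug’s reproducible steps before fixing the root cause. The bug number, DepthService, and its members are hypothetical names invented for illustration:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DepthServiceRegressionTests
{
    // Hypothetical regression test for bug 1234: depth readings went stale
    // after a sensor reconnect. Written before the fix, it fails and proves
    // the bug; after the fix, it guards against the regression returning.
    [TestMethod]
    [TestCategory("Integration")]
    public void ReconnectingSensor_ThenRequestingDepth_ReturnsCurrentReading()
    {
        // Arrange: reproduce the preconditions from the bug report.
        var service = new DepthService();      // hypothetical service under test
        service.Connect("SIMULATOR-01");       // hypothetical simulator endpoint
        service.Disconnect();
        service.Connect("SIMULATOR-01");

        // Act
        var reading = service.GetCurrentDepth();

        // Assert: machine-verifiable, per principle 2.
        Assert.IsTrue(reading.IsCurrent, "Depth reading was stale after reconnect.");
    }
}
```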
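
For principle 2, the Assert call and the ExpectedException attribute are what make the verdict machine-verifiable. This sketch reuses the detonator naming example from principle 4; the Detonator class is a minimal stand-in so the example compiles:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Minimal stand-in for the system under test; in real life this
// lives in the product code, not the test project.
public class Detonator
{
    private TimeSpan? _fuse;
    public bool StuffWasBlownUp { get; private set; }
    public void SetFuse(TimeSpan delay) { _fuse = delay; }
    public void Detonate()
    {
        if (_fuse == null) throw new InvalidOperationException("No fuse set.");
        StuffWasBlownUp = true;
    }
}

[TestClass]
public class DetonatorTests
{
    [TestMethod]
    public void SettingFuse_ThenSendingDetonateCommand_BlowsStuffUp()
    {
        var detonator = new Detonator();
        detonator.SetFuse(TimeSpan.FromSeconds(5));

        detonator.Detonate();

        // The framework marks the test failed if this is false; nobody
        // has to read console output to decide pass or fail.
        Assert.IsTrue(detonator.StuffWasBlownUp);
    }

    // ExpectedException makes a failure scenario machine-verifiable too:
    // the test passes only if Detonate() throws the stated exception.
    [TestMethod]
    [ExpectedException(typeof(InvalidOperationException))]
    public void SendingDetonateCommand_WithoutSettingFuse_ThrowsInvalidOperation()
    {
        new Detonator().Detonate();
    }
}
```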
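
For principle 4.3, a sketch of the category metadata that keeps slow integration tests out of the build server’s default run. “Integration” is our own category name, not anything MSTest mandates:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AcquisitionAlarmTests
{
    // Name follows [Action]_[Conditions]_[ExpectedResult].
    [TestMethod]
    [TestCategory("Integration")]   // metadata the build server filters on
    public void StartingAcquisition_WhileSimulatorOffline_RaisesConnectionAlarm()
    {
        // Body elided; the point here is the name format and the metadata.
    }
}
```

The build server can then exclude the category; with vstest.console, for example, /TestCaseFilter:"TestCategory!=Integration" does the filtering (older MSTest runners offer test lists and a /category switch for the same purpose).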
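
For principle 5, one option on .NET is Microsoft UI Automation (System.Windows.Automation); plenty of other UI automation frameworks fill the same role. This sketch assumes a hypothetical configuration app named “Rig Configurator” with an “Apply” button, standing in for the human who used to change state mid-test:

```csharp
using System.Windows.Automation;   // requires UIAutomationClient and UIAutomationTypes references

public static class RigConfiguratorAutomation
{
    // Finds the hypothetical app's main window and clicks its Apply button.
    public static void ClickApply()
    {
        AutomationElement window = AutomationElement.RootElement.FindFirst(
            TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "Rig Configurator"));

        AutomationElement applyButton = window.FindFirst(
            TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "Apply"));

        var invoke = (InvokePattern)applyButton.GetCurrentPattern(InvokePattern.Pattern);
        invoke.Invoke();
    }
}
```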
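
For principle 6, TestInitialize and TestCleanup give every test a deterministic starting point and tear-down, with no static members carrying state between tests. AcquisitionSession and its members are hypothetical:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AcquisitionSessionTests
{
    private AcquisitionSession _session;   // instance field, not static

    // Runs before every test: each test starts from the same known state.
    [TestInitialize]
    public void TestInitialize()
    {
        _session = new AcquisitionSession();   // hypothetical system under test
        _session.ResetToKnownState();          // hypothetical helper
    }

    // Runs after every test, pass or fail, so no state leaks to the next one.
    [TestCleanup]
    public void TestCleanup()
    {
        _session.Dispose();
    }

    [TestMethod]
    [TestCategory("Integration")]
    public void StartingSession_WithValidConfig_ReportsRunning()
    {
        // Arrange was handled by TestInitialize.
        // Act
        _session.Start();
        // Assert
        Assert.IsTrue(_session.IsRunning);
        // Tear Down is handled by TestCleanup.
    }
}
```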
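
Finally, for principle 7, the two scopes of reuse. SimulatorTestHelper, HardwareSimulator, and the helper methods are hypothetical names for illustration:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Reusable across test classes: lives in an external utility class.
public static class SimulatorTestHelper
{
    public static HardwareSimulator StartConfiguredSimulator(string profile)
    {
        var simulator = new HardwareSimulator();   // hypothetical simulator wrapper
        simulator.LoadProfile(profile);
        simulator.Start();
        return simulator;
    }
}

// Reusable only within one set of tests: a private method on the test class.
[TestClass]
public class TelemetryTests
{
    private HardwareSimulator StartDefaultSimulator()
    {
        return SimulatorTestHelper.StartConfiguredSimulator("default-rig");
    }
}
```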

