The nature of the work that my company does (building real-time petroleum drilling data acquisition software) has led us to a situation where one of our teams focuses almost exclusively on delivering non-visual software (mostly services exposed over various and sundry technical mechanisms).
For a while, this team really struggled with the issue of how to adequately involve our quality assurance engineers in the verification of deliverables. The issue is really less about the availability of tools and more about a gap in skillset. Our QA engineers are not developers and were generally uncomfortable consuming the services using the very technically focused tools that are out there.
In addition to that, many of the test cases as designed required significant sets of preconditions to be met before the test cases could be executed. Our systems are extremely distributed, and meeting those preconditions almost always involved starting other applications, hardware simulators, etc., and configuring them to be in a certain state.
At some point, the team decided the most feasible way to go about testing was to write integration tests using the MSTest unit testing framework, expecting preconditions to be manually set up prior to kicking off the execution of the test. Sometimes the test developers even coded long waits into the middle of tests so that a GUI-based application could be used, for one reason or another, to change the system's state during test execution.
As you can imagine, this grew into a practice that was making our automated integration tests not so automated and thus not so useful. We’ve renewed our focus on automation and I thought I might share the guiding principles we’re using in our group to help anyone else who might be struggling with this problem.
Note: what you read below is biased towards using Scrum, .Net, and MSTest.
- Align integration tests with acceptance criteria and reproducible steps (for bugs)
- There must be at least one integration test that exercises each expectation in the related user story acceptance criteria. Test all conditions represented and, to the degree possible, exercise test scenarios which are implied or indirectly represented by acceptance criteria. Human language is naturally imprecise; honor the spirit of the acceptance criteria.
- Before fixing the root cause for a bug, codify the reproducible steps in the form of an integration test. This ensures that we will be aware of regressions not only for scope related to user stories but also scope related to bugs.
- Rely on machine verifiable test results to determine success or failure, not on human interpretation.
- Use the testing framework's Assert calls, ExpectedException attributes, etc., to verify that state after 'acting' in each test meets expectations. Do not rely on visual inspection of test output for verification. Tests should be reported as failing in the test execution framework if expectations are not met.
- Make tests explicit
- Test for one thing at a time. Avoid writing tests that are monolithic and test more than one idea at a time.
- Give tests meaningful names and appropriate metadata.
- Use the following format for test names: [Action]_[Conditions]_[Expected Result]:
- Example: SettingFuse_ThenSendingDetonateCommand_BlowsStuffUp
- Prevent the test from running on the build server by using an appropriately assigned TestCategory attribute
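Putting the points above together, a minimal MSTest sketch might look like the following. The `Detonator` type is hypothetical, invented purely to match the naming example; the `TestCategory` value and the asserted property are likewise illustrative assumptions, not part of any real API.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class DetonatorIntegrationTests
{
    // Name follows the [Action]_[Conditions]_[Expected Result] convention.
    [TestMethod]
    [TestCategory("Integration")] // metadata used to exclude the test from build-server runs
    public void SettingFuse_ThenSendingDetonateCommand_BlowsStuffUp()
    {
        // Arrange (hypothetical Detonator type, for illustration only)
        var detonator = new Detonator();
        detonator.SetFuse(TimeSpan.FromSeconds(5));

        // Act
        var result = detonator.Detonate();

        // Assert: machine-verifiable pass/fail, no human inspection of output required
        Assert.IsTrue(result.Exploded, "Detonate command should have triggered an explosion.");
    }
}
```

The build server can then filter on the `TestCategory` metadata (for example, via a test case filter in the test runner) so this test only runs in environments where its preconditions can be met.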
- Automate human interaction
- Use available automation libraries and resources to automate functions that humans or external actors might normally perform.
- Frameworks and tools that automate an application UI fit in this category.
- Isolate test state / Control Variability
- Do not allow the state of one test to influence the results of another test. Reliable tests need to have a deterministic starting point that minimizes variables involved in the test. Tools to ensure that you are doing this include: ClassInitialize, TestInitialize, TestCleanup, ClassCleanup, etc.
- Avoid using static class members to hold state for a test.
- Use the Arrange, Act, Assert, Tear Down pattern when creating tests. If the setup or tear down is specific to a single test, it may need to be done within the body of that test itself.
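A sketch of how the initialization/cleanup attributes keep test state isolated, assuming a hypothetical `WellDataService` under test (the type, its members, and the `AcquisitionStatus` enum are invented for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WellDataServiceTests
{
    // Per-instance field, not static: no state shared between tests.
    private WellDataService _service; // hypothetical service under test

    [TestInitialize]
    public void TestInitialize()
    {
        // Runs before EVERY test: establishes a deterministic starting point.
        _service = new WellDataService();
        _service.ResetToKnownState();
    }

    [TestCleanup]
    public void TestCleanup()
    {
        // Runs after EVERY test: leaves no residue to influence the next test.
        _service.Dispose();
    }

    [TestMethod]
    public void StartingAcquisition_WithValidConfiguration_ReportsRunningStatus()
    {
        // Arrange
        _service.Configure(samplingRateHz: 10);

        // Act
        _service.StartAcquisition();

        // Assert
        Assert.AreEqual(AcquisitionStatus.Running, _service.Status);
    }
}
```

`ClassInitialize` and `ClassCleanup` work the same way at the class level (once per test class rather than once per test) and suit expensive shared fixtures, but anything they set up must be safe to share across all tests in the class.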
- Keep tests DRY
- Factor out repeated code into test utility classes that can be reused.
- Be aware of the scope of reuse, and put the code in the appropriate place.
- Ex: If the code is reusable across test classes, include it in an external utility class
- Ex: If the code is reusable only within one set of tests, include it in a function within the test class.
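For example, shared precondition setup might be factored into an external utility class along these lines. The `RigSimulator` and `AcquisitionClient` types and their members are hypothetical, standing in for whatever simulators and clients a real test suite would drive:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Reusable across multiple test classes, so it lives in its own utility class.
public static class SimulatorTestUtility
{
    // Hypothetical helper that brings a hardware simulator to a known state.
    public static RigSimulator StartConfiguredSimulator(int wellCount)
    {
        var simulator = new RigSimulator();
        simulator.Start();
        simulator.ConfigureWells(wellCount);
        return simulator;
    }
}

[TestClass]
public class AcquisitionIntegrationTests
{
    [TestMethod]
    public void ConnectingToSimulator_WithTwoWells_ReportsTwoChannels()
    {
        // Arrange: shared setup comes from the utility, not copy-pasted code.
        var simulator = SimulatorTestUtility.StartConfiguredSimulator(wellCount: 2);

        // Act
        var session = new AcquisitionClient().Connect(simulator.Endpoint);

        // Assert
        Assert.AreEqual(2, session.ChannelCount);
    }
}
```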
- Maintenance is just as important as creation
- When a test is reported or discovered to be broken, resolve it immediately by fixing the failed test, NOT by commenting it out.