From Testing Microservices with Mountebank by Brandon Byars
This article includes a basic refresher on continuous delivery, test strategy for continuous delivery and microservices, and where service virtualization applies within a broader testing strategy.
A sysadmin, a DBA, and a developer walk into a bar. The sysadmin orders a light lager to maximize uptime, the DBA orders a 30-year aged single malt to avoid undue adulteration, and the developer orders a Pan Galactic Gargle Blaster (in The Hitchhiker’s Guide to the Galaxy, Douglas Adams describes the Pan Galactic Gargle Blaster as the alcoholic equivalent of a mugging — expensive and bad for the head) because it hasn’t been invented yet. An hour later, the DBA has gone home already, the developer has moved on to a more modern bar, and the slightly wobbly and heavily overutilized sysadmin is holding down the fort, while also holding a lager in one hand, a single malt in the second, and a Pan Galactic in a third.
Traditional siloed organizational structures force a complicated dance to get anything done. It’s no surprise that, in large enterprises, IT and the business rarely have a healthy relationship. Historically, the common approach to improving the situation was to add more process discipline, which further complicated the dance, making it harder to release code into production (and by consequence, value to customers). Having increasingly well-defined handoffs between a developer, a DBA, and a sysadmin exemplifies process discipline. Every time you fill out a database schema change request form or an operational handoff document, you’ve seen process discipline in action.
Continuous delivery changes the equation by emphasizing engineering discipline over process discipline. It’s about automating the steps required to build the confidence which allows the business to release new code on demand. Although the engineering discipline encompasses a wide spectrum of practices, testing plays a central role. In this article, we’ll look at a sample test strategy for a microservices world, and show where service virtualization does and doesn’t fit.
A continuous delivery refresher
Jez Humble and Dave Farley wrote Continuous Delivery to capture the key practices they saw enabling the rapid delivery of software. The traditional process discipline of centralized release management and toll gates increases congestion and slows delivery. The emphasis is on safety, providing additional checks to increase confidence that the software being delivered will work.
In contrast, continuous delivery (CD) focuses on automation, emphasizing safety, speed, and sustainability of delivering software. It requires the code to be in a deployable state at all times, forcing us to abandon the ideas of “dev complete,” “feature complete,” and “hardening iterations.” Those concepts are hangovers from the world of yesteryear, in which we papered over a lack of engineering discipline by adding additional layers of process.
A GLOSSARY OF TERMS SURROUNDING CONTINUOUS DELIVERY
Here are several terms that are important for this article:
- Continuous integration: Although continuous integration (CI) is often confused with running an automated build on every commit through a tool like Jenkins, it’s actually the practice of ensuring that your code is merged and works with everyone else’s on a continual basis (at least once a day).
- Continuous delivery: The set of software development practices that ensures code is always releasable. The full spectrum of CD practices ranges from developer-facing techniques like feature toggles, which provide a way of hiding code that’s still a work in progress, to production-facing approaches like monitoring and canary testing, which scales a release up to the customer base over time. In between comes testing.
- Deployment pipeline: The path code takes from the time it’s committed to the time it reaches production.
- Continuous deployment: An advanced type of continuous delivery that removes all manual interventions from the deployment pipeline.
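The feature-toggle idea mentioned above can be sketched in a few lines. This is a minimal, illustrative example (the toggle name, environments, and helper are hypothetical, not from the book): work-in-progress code ships dark behind a flag, so the same commit stays releasable everywhere.

```javascript
// Minimal feature-toggle sketch: per-environment flags hide unfinished code.
const toggles = {
  newCheckoutFlow: { production: false, staging: true }
};

function isEnabled(toggleName, environment) {
  const toggle = toggles[toggleName];
  return Boolean(toggle && toggle[environment]);
}

// The same commit can go to production with the new path switched off.
console.log(isEnabled('newCheckoutFlow', 'staging'));    // true
console.log(isEnabled('newCheckoutFlow', 'production')); // false
```

Real toggle systems typically load flags from configuration or a service rather than hard-coding them, so a release can be flipped without a redeploy.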
In CD, every commit of the code either fails the build or can be released to production. You don’t need to decide up-front which commit represents the “release version.” Although that approach is common, it encourages sloppy engineering practices: it enables us to commit code that can’t be released to production, with the expectation that we’ll fix it later. That attitude requires IT to own the timing of software delivery, taking control out of the hands of the business and the product manager.
The core organizing concept that makes CD possible is the deployment pipeline. It represents the value stream of code’s journey from commit to production, and it’s often directly represented in continuous integration (CI) tools.
The path which code takes on its way to providing value to users varies from organization to organization, and even between teams within the same organization. Much of it’s defined by how you decide to test your application.
Test strategy for continuous delivery with microservices
Testing in a very large-scale distributed setting is a major challenge.
— Werner Vogels, Amazon CTO
A common approach to visualizing test strategy comes in the form of a pyramid. The visual works because it acknowledges the fact that confidence comes from testing at multiple layers, and that there’s value in pushing as much of the testing into the lower levels as possible because they’re both easier to maintain and faster to run. As we move to higher levels, the tests become harder to write, to maintain, and to troubleshoot when they break. They’re also more comprehensive and often better at catching difficult bugs. Each team needs to customize a test pyramid to their needs, but we can think of a strawman for microservices that looks like this (for more info, here’s Toby Clemson’s description of the types of testing for microservices at martinfowler.com/articles/microservice-testing/).
People have argued endlessly over what makes a unit test different from higher level tests, but for the purposes of this diagram, the key difference is that you should be able to run a unit test without deploying your service into a runtime. That makes them in-process and independent of anything from the environment. Though there’s some different terminology out there, I’ve used the term service test to describe black-box tests that validate your service’s behavior over the wire. These require a deployment, but we use service virtualization to maintain isolation from our runtime dependencies. This layer allows you to do out-of-process, black box testing, while maintaining determinism. Service virtualization enables us to remove non-determinism from our tests by allowing each test to control the environment it runs in.
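As a concrete sketch of what that isolation looks like, a service test might stand up a mountebank imposter in place of a downstream inventory service by POSTing a configuration like this to mountebank’s admin port (http://localhost:2525/imposters by default). The port number and payload here are illustrative:

```json
{
  "port": 4545,
  "protocol": "http",
  "stubs": [
    {
      "predicates": [
        { "equals": { "method": "GET", "path": "/inventory/123" } }
      ],
      "responses": [
        {
          "is": {
            "statusCode": 200,
            "headers": { "Content-Type": "application/json" },
            "body": { "productId": "123", "inStock": true }
          }
        }
      ]
    }
  ]
}
```

The service under test is then configured to call localhost:4545 instead of the real dependency, which gives the test full control over every response it sees.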
You should be able to test the bulk of the behavior of your service through a combination of unit tests and service tests. They let you know that your service behaves correctly assuming certain responses from its dependencies, but they don’t guarantee those stubbed responses are appropriate. Contract tests give us validation that there haven’t been breaking contract-level changes. Whereas service tests say, in effect, that if it gets these responses from its dependencies, then the service behaves correctly; contract tests validate that it gets those responses. Good contract tests avoid deep behavioral testing of the dependencies (they should be tested independently), but give you confidence in your stubs.
I’ve shown exploratory testing as part of the test pyramid because most organizations find some value from manual testing. Good exploratory testers “follow their nose” to find gaps in our automated test suite. Such tests can be integrated or rely on service virtualization to test out certain edge cases.
Some types of testing don’t fit as well in the test pyramid metaphor. Cross-functional requirements like security, performance, and failover for availability often require specialized testing, and are less about the behavior of the system than they are about its resiliency. Performance testing is an area where service virtualization shines, as it allows you to replicate the performance of your dependencies without requiring a fully integrated, production-like environment to run in.
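mountebank supports this kind of performance simulation through its wait behavior, which delays a stubbed response by a fixed number of milliseconds. A sketch, with an illustrative port and delay:

```json
{
  "port": 4546,
  "protocol": "http",
  "stubs": [
    {
      "responses": [
        {
          "is": { "statusCode": 200, "body": "slow downstream response" },
          "_behaviors": { "wait": 3000 }
        }
      ]
    }
  ]
}
```

Pointing your service at an imposter like this lets you observe how it copes with a slow dependency (timeouts, thread-pool exhaustion, circuit breakers) without needing the real dependency to actually misbehave.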
Finally, we should never forget that error prevention is only one piece of test strategy. The rapid release cycles of microservices encourage us to invest heavily in error detection and remediation as well as prevention, as they contribute to our overall confidence in releasing software. Companies which have used microservices effectively generally stage their releases, such that only a small percentage of users can see the new release at first. Robust monitoring detects whether those users experience any problems, and rolling back is as easy as switching them to the code everyone else is using. If no problems are detected, the release system switches more and more users to the new code over time until 100% of users are using the release, at which point the previous release can be removed (this is called canary testing; read more about it at martinfowler.com/bliki/CanaryRelease.html). Advanced monitoring allows you to detect errors before your users do. Your test strategy is a key component of continuous delivery, and engineering discipline significantly increases the scope of automation.
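The canary mechanics can be sketched in a few lines. This is illustrative only (real systems do the bucketing at the load balancer or router, and the hash here is a toy): users are deterministically bucketed by id so the same user always sees the same release, and the percentage ramps up over time.

```javascript
// Canary-routing sketch: hash each user id into a bucket 0-99, and send
// users below the current canary percentage to the new release.
function hashToBucket(userId) {
  let hash = 0;
  for (const ch of String(userId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 100;
  }
  return hash; // deterministic value in 0-99
}

function routeToCanary(userId, canaryPercent) {
  return hashToBucket(userId) < canaryPercent;
}

// Start at 5%, and if monitoring stays clean, ramp toward 100%.
console.log(routeToCanary('user-42', 5));
console.log(routeToCanary('user-42', 100)); // true: everyone gets the release
```

Determinism matters here: because the bucket depends only on the user id, a given user doesn’t flip between old and new releases on successive requests, and rolling back is just dropping the percentage to zero.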
Mapping your test strategy to a deployment pipeline
Whatever your particular test pyramid looks like, there’s generally a pretty straightforward mapping of it to your deployment pipeline.
I like to think of boundary conditions moving from one stage to the next. In Figure 5, I’ve shown the following boundaries:
- Boundary of deployment, representing the first time we’ve deployed the application (or service). All tests to the left are run in-process; all tests to the right are run out-of-process, and implicitly test the deployment process itself as well as the application.
- Boundary of determinism, representing the first time we’ve integrated our application into other applications. Tests beyond this boundary may fail due to environmental conditions. Tests before this boundary should only fail for reasons entirely within the application team’s control.
- Boundary of automation, representing where we switch to manual, exploratory testing (note that the deployment is still automated, but the trigger to deploy requires a human pressing a button). Some companies, for some products, have managed to eliminate this boundary altogether, automatically releasing code to production without any manual verifications. This is an advanced form of continuous delivery called continuous deployment, and it’s clearly inappropriate in some environments. The software that helps keep an airplane in the air requires a much higher degree of confidence than your favorite social media platform.
- Boundary of value, at which point real users have access to the new software.
That’s quite a bit to take in, but you should have a good idea of how testing with service virtualization fits into continuous delivery.
For more on using Mountebank to test your microservices, read the free first chapter of Testing Microservices with Mountebank and see this Slideshare Presentation.
About the author:
Brandon Byars is a principal consultant at ThoughtWorks with long-standing experience in SOA and microservices. He is the author and chief maintainer of Mountebank and has helped multiple companies use it for testing a variety of systems.
Originally published at freecontent.manning.com.