So, you’re writing microservices! You’re feeling pretty smug, because microservices are all the rage. All the cool kids are doing it. You’re breaking up your sprawling monoliths into small services that Do One Thing Well. You’re even using consumer driven contract testing to ensure that all your services are compatible.
Then… you discover that your consumer’s requirements have changed, and you need to make a change to your provider. You coordinate the consumer and provider codebases so that the contract tests still pass, and then, because you’re Deploying Early and Often, you release the provider. Immediately, your monitoring system goes red – the production consumer still expects the provider’s old interface. You realise that when you make a change, you need to deploy your consumer and your provider together. You think to yourself, screw this microservices thing, if I have to deploy them together anyway, they may as well go in the one codebase.
This is one of the challenges you will face when writing microservices. The aim is to decouple your codebases from one another, but occasionally and unavoidably, things will need to change in a breaking way. What can you do then? You could try versioning your API, but anyone who has tried maintaining more than one version of an API at a time won’t be particularly enthusiastic about this option.
Another option is to try to make all your changes backwards compatible until the dependent systems have updated their code, but how can you be sure that the head version of your service is backwards compatible with the other services in its context without actually deploying to production? One option that we tried at realestate.com.au prior to using contract testing was to give each service its own integration test environment, called a “certification environment”, with a copy of all the production services in its context, and the latest version of the service under test. The integration tests were then run in each certification environment… It turns out that if you have n services in your ecosystem, you need n² instances to run these tests, and the amount of effort to maintain all those environments and all those slightly differing test suites, as well as a copy of their “old world” monolith dependencies… well, let’s just say it Did Not Scale.
But remember that we (and the hypothetical you in this story) were using consumer driven contracts? Those contracts are the key to entering the Matrix, where you will discover a whole new way of looking at the world…
When you use contracts, you generally test the head versions of the consumer and provider codebases against each other, so you know whether or not they are compatible. The production versions of the consumer and provider are probably also compatible, because you tested them against each other back when they were both at head. But if you want to be able to deploy your consumer and provider independently of each other there are two other things you need to know. Is the head version of your consumer compatible with the production version of your provider? And is the head version of your provider compatible with the production version of your consumer?
If you put those 4 things together, you get a matrix that looks like this.
| | Consumer Head | Consumer Prod |
|---|---|---|
| **Provider Head** | Contract tests | Unknown!!! |
| **Provider Prod** | Unknown!!! | Already tested |
Let’s get specific about the contract testing. Let’s assume you’re using Pact, a consumer driven contract testing library developed at realestate.com.au. When you’re doing contract testing with Pact, the process goes like this:
- The CI build in the consumer project runs tests against a mock provider, provided by the Pact library.
- The mock provider records the requests and the expected responses into a JSON pact file.
- The consumer CI build publishes the pact file to a known URL, or copies it into the provider codebase.
- The provider CI build retrieves the pact from that known location, and runs the “pact verification” task against the provider codebase.
- The verification task replays each request in the pact against the provider, and compares the actual responses with the expected responses.
- If they match, we know the two projects are compatible.
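The pact file that the mock provider writes out is plain JSON describing each expected interaction. A minimal illustrative example might look something like this (the consumer and provider names, path, and body are invented for this sketch, and real pact files carry additional metadata):

```json
{
  "consumer": { "name": "Zoo App" },
  "provider": { "name": "Animal Service" },
  "interactions": [
    {
      "description": "a request for an alligator",
      "request": {
        "method": "get",
        "path": "/alligators/Mary"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "name": "Mary", "type": "alligator" }
      }
    }
  ]
}
```

Because everything the verification task needs is captured in this file, the consumer’s tests and the provider’s verification can run in completely separate builds, at completely different times.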
Once a pact file has been generated by a consumer, all we need in order to determine whether version X of the consumer is compatible with version Y of the provider is the pact from version X of the consumer and the codebase from version Y of the provider. If, instead of verifying just the latest version of the pact against the provider, we also verify the production version of the pact, we will know that our provider is backwards compatible and can be deployed at any time. Conversely, if we check out the production version of the provider codebase and verify the latest pact against it, we will know that our consumer is backwards compatible and can be deployed at any time.
Let’s enter the Pact Matrix.
| | Consumer Head (pact) | Consumer Prod (pact) |
|---|---|---|
| **Provider Head (codebase)** | Contract tests | Contract tests (ensure provider is backwards compatible) |
| **Provider Prod (codebase)** | Contract tests (ensure consumer is backwards compatible) | Already tested |
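To make the matrix concrete, here is a toy sketch in Python. This is not the real Pact library or its API; the providers, pacts, and field names are all invented. It models the breaking change from the story at the start: the head versions rename a response field, so head and prod are mutually incompatible, and only the diagonal of the matrix passes.

```python
# Two versions of a hypothetical provider. Prod returns "firstname";
# head has renamed the field to "first_name" (the breaking change).
def provider_prod(request):
    if request == ("GET", "/users/1"):
        return {"status": 200, "body": {"firstname": "Mary"}}
    return {"status": 404, "body": {}}

def provider_head(request):
    if request == ("GET", "/users/1"):
        return {"status": 200, "body": {"first_name": "Mary"}}
    return {"status": 404, "body": {}}

# Pacts generated by two versions of a hypothetical consumer:
# lists of (request, expected response) pairs.
pact_consumer_prod = [
    (("GET", "/users/1"), {"status": 200, "body": {"firstname": "Mary"}}),
]
pact_consumer_head = [
    (("GET", "/users/1"), {"status": 200, "body": {"first_name": "Mary"}}),
]

def verify(pact, provider):
    """Replay each request in the pact against the provider and
    compare the actual responses with the expected ones."""
    return all(provider(request) == expected for request, expected in pact)

# The four cells of the Pact Matrix.
matrix = {
    ("consumer head", "provider head"): verify(pact_consumer_head, provider_head),
    ("consumer head", "provider prod"): verify(pact_consumer_head, provider_prod),
    ("consumer prod", "provider head"): verify(pact_consumer_prod, provider_head),
    ("consumer prod", "provider prod"): verify(pact_consumer_prod, provider_prod),
}
```

In this scenario the two off-diagonal cells fail, which is exactly the signal you needed back when the monitoring system went red: the change is not backwards compatible in either direction, so consumer and provider cannot be deployed independently until one side bridges the gap.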
By testing the Pact Matrix, you can be confident deploying any service at any time, because your standalone CI tests have told you whether or not you are backwards compatible – no “certification environment” needed. And when there are multiple services in a context, this approach scales linearly, not quadratically like the certification environment approach.
At realestate.com.au, we use a tool to help us share pacts between our consumer and provider projects that also gives us the ability to tag the production version of a pact for use in the Pact Matrix. But that is a topic for another post! Keep an eye out on the tech blog for my upcoming post on the Pact Broker.