Rabbits are small, they are cute, they are fluffy… and they are terrible fighters*.
Which is why evolution has favoured the skittish amongst them.
The rabbit which was overly cautious and ran away at the first hint of danger survived and went on to produce more rabbits; the rabbit which was more laid back and waited until it was sure it was in peril before trying to flee did not. Of course running away has a cost: if you spend all your time running at the slightest sound you’ll expend a lot of energy and have no time to nibble the grass and gain more**.
So what do rabbits have to do with Auto Scaling Groups (ASGs)?
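The rabbit’s trade-off maps neatly onto ASG scaling policies: scale out aggressively at the first hint of load, but scale in slowly so you don’t thrash. Here is a minimal sketch of that asymmetry as two simple-scaling policies in the shape boto3’s `put_scaling_policy` accepts; the group name and the specific numbers are hypothetical, tune them for your own workload.

```python
# Hypothetical policies illustrating the "skittish rabbit" trade-off:
# scale OUT fast on a hint of danger, scale IN slowly and reluctantly.
scale_out = {
    "AutoScalingGroupName": "web-asg",       # hypothetical ASG name
    "PolicyName": "scale-out-fast",
    "AdjustmentType": "ChangeInCapacity",
    "ScalingAdjustment": 2,    # add two instances at the first sign of load
    "Cooldown": 60,            # and be ready to react again quickly
}

scale_in = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "scale-in-slow",
    "AdjustmentType": "ChangeInCapacity",
    "ScalingAdjustment": -1,   # shed capacity one instance at a time
    "Cooldown": 600,           # constant fleeing wastes energy too
}
```

Each dict would be splatted into `client.put_scaling_policy(**scale_out)` against a real AutoScaling client; the point is simply that the out and in policies are deliberately asymmetric.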
Let’s face it, writing software is hard. And frankly we humans suck at it. We need all the help we can get. Our industry has developed many tools and techniques over the years to provide “safety rails”, from the invention of the macro assembler through to sophisticated integration and automated testing frameworks. But somewhere along the way the idea of static analysis went out of favour.
I’m here to convince you that static analysis tools still have a place in modern software engineering.
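To make that concrete, here is a tiny static check written against Python’s standard `ast` module. It inspects source code without running it and flags a classic bug, a mutable default argument, purely from the syntax tree. The sample function is invented for illustration.

```python
import ast

SOURCE = '''
def add_item(item, bucket=[]):   # classic bug: shared mutable default
    bucket.append(item)
    return bucket
'''

def mutable_default_lines(source):
    """Return line numbers of defs whose defaults are list/dict/set literals."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults + node.args.kw_defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(node.lineno)
    return findings

print(mutable_default_lines(SOURCE))  # flags the definition on line 2
```

Twenty lines of analysis catch a defect no amount of happy-path testing would surface, which is the whole pitch: static analysis finds whole classes of bugs before the code ever runs.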
As previously discussed, we’re pretty keen on microservices at REA. Our delivery teams are organised into small, autonomous “squads” that get to choose pretty much any language and technology stack they wish for implementing their solutions.
This inevitably leads to a fairly broad church of language use. 🙂
Previously at REA we’d had highly specialised tools for Load and Performance (L&P) testing that were quite expensive and richly featured, but completely disconnected from our everyday development tools. The main outcome was that a couple of engineers became quite good at L&P testing with the enterprise tools, while the majority found the barriers too great. We have since moved to a far more inclusive approach that utilises many of the tools our engineers already work with daily. I’ll talk about how we did this for the most recent project I worked on.
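To give a flavour of the “everyday tools” approach, here is a stdlib-only sketch of a load harness: a thread pool fires concurrent requests and reports latency percentiles. `call_service` is a placeholder, in real use it would be an HTTP call to the system under test, and nothing here is specific to our actual setup.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Stand-in for a real call, e.g. requests.get(...) against the service."""
    time.sleep(0.01)  # simulate ~10ms of service latency

def run_load(workers, requests):
    """Fire `requests` calls across `workers` threads; report latency stats."""
    def timed_call(_):
        start = time.perf_counter()
        call_service()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_call, range(requests)))
    return {
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(len(latencies) * 0.95) - 1],
    }

print(run_load(workers=4, requests=40))
```

A harness like this lives in the same repository, runs under the same CI, and is readable by every engineer on the squad, which is precisely what the enterprise tools were not.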
Please don’t use mocks or stubs in tests. While they are seemingly ubiquitous in enterprise development, they have serious drawbacks, and typically mask easily fixable deficiencies in the underlying code.
Even their most ardent defenders concede that mocks and stubs have flaws. These include:
Dependence on fragile implementation details
Mocks and stubs require intimate knowledge of how code interacts with other modules. Even if the implementation is correctly refactored without altering public contracts, these tests will tend to break, and draw your attention away from more productive tasks.
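A small invented example shows the fragility. The mock below pins the test to the exact call pattern `total_price` happens to use today, so a behaviour-preserving refactor (say, batching the lookups into one `prices_of` call) would break the test even though the public contract, the total, is unchanged.

```python
from unittest.mock import Mock

def total_price(catalogue, skus):
    # Today: one lookup per SKU. Tomorrow we might batch the calls.
    return sum(catalogue.price_of(sku) for sku in skus)

catalogue = Mock()
catalogue.price_of.side_effect = [100, 250]

# The contract: two items cost their sum.
assert total_price(catalogue, ["apple", "pear"]) == 350

# The fragility: this couples the test to the current call pattern.
# Refactor to a single batched lookup and this line fails, while the
# behaviour users care about stays exactly the same.
assert catalogue.price_of.call_count == 2
```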
Testing incidental properties with no bearing on correctness
What is the point of this code? This is an essential question to ask in order to understand it. Tests have a story to tell here, and mocks invariably tell the wrong one. Is the point of makeCoffee() that we made a coffee, or that we opened the fridge to get the milk? When we payShopkeeper(), do we care that we completed a transaction, or that we rummaged through our wallet for change? When mocking tests fail, the poor maintainer is left to reconstruct the real intent from a trail of indirect clues and anecdotes.
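Sticking with the coffee analogy, here is a sketch (the `Fridge` and `make_coffee` names are invented for illustration) of a test that asserts the outcome, we ended up with a coffee, rather than the incidental fact that the fridge was opened along the way.

```python
class Fridge:
    def __init__(self, contents):
        self.contents = set(contents)

    def take(self, item):
        self.contents.remove(item)  # raises KeyError if we're out of it
        return item

def make_coffee(fridge):
    return f"coffee with {fridge.take('milk')}"

# Assert the point of the code: a coffee was made.
cup = make_coffee(Fridge({"milk", "eggs"}))
assert cup == "coffee with milk"
```

If `make_coffee` is later reworked to fetch milk some other way, this test keeps telling the right story; a `fridge.take.assert_called_once_with("milk")` style test would not.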
Web of lies
It is good practice to write data structures that are correct-by-construction; any constructor or sequence of method calls is guaranteed to leave the data in a meaningful state. Stubs introduce test-only fictions that are stripped of any of the safety latches and guarantees that may have been built in; they introduce fresh sources of error that are not present in the codebase. There is no value in detecting any failure that arises in such a way.
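As a small illustration, here is an invented correct-by-construction value type: the constructor makes a negative amount unrepresentable, while a stub cheerfully fakes a state the real code forbids, so any test failure it provokes corresponds to nothing that can happen in production.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int

    def __post_init__(self):
        # Correct-by-construction: a negative Money can never exist.
        if self.cents < 0:
            raise ValueError("Money cannot be negative")

# The real constructor enforces the invariant...
try:
    Money(cents=-100)
except ValueError as e:
    print(e)  # -> Money cannot be negative

# ...but a hand-rolled stub bypasses it entirely, creating a fiction
# the production codebase guarantees is impossible.
class StubMoney:
    cents = -100
```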
As time goes on, lies beget more lies. It is not unusual for a stubbed input in one place to result in another here, and another there; the fiction leaks and spreads into some kind of evil facsimile of the original code, but with more bulk, complexity and defects.
We use Validation to encode our business rules. It’s a different way of thinking, and it builds on the Scalaz library.
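The core idea behind Scalaz’s `Validation` (specifically `ValidationNel`) is that each rule yields either errors or a value, and combining rules accumulates every failure instead of stopping at the first. Here is a rough Python analogue of that idea, not Scalaz itself, with invented rules, just to show the accumulation:

```python
# Each rule returns (errors, value): an empty error list means success.
def validate_name(name):
    return ([], name) if name.strip() else (["name is empty"], None)

def validate_age(age):
    return ([], age) if 0 <= age <= 130 else ([f"age {age} out of range"], None)

def build_person(name, age):
    """Combine rules applicatively: collect ALL errors, not just the first."""
    errors = validate_name(name)[0] + validate_age(age)[0]
    if errors:
        return (errors, None)
    return ([], {"name": name, "age": age})

print(build_person("", 200))
# -> (['name is empty', 'age 200 out of range'], None)
```

In Scala this shape falls out of `ValidationNel`’s applicative combinators; the payoff in both languages is that a user submitting a bad form sees every problem at once rather than fixing them one round-trip at a time.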