There’s a pattern I keep recommending to teams, because it helps separate concerns around I/O: sending and receiving things over a network, interacting with AWS, saving and loading things from data stores. It’s an old idea; you’ll find it filed under “Decorator” in your Gang of Four book.
For our purposes here, this is a compositional pattern that lets us stack onion-rings of concern around an I/O boundary: strings on the wire, transfer protocols (e.g. HTTP), formats (e.g. JSON), business concepts. No piece of code should be concerned with more than one of these at once; they can be structured as stackable layers, and other modules can pick the level of detail they wish to talk to.
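To make the layering concrete, here is a minimal sketch (all class and method names are mine, not from the post): each ring wraps the one beneath it and handles exactly one concern, and a caller picks the ring it wants to talk to.

```javascript
// Innermost layer: strings on the wire (faked here with an in-memory store
// so the sketch is self-contained; in reality this would be a socket, S3, etc.).
class Wire {
  constructor() { this.data = {}; }
  send(key, str) { this.data[key] = str; }
  receive(key) { return this.data[key]; }
}

// Next ring: a format concern (JSON). Knows nothing about transports.
class JsonLayer {
  constructor(inner) { this.inner = inner; }
  put(key, obj) { this.inner.send(key, JSON.stringify(obj)); }
  get(key) { return JSON.parse(this.inner.receive(key)); }
}

// Outermost ring: a business concept. Knows nothing about JSON or wires.
class ListingStore {
  constructor(inner) { this.inner = inner; }
  save(listing) { this.inner.put(listing.id, listing); }
  find(id) { return this.inner.get(id); }
}

const store = new ListingStore(new JsonLayer(new Wire()));
store.save({ id: 'l1', address: '511 Church St' });
console.log(store.find('l1').address); // → 511 Church St
```

Because each ring takes its inner layer as a constructor argument, any layer can be swapped or tested in isolation — which is the whole point of the decorator stack.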
With the advent of DevOps, developers have been working more and more in Operations and Infrastructure. Testers, however, have not.
Thus far, testing personnel have been mostly or wholly assigned to application testing work. As SOFTWARE testers, we have only worked on software, and then mostly only on application software.
I pose the questions: What about infrastructure as code? Should that not be explicitly tested?
And: if Testers are meant to be testing the system, why have they not been testing the whole system, infrastructure included?
I am going to make a case here for including QA in Operations and Infrastructure, by clarifying how I see QA fitting into the DevOps world.
Hark! What is this Jest you speak of?
Think of it as several layers of improvement stuck on top of Jasmine. Some of the neat features Jest provides are:
- Automatically finds tests to run in your project
- Has built-in support for fake DOM APIs, such as jsdom, so you can run DOM-dependent tests from the command line
- You can test asynchronous code more easily using built-in mock timer functions
- Tests are run in parallel so they go faster! Vroom vroom.
But the big drawcard is Jest’s automatic mocking of CommonJS dependencies pulled in through require(). Instead of specifying all the dependencies you want mocked, you do the opposite: every module is mocked automatically, and you opt just the subject under test out with jest.dontMock().
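A sketch of what that looks like in a test file (module and function names are mine, and this assumes an older-Jest setup where automocking is on by default; it needs the Jest runner to execute):

```javascript
// __tests__/checkout-test.js — illustrative only.
// With automocking on, every require() returns an auto-mock;
// dontMock opts the subject itself back out so its real code runs.
jest.dontMock('../checkout');

describe('checkout', () => {
  it('pays the cart total without hitting the real gateway', () => {
    const checkout = require('../checkout'); // real implementation
    const gateway = require('../gateway');   // automatically mocked

    // Assumes checkout.pay delegates the total to gateway.charge.
    checkout.pay({ total: 42 });

    // gateway.charge is a Jest mock function, so we can inspect its calls.
    expect(gateway.charge).toBeCalledWith(42);
  });
});
```

The dependency never needs to be listed or stubbed by hand; Jest replaces it with mock functions whose calls you can assert on.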
On a Friday a few weeks ago, we deployed a set of minor changes to one of our Rails apps. That evening, our servers started alerting on memory usage (> 95%). Our initial attempts to remedy this situation by reducing the Puma worker count on each EC2 instance didn’t help, and the memory usage remained uncomfortably high through the weekend. On Monday, we popped open New Relic and had a look in the Ruby VM section. Indeed, both the Ruby heap and the memory usage of each web worker process had begun a fairly sharp climb when we deployed on Friday, after being totally flat previously:
However, over the same period of time, the number of objects allocated in each request remained fairly static:
If our requests aren’t creating more objects, but there are more and more objects in memory over time, some of them must be escaping garbage collection somehow.
Previously at REA we had specialised Load and Performance testing tools that were quite expensive and richly featured, but completely disconnected from our everyday development tools. The main outcome was that we ended up with a couple of engineers who were quite good at L&P testing with our enterprise tools, while the majority of engineers found the barriers too great. We have since moved to a far more inclusive approach that utilises many of the tools our engineers work with daily. I’ll talk about how we did this on the most recent project I worked on.
So, you’re writing microservices! You’re feeling pretty smug, because microservices are all the rage. All the cool kids are doing it. You’re breaking up your sprawling monoliths into small services that Do One Thing Well. You’re even using consumer driven contract testing to ensure that all your services are compatible.
Then… you discover that your consumer’s requirements have changed, and you need to make a change to your provider. You coordinate the consumer and provider codebases so that the contract tests still pass, and then, because you’re Deploying Early and Often, you release the provider. Immediately, your monitoring system goes red – the production consumer still expects the provider’s old interface. You realise that when you make a change, you need to deploy your consumer and your provider together. You think to yourself, screw this microservices thing, if I have to deploy them together anyway, they may as well go in the one codebase.
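One common escape from that lock-step deployment is to make the provider change backwards-compatible first — the "expand" half of an expand/contract migration. A sketch (field names are mine, not from the post): the provider serves both the old and the new shape until every consumer has moved over, and only then drops the old field.

```javascript
// "Expand" step: keep the old contract field alive alongside the new one,
// so old and new consumers can both be satisfied by the same deploy.
function providerPayload(user) {
  return {
    surname: user.lastName,  // old contract — existing consumers still pass
    lastName: user.lastName, // new contract — updated consumers use this
  };
}

const payload = providerPayload({ lastName: 'Smith' });
console.log(payload.surname, payload.lastName); // both present during migration
```

Once the contract tests show no consumer still expects `surname`, the "contract" step removes it — and provider and consumer never had to ship in the same release.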