AWS Custom Resources arrived on the scene around the beginning of the year and looked like the perfect remedy for the lack of communication into and out of a CloudFormation stack. In a nutshell, they fit nicely into your template as a snippet of JSON, let you pass variables between your stack and third parties, and give you the ability to kick off external scripts during your create, update and delete stack operations.
The big win is that all the stuff you’d otherwise wrap up in your deploy scripts as additional calls becomes a single atomic “create-stack” operation. The rollback, update and delete logic is handled by CloudFormation, so you don’t have to write it yourself in bash or whatever you call the AWS CLI with.
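As a sketch of how this looks in practice, a custom resource is declared like any other resource in the template, with a ServiceToken pointing at the SNS topic whose subscriber handles the create, update and delete events. The resource name, properties and ARN below are made up for illustration:

```json
{
  "Resources": {
    "DnsEntry": {
      "Type": "Custom::DnsEntry",
      "Properties": {
        "ServiceToken": "arn:aws:sns:ap-southeast-2:111122223333:cfn-custom-resources",
        "HostName": { "Ref": "AppHostName" }
      }
    }
  }
}
```

Whatever data the handler sends back can then be read elsewhere in the same template with Fn::GetAtt, which is what makes the "pass variables in and out" part work.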
How I learned to stop worrying and love bash scripting
It’s 3am and PagerDuty is waking you up. You just want to re-deploy an app because that’s the quickest way to get things going again and you like sleep. But wait – it turns out this deploy relies on a version of ruby you don’t have, the bundle won’t install because nokogiri is having problems and you wonder if gardening might have been a more rewarding career.
Bash scripting provides a simple way to get things done and avoids many of the above dramas. However, we hear that bash scripts are ok for really short things that can be tested manually and anything else is better left to a “Real Programming Language™” (whether ruby qualifies is left as an exercise for the reader).
bash-spec-2 to the rescue. With this nifty little testing framework you can TDD your way to a set of comprehensible, documented functions.
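To give a feel for the approach, here is a plain-bash sketch of the kind of small, testable function the post advocates extracting from deploy scripts, with a spec-style assertion. The function, bucket name and the expect_equals helper are illustrative, not bash-spec-2’s actual syntax:

```shell
#!/usr/bin/env bash
# A small, single-purpose function of the kind worth extracting and
# testing: build the S3 path for a deploy artifact. Names are made up.
artifact_path() {
  local app="$1" version="$2"
  echo "s3://deploy-bucket/${app}/${app}-${version}.tar.gz"
}

# A minimal spec-style check in plain bash; bash-spec-2 provides richer
# describe/it/expect helpers, but the shape is the same: assert on a
# function's output.
expect_equals() {
  local actual="$1" expected="$2"
  if [ "$actual" = "$expected" ]; then
    echo "PASS: $expected"
  else
    echo "FAIL: expected '$expected', got '$actual'" >&2
    return 1
  fi
}

expect_equals "$(artifact_path myapp 1.2.3)" \
  "s3://deploy-bucket/myapp/myapp-1.2.3.tar.gz"
```

Once functions are this small and pure, the 3am failure modes shrink: there is no interpreter version or gem bundle to break, just bash and a test run.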
One of the major problems with the term big data is that “big” is relative. If you’re at a conference or reading articles about big data, you’re never told how much data you need to be dealing with before the tools under discussion are worth using. You’re probably getting all excited about the new tools, buzzwords flying everywhere, thinking “I should learn all these things, my data is huge!”. Here’s a tip: it’s probably not that big.
Tools like hadoop (and EMR, which makes hadoop easier) came about because scaling vertically is expensive and scaling horizontally was hard, so they sought a way to make the latter easier. The problem with making scaling easier is that people don’t try to optimise first. That can be fine if you’ve got more money than time; you always have to make a trade-off in those situations. If it takes you a month to optimise something and you’re strapped for time, just throw another 10 servers at it, things will be fine, right? Well, maybe for 6 months.
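A quick way to apply the “is it actually big?” test is to compare your data set against the RAM of a single large machine before reaching for a cluster. The 256 GB figure and the sizes below are illustrative assumptions, not from the post:

```shell
#!/usr/bin/env bash
# Sanity-check before reaching for hadoop: does the data even exceed the
# RAM of one big box? (Measure real data with: du -sb /path/to/data)
fits_on_one_box() {
  local data_bytes="$1"
  local ram_bytes=$((256 * 1024 * 1024 * 1024))  # one 256 GB instance
  [ "$data_bytes" -lt "$ram_bytes" ]
}

if fits_on_one_box 50000000000; then   # 50 GB of "big" data
  echo "Fits in memory on one machine - optimise before you distribute."
fi
```

If the answer is yes, a month of optimisation on one machine will often beat ten more servers and the operational overhead that comes with them.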
In October 2014 we ran our 17th Hack Day event at REA, this time focusing on social issues and hacking on outcomes to benefit the wider community. We partnered primarily with 4 charitable organisations:
In addition, participants were encouraged to collaborate with other community groups or charities on a project of their choice. 10 teams were formed in total with a focus on delivering real value to people in need.
The term Devops is no doubt one of the most worn-out concepts in IT these days, maybe only slightly overtaken by the emerging buzzword of the moment: docker, docker, docker. But of the vast range of topics Devops can cover, the one that attracts me most lately is how to build the Operations culture in an organisation: bringing down the walls, a sense of ownership, the capability to operate a service or system, and so on.
At REA we are quite proud of the Devops culture in the organisation, and we work really hard towards creating highly autonomous cross-functional teams that can operate efficiently end to end. This includes the ownership and operation of the services and products each team builds and maintains. This vision of Operations as a capability and responsibility within a team, rather than a separate team or a static role for certain engineers, has been delivering really good results since we started championing it a few years ago, hence our willingness to keep committing to it. There is probably no single way to boost this capability. Sometimes we achieve it by embedding an Ops specialist in the team, whose mission and passion is to enable and train the rest of the team. In general, though, it means we are committed to investing in raising the technical capability and awareness of all our engineers, and to providing them with the level of access to our systems, data and services they need to do their jobs efficiently.
In this post I am going to focus on one of the ways we are trying to build and improve that Operations capability across the company: a set of community-driven activities that we have called the Ops Dojo. The idea behind the concept is not new for REA; we already had several ad-hoc training sessions, plenty of brownbags and a strong culture of sharing what we learn across the organisation. The Ops Dojo is simply an initiative to have even more of that, and to build a community that follows it up and makes it happen regularly. It is just one of the guild initiatives on the rise at REA as a mechanism for sharing knowledge and passion, and today these cover topics as diverse as Security, Public speaking, Delivery Engineering, Agile, Happiness and more.
Over the last year at realestate.com.au (REA), I worked on two integration projects that involved synchronising data between large third-party applications. We implemented the synchronisation functionality using microservices. Our team, along with many others at REA, chose a microservice architecture to avoid the problems associated with the “tightly coupled monolith” anti-pattern, and to make services that are easy to maintain, reuse and even rewrite.
Our design used microservices in 3 different roles:
Stable interfaces – in front of each application we put a service that exposed a RESTful API for the underlying domain objects. This minimised the amount of coupling between the internals of the application and the other services.
Event feeds – each “change” to the domain objects that we cared about within the third party applications was exposed by an event feed service.
Synchronisers – the “sync” services ran at regular intervals, reading from the event feeds, then using the stable interfaces to make the appropriate data updates.
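The three roles above can be sketched end to end. In the real system the event feed and the stable interface would be HTTP services (polled and called with something like curl); here they are stubbed as functions so the flow is self-contained and runnable. All names, event shapes and the sequence-number cursor are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Event feed: each line is "<sequence> <entity> <change>" - a stand-in
# for the feed service's list of domain-object changes.
event_feed() {
  cat <<'EOF'
1 listing-42 price_updated
2 listing-99 created
3 listing-42 sold
EOF
}

# Stable interface: would PUT/POST against the RESTful API in front of
# the target application; stubbed here to just report what it would do.
apply_update() {
  echo "sync: applying '$2' to $1"
}

# Synchroniser: runs at a regular interval, resumes from the last
# sequence number it processed, and pushes each new change through the
# stable interface.
last_seen=1
event_feed | while read -r seq entity change; do
  [ "$seq" -le "$last_seen" ] && continue   # already processed
  apply_update "$entity" "$change"
done
```

The cursor is what makes the synchroniser safe to re-run: a crashed or rescheduled run picks up from the last sequence it recorded rather than replaying the whole feed.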