At REA we happily use a variety of programming languages. Teams are given the freedom to choose a fitting language for a given project. Mostly this ends up being one of Ruby, Java, or Scala. However, there are some languages that we as developers and ops people get excited about, but whose viability as a mainstream REA language hasn't yet been established.
For me personally, Haskell is what I code on the weekends and I've been looking for a way to shoehorn it into my regular work 😉
Recently I learnt that REA does in fact have some Haskell in prod. Who owns it? What does it do? No one will ever know. However, as the story goes, it started out automating a manual task: it was simply a value-add that became a useful tool, and so avoided questions like 'just what is a monad anyway!?'.
Jim Gaylard and I, Haskell acolytes, attempted something similar on hack day.
Only 6 months have passed since our inaugural DevOps Girls bootcamp, and on Saturday 12th of August 2017 we had the pleasure of running the second edition. For those new to the bootcamp: it is a free event for women interested in learning more about devops, run by a community of passionate volunteers.
The idea started with a realisation that unless we are proactive about diversity in tech and we make meaningful contributions, it’s going to be hard to move the needle on this. The aim of the event is not only to train women but also to create a community of like-minded people who can provide support to each other.
Our first bootcamp went extremely well. From the moment it finished, all the organisers and collaborators, exhausted as they were, were already thinking: when are we doing this again? And of course, we did. We decided to run another introduction to AWS, iterating on the lessons that we learned in the first edition. Continue reading
Here in Consumer Insights we have been operating Big Data processing jobs using Apache Spark for more than 2 years. Spark powers our daily batch jobs, which extract insights from the behavior of the tens of millions of users who visit our sites. This blog covers our usage of Spark and aims to provide some useful insights for optimizing Spark applications, based on our experience.
Recently we launched a recommendation engine, which was built using AWS Serverless technology. The journey of implementing this solution turned out to be an interesting one on a number of levels. Now that it has been deployed to production, we thought it would be a good idea to share some of our lessons.
Bucket of Data
Essentially, the system transforms a very large dataset into smaller ones, which are used to create audiences or data segments for hyper-targeted EDMs.
To get from the initial state to the final state, the data is transformed over several stages using 8 Lambdas. Continue reading
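The post doesn't show the pipeline's code, but the shape of one such stage can be sketched. This is a minimal illustration only, assuming hypothetical record fields and segment rules; in the real system each stage would be a Lambda reading its input from and writing its output to S3, whereas here the "bucket" is just an in-memory sequence.

```scala
// Hypothetical record type -- the real pipeline's schema is not shown in the post.
final case class Listing(userId: String, region: String, priceViewed: Int)

// One illustrative stage: reduce a large dataset to a smaller audience segment.
def highValueSegment(records: Seq[Listing], threshold: Int): Seq[String] =
  records
    .filter(_.priceViewed >= threshold) // keep only high-value views
    .map(_.userId)                      // project down to the audience key
    .distinct                           // one entry per user
```

Keeping each stage a pure function like this makes it easy to test in isolation before wiring it into a Lambda handler.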
Deploying a high traffic website with zero downtime is a challenge – there’s a natural tradeoff between:
- Performance and cacheability.
- Getting updated versions of the application live.
The approach you use to manage your static assets plays a big role in this.
This post explains how we dealt with these challenges in our move from the data centre to a multi-region, highly available cloud-based architecture.
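One common way to resolve this tradeoff (not necessarily the exact approach REA used) is to fingerprint each asset's filename with a digest of its content. Fingerprinted files can be cached forever, and old and new versions can be served side by side during a deploy. A minimal sketch, assuming asset names always carry an extension:

```scala
import java.security.MessageDigest

// Embed a content digest in the asset name, e.g. "app.js" -> "app-5eb63bbb.js".
// A new release produces a new name, so long-lived caches never serve stale
// assets, and both versions remain addressable during a rolling deploy.
def fingerprint(name: String, content: Array[Byte]): String = {
  val digest = MessageDigest.getInstance("MD5").digest(content)
  val hash   = digest.map("%02x".format(_)).mkString.take(8)
  val (base, ext) = name.splitAt(name.lastIndexOf('.')) // assumes an extension exists
  s"$base-$hash$ext"
}
```

The HTML that references the assets is then the only thing that must be deployed atomically; everything it points at is immutable.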
A Journey into Extensible Effects in Scala
This article is an introduction to using the Scala Eff library, which is an implementation of Extensible Effects. This library is now under the Typelevel umbrella, which means it integrates well with popular functional programming libraries in Scala like Cats and Monix. I will not touch on the theoretical side of the concept in this post. Instead, I will be using code snippets to describe how you would introduce it to an existing Scala code base. This should hopefully improve the extensibility and maintainability of the code. As part of this, I will demonstrate how to build a purely functional program in Scala using concepts such as
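Before reaching for Eff, it helps to see the problem it addresses. A tiny sketch, using only the standard library and illustrative names: a program that mixes two effects (an optional lookup and a validated parse) forces you to plumb one monad through the other by hand, and the nesting gets worse with each effect you add. Eff instead tracks each effect as a member of one open effect stack.

```scala
// Without an effect library, mixing effects means translating between monads
// by hand. Here "missing value" (Option) and "failure with a reason" (Either)
// are combined manually.
def lookupPort(config: Map[String, String]): Option[String] =
  config.get("port")

def parsePort(raw: String): Either[String, Int] =
  raw.toIntOption.toRight(s"not a number: $raw")

def port(config: Map[String, String]): Either[String, Int] =
  lookupPort(config) match {
    case None      => Left("port not configured") // Option failure lifted into Either
    case Some(raw) => parsePort(raw)              // Either failure passes through
  }
```

With two effects this is tolerable; with reader, writer, error, and async effects stacked together, the manual translation dominates the code, which is the pain point Extensible Effects is designed to remove.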