The abject failure of weak typing

More discussion on Hacker News, and the Clojure, Haskell, Rust and Programming subs on Reddit.

Over the last few years of maintaining code old and new at REA, it has become abundantly clear that the neglect and misuse of type-systems has had a sharply negative impact on our codebases. Here we address the concrete causes and consequences, and propose concrete and achievable solutions.

Types at REA

REA’s codebases vary between statically typed languages such as Java and Scala on one hand, and “dynamic” languages such as Ruby, Javascript, and Perl on the other. I’ll mostly leave the dynamic ones alone — while the same arguments obviously apply, they are what they are. However, if we are paying for the conceptual overhead of a static type system, we need to be sure we are using it effectively; otherwise we are simply writing the same old Ruby play-dough but with long build-times and cumbersome syntax. Used properly, the tradeoff of static types is overwhelmingly beneficial.

What’s in a type?

A type is some logical proposition about a codebase, where an implementation is its proof. They are distinct from runtime tags, which are what people mostly mean when they talk about dynamic “types”. Terminology can vary, but I’ll stick to this usage. For example:

  • Haskell has a spectacularly rich static type system, with no runtime-accessible metadata, apart from pattern matching on tagged unions.
  • Dynamic languages such as Ruby are not really untyped, but rather unityped; statically, we know that everything inhabits a single message-receiving type. At runtime, of course, they employ tags to differentiate numbers from strings, arrays, maps, and user-defined structures.
  • Java has both a type system applied by its compiler, and a tag system at runtime that allows reflection, casting, RTTI, and various dynamic features. Neither is a compelling specimen.

Types as design

Even if the language we use doesn’t have a notion of types, you’d better believe they exist — how can we even write code without statically reasoning about it first? A maintainer must construct a model in their heads: what keys do we assume are in this map? Might this thing be null? What are the allowed values of this symbol? Can this string be empty? What messages will this object respond to?

Many dynamic-typing advocates have a curiously limited view of what a “type” is – in fact, almost any proposition we can statically make about the code can be represented as a type. Language choice notwithstanding, there is an enormous amount of information that fits into this category; by proving it up-front we can drastically reduce the number of incorrect programs that are even expressible. As the saying goes, don’t write software that isn’t broken; write software that can’t be broken.

Types are a powerful tool for clarifying thoughts, and designing correct software, arguably far more so than popular test-driven methodologies.

Types done wrong

Essentially anything that can be cleanly and obviously known about the code up-front belongs in a type. The most egregious failures here often stem from using the most common everyday concepts — strings, exceptions, primitives, maps, nulls, typecasts and so forth. Many readers of this post will, like its author, find something to feel guilty about here. Let’s take a tour:

Null

The harm represented by nulls is hopefully widely understood by now, but bears repeating.

Any value that we know could be null cannot be directly accessed by correct software; we must either surround usages with an if-guard, or employ some kind of harmless Null Object that can hopefully respond in a sensible way. Null is a bald-faced lie told by the type system in Java, C# and Scala, and to the developer's mental model in Ruby. If a variable claims to be a Banana, surely you can feel justified in peel()-ing it? If it is null, then it is no banana at all, but a ticking time bomb waiting to explode, potentially at an unrelated line of code far away. Well-written code cannot tolerate even the possibility.

The proliferation of duplicated defensive code at numerous locations is a further burden, which bloats both code and tests, while reducing quality.

The fix

  • Never permit nulls in code you control, and firmly regulate the contact points of systems and frameworks you don’t.
  • If a type has a sensible “empty” or default value that can fulfil the contract of your type, then initialise variables to this, or employ a Null Object.
  • Avoid mutable entities that need to be initialised piecemeal after creation. Write immutable objects that are immediately and fully initialised from constructor input.
  • If a particular variable might or might not be present, then this should be encoded in the type system using an option type, like Scala’s Option, Function Java’s Option, Java 8’s Optional, or Haskell’s Maybe. This correctly represents the uncertainty in the type system, so that any access is safe, simply by the fact that it compiled.
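As a sketch of the Option approach in Scala (the names here are illustrative, not from any real codebase):

```scala
// A hypothetical lookup that may or may not find a user's nickname.
// Returning Option[String] makes the possible absence explicit in the
// type, so callers must handle both cases before the code will compile.
def findNickname(userId: Int): Option[String] =
  Map(1 -> "sam", 2 -> "alex").get(userId)

// Safe access: no null checks, and no NullPointerException possible.
val greeting: String =
  findNickname(1).map(n => s"Hello, $n!").getOrElse("Hello, stranger!")

val missing: String =
  findNickname(42).map(n => s"Hello, $n!").getOrElse("Hello, stranger!")
```

Because absence is part of the type, no usage can compile without accounting for the missing case.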

Exceptions

Exceptions are the primary error-handling mechanism employed by many widely-used languages. They are also a side-effect that makes a liar of the type system, and makes local reasoning about code far more difficult. They represent an undeclared method result smuggled through a back-channel separate from its declared return type. Furthermore, they transitively become an undeclared result of anything that calls that method, and anything that calls that, and so on. Trying to reason about the correct behaviour of code becomes very difficult, since the return type can no longer give you enough information. Exceptions kill modularity and inhibit composition.

Java awkwardly attempts to mitigate this with checked exceptions; they become a fully-fledged, type-checked part of a method signature. While this is better from a type-safety point of view, they still use an exotic second channel for returning results totally incompatible with the first, require an insufferable amount of handling code, and have far poorer tools for abstraction and reuse. Checked exceptions are widely despised by Java programmers, and frequently ignored by library authors.

The fix

  • Don’t throw exceptions in code you control, except in the most irretrievably broken circumstances.
  • When dealing with code you don’t control, catch their exceptions as soon as possible and lift the various results into your return type.  In Scala, the easiest way to do this is the Try type, which directly lifts the result into an ADT of Success(yourValue) or Failure(thrownException).
  • Exclusively encode possible function results in the return type. Don’t throw that AuthenticationException for a totally plausible and normal outcome! Here are some alternatives:
    • When there is a main result alongside a possible failure result, use an existing Either or Validation type.
    • Define your own Algebraic Data Type (ADT) that describes the possible alternatives. For instance, in Scala or Java, this can take the form of a closed mini-class hierarchy.
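For illustration, here is roughly what these alternatives look like in Scala. The AuthFailure ADT and its cases are hypothetical names, not a prescribed API:

```scala
import scala.util.{Try, Success}

// Wrapping code we don't control: Try lifts a thrown exception into
// a value of Success(result) or Failure(exception).
def parsePort(s: String): Try[Int] = Try(s.toInt)

// Encoding a plausible failure in the return type with Either,
// instead of throwing an AuthenticationException.
sealed trait AuthFailure
case object UnknownUser extends AuthFailure
case object BadPassword extends AuthFailure

def authenticate(user: String, password: String): Either[AuthFailure, String] =
  if (user != "admin") Left(UnknownUser)
  else if (password != "s3cret") Left(BadPassword)
  else Right(s"token-for-$user")
```

Callers of authenticate now see every possible outcome in the signature, and the compiler will insist they deal with the Left cases.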

Primitives

Primitive values such as integers and strings are often the first tools we reach for, but are woefully unsuited to most use-cases they are press-ganged into. This is because they have an astronomical number of possible values, and most use-cases do not.


Consider this function:

def blah(b: Boolean): Boolean

A function A -> B has B^A possible implementations, determined by the number of inhabitants of A and B. So this function has 2^2 = 4 possible implementations. Perhaps we could write a test case for each one.

Now consider this function:

def compare(a: Int, b: Int): Int

This one has not only 2^32 possible results, but an incalculable (2^32)^(2^64) possible implementations. Even if we are generous and only allow the inputs to be 0 or 1, we still have 2^128 implementations. If the only meaningful return values are -1, 0 and 1, indicating “less than”, “equal” or “greater than”, then why would we embed them like specks of dust in a big desert of undefined behaviour?

If we encode the return value as an ADT representing the 3 possible results, as Haskell does, then we have a positively civilised 3^4 = 81 possible mappings from input to output. Even in such a simple case, a more precise return type pruned 340 undecillion incorrect programs from the sphere of existence.
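A sketch of such an ADT in Scala, mirroring Haskell's Ordering type (LT, EQ, GT); the names are illustrative:

```scala
// Exactly three possible results, so the return type admits no
// "specks of dust in a desert of undefined behaviour".
sealed trait Comparison
case object LessThan extends Comparison
case object Equal extends Comparison
case object GreaterThan extends Comparison

def compare(a: Int, b: Int): Comparison =
  if (a < b) LessThan
  else if (a > b) GreaterThan
  else Equal
```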

Remember, that was an utterly trivial example. So what happens when you have complex/composite/nested data structures, exceptions and even side-effects? How many of the possibilities are even remotely meaningful in your domain? Without precise types, how many were you hoping to reach with your TDD and “100% code coverage”?

Strings

Strings are perhaps the most commonly used data type, due to their immense versatility; however, they are rarely appropriate. Strings consist of a sequence of characters. This is the perfect representation for unstructured free text, and nothing else. Any restriction, structure or constraint in the format of the string means you don’t have a string at all; you have a URL, a Date, a Name, an Email, a Document, an ID, a Warning, or whatever else that might tempt you to deploy this amazing swiss-army-type.

Not only does “stringly typed” code result in a catastrophic expansion in the number of expressible incorrect programs, it inevitably results in duplication, as the validation, destructuring and restructuring code is repeated in every spot where the string format is supposed to be in use.

The fix

  • If there are a finite number of possibilities, use an ADT to represent them. This internally prevents a vast number of incorrect implementations, and externally prevents a vast number of incorrect usages.
  • Use a wrapper type to encapsulate the desired structure; make it impossible to create invalid instances. This will also cull incalculable absurdities inside and outside of the function. This costs one line of code in Scala.
  • Wrapper types can sometimes normalise errant input in their constructors, seamlessly eliminating redundant or invalid states.
  • As a last resort, throw exceptions inside constructors to prevent any remaining possibility of an invalid instance.
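A minimal Scala sketch of a wrapper with a smart constructor; the Email name and the validation rule are illustrative only, and a real implementation would validate far more strictly:

```scala
// The private constructor means an Email can only be obtained through
// fromString, so every instance is valid by construction.
final case class Email private (value: String)

object Email {
  // Rejects obviously malformed input; normalises by trimming whitespace.
  def fromString(s: String): Option[Email] =
    if (s.contains("@") && s.trim.nonEmpty) Some(new Email(s.trim))
    else None
}
```

Downstream code that receives an Email never needs to re-validate it; the duplicated checking code disappears.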

Records vs Domain modelling

A couple of years ago, we were keen to avoid over-specific domain modelling, and took care to build our services as dealing in the domain of “records” or “attribute-maps”, rather than specifically tie our logic to Listings, Agents, Dogs, Cats, Aeroplanes or what-have-you. This was to prevent the loss of generality, and to keep the application focused on web and infrastructural concerns. So following the famous Alan Perlis line “It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures”, we decided on a unityped representation of domain data.

In an absolute sense, I can’t say whether this is a good idea or not. It’s potentially a totally valid point of view — perhaps the detail and structure of the record data is of no relevance to the code that provisions, streams, encodes, stores, secures and displays it — as long as it stays that way. In practice, however, it has left our codebase with serious flaws.

Firstly, the approach of using unityped records is totally predicated on the application not needing to know anything about their structure, beyond the obvious tree-shape. When the application suddenly needs to, say, differentiate “floorplan” from “main” images, or know if a “logo” was included, then the concept is doomed. This knowledge, completely understood at compile time, must be expressed in clumsy string-based map retrieval, faith-based typecasting and a total absence of any way to reason about the correctness, or even the intended behaviour of the code.

Either business logic has to be strictly forbidden from this unityped pipeline, or we need the types of the data to truly reflect the knowledge we have, and need, at compile time.

Secondly, even if the unityped record approach was correctly chosen, using, as we did, Map<String, Object> was a clear blunder. The verbosity and repetition are appalling; every single access, construction, iteration, de-reference has to be laboriously performed by hand at myriad locations around the codebase. Even if all we need to know is “it’s a map with certain behaviours and constraints”, then we should have encoded that in the types, and created some sort of Record class. In this case, it was also compounded by the use of Java (pre-8), which has extremely poor facilities for abstracting over collections and maps.

The fix

  • The static knowledge we have, or require, about our domain objects should be captured in types.  If there is a clear case that we don’t need to differentiate between this-or-that domain object, then that should be captured in the types.
  • Quickly translate interchange formats into data structures that reflect the knowledge we require, and can only hold meaningful and valid states. There are mapping tools in Java and Scala that can safely map between JSON and typed objects.
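To make the contrast concrete, here is an illustrative before-and-after in Scala; the Listing shape is hypothetical:

```scala
// Stringly-typed record: every access needs a correctly spelled key
// and a faith-based cast; nothing stops us asking for "imagez".
val rawListing: Map[String, Any] =
  Map("id" -> "123", "images" -> List("main.jpg", "floorplan.jpg"))
val imagesUnsafe = rawListing("images").asInstanceOf[List[String]]

// Typed record: the structure is checked by the compiler, so a
// misspelled field or wrong type is a compile error, not a runtime surprise.
final case class Listing(id: String, images: List[String])
val listing = Listing("123", List("main.jpg", "floorplan.jpg"))
val images = listing.images
```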

Names are overrated

Let’s look at a function:

def findAddress(userId: String): String

What does it do? Are you sure?

Now lets look at another:

def blah(foo: UserId): Address

Which one tells you more about its purpose — the one with the businessy names, or the one with the types?

Naming has a role to play, but consider what it really does. It is a mnemonic, a reference that helps you uniquely recall a concept. While this is fine as far as it goes, names are totally useless for reasoning about software. For documentation, they are as poor as comments, or Word docs. Implementing a precise type signature proves that the software does what it says on the tin. If the types in question have been carefully designed to prevent invalid states, then often there will be only a handful of possible implementations, or even one — not a number you’ve never heard before, ending in -illion.

The fix

  • Treat your types as the only real documentation.
  • Constrain argument and return types to a named alternative, that limits possible states. This is all maintainers should need to know about your functions.
  • Wrapper types start from one line of code in Scala.
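To illustrate, with hypothetical wrapper types in place, even a meaningless name leaves little room for doubt about what the function does:

```scala
// Illustrative wrappers; the address value here is a placeholder.
final case class UserId(value: String)
final case class Address(value: String)

// The signature alone says "given a user, produce an address";
// very few implementations of this type even exist.
def blah(foo: UserId): Address =
  Address(s"1 Example St (user ${foo.value})")
```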

The way forward

While I’ve listed a variety of different problems, you’ll notice a lot of repetition in the proposed solutions. In fact, we can solve these problems very simply, using only a few techniques — especially if we continue our adoption of Scala in place of Java and weakly-typed languages.

Algebraic Data Types (ADTs)

ADTs are a powerful tool for us here, because they allow us to encode limited possibilities in the type system, so that invalid combinations are inexpressible in a well-typed program.

They are called “algebraic” because they are sums and products of other types. (Sums are like OR, and products are like AND).

For instance, a List in Scala is defined as an ADT — it is a Cons of a value AND another List, OR an empty List. In Haskell, this could be written simply as:

data List a = Cons a (List a) | Empty

In Scala, at the cost of some more characters, we could encode this as a mini-class hierarchy:

sealed trait List[+A]
case class Cons[A](a: A, rest: List[A]) extends List[A]
case object Empty extends List[Nothing]

This is almost a straightforward Java-style class hierarchy, but notice the sealed keyword: unlike normal OO classes, List cannot be extended, except by the classes below it. Without this feature, the number of possible outcomes would still be totally unbounded. In Java, we can still benefit from using this style, but the code required to manually write accessors, constructors, threading through arguments, correct hashcode/equals implementations and unit tests is significant, and error prone.

OO lore has it that pattern matching is evil, and that subtype-polymorphism is the answer to all questions. This is false; there are complementary pros and cons to subtype polymorphism and pattern matching. Since there are only a few fixed cases, it is perfectly idiomatic and sensible to pattern match on ADTs; the Scala compiler will even complain if we haven’t matched every possible eventuality.

myList match {
  case Cons(a, rest) => println(s"the head is $a, the rest is $rest")
  case Empty => println("Nothing to see here")
}

ADTs are a handy weapon in our war against buggy code!

Wrapper types

Wrapper types are one of the best ways to avoid the buggy swamplands of code written with bare strings and primitives. In Scala, this starts at almost no effort:

case class Angle(radians: Double) extends AnyVal

“Case classes” in Scala are (mostly) like any other class, except that the compiler will generate useful functionality. Scala will automatically bind the constructor parameter to an immutable field exposed through accessor methods, generate correct equals and hashCode, a default toString, and a pattern matching extractor. Hugely useful.

Value Classes: eliminating runtime overhead

Explicitly saying extends AnyVal makes our class a Value Class. This class won’t even exist at runtime — it provides type safety in the compiler, then vanishes!

Normalising and validating input

We can get all the benefits of classes here – we can define our own operations, and normalise or validate the constructor input. Here are some examples of this technique applied to Angles and Percentages.  Note they are correct-by-construction; any instance of this type is guaranteed to represent a valid and normalised value.
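Since the linked examples are not reproduced here, this is a rough sketch of a self-normalising Angle in Scala; the exact normalisation rule is an assumption:

```scala
// Private constructor: every Angle must pass through the normalising
// smart constructor, so instances are correct by construction.
final case class Angle private (degrees: Double) {
  // Operations stay closed under the invariant: results are re-normalised.
  def +(other: Angle): Angle = Angle.of(degrees + other.degrees)
}

object Angle {
  // Normalises any input into [0, 360).
  def of(degrees: Double): Angle =
    new Angle(((degrees % 360) + 360) % 360)
}
```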

We should make far wider use of wrapper classes:

case class Email(email: String)
case class Password(password: String)
case class SecurityToken(token: String)
case class ConsumerId(id: String)
case class AgentId(id: String)
case class Price(cents: BigInt)
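For example (repeating two of the wrappers above so the sketch stands alone), distinct wrapper types make swapped arguments a compile error rather than a runtime bug; the login logic is a placeholder:

```scala
final case class Email(email: String)
final case class Password(password: String)

// A bare (String, String) signature would happily accept the
// arguments in either order; these types will not.
def login(email: Email, password: Password): Boolean =
  email.email == "dev@example.com" && password.password == "hunter2"

val ok = login(Email("dev@example.com"), Password("hunter2"))
// login(Password("hunter2"), Email("dev@example.com"))  // does not compile
```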

Types are low-hanging fruit

While there are no silver bullets, there’s an awful lot of low-hanging fruit just lying around. Let’s pluck it! We can make major improvements in our software quality, even with minor adjustments to our coding style. Code can be easier to reason about, with vastly fewer ways to fail, at a very low cost.

Types, kindly bestowed upon us by some languages, are a magnificent tool to improve quality. They prove desirable properties of our code; we should make it our business to put as much code in their reach as we can! There will always be a point where types have no more to say, and must pass the quality baton to tests. Consider though, how much less work tests must do, and how much less code they must expend, when entire universes of nonsense have been prohibited from existence.

In particular, by eschewing exceptions, using Algebraic Data Types to model the precise shape of our data, and wrapper types to constrain crude Strings and primitives, we can make immediate gains before we even get to more advanced abstractions like typeclasses and higher-kinded types.

In Java, much of this has been known for a long time, but the language’s lack of support for value-based classes, ADTs and pattern-matching has meant that good practices are often discarded as prohibitively cumbersome or expensive. Regrettably, despite the welcome addition of lambdas, Java 8 provides little respite.

In languages like Haskell and Scala, these methods are so cheap as to be no-brainers; in new projects you have no excuse for passing up these delights!

Either way, I hope that I’ve convinced you of the good news — there are plentiful green fields of easy code-improvement ahead, before we even get close to tough tradeoffs.