One of the nice things about pure functional programming is that we can use mathematical properties and axioms to reason about, simplify and derive functions. A property that I’ve seen crop up a few times is associativity.
Associativity property
Associativity is a property of some binary operators that means the order in which the operator is evaluated does not matter. An operator ⊕ is associative if:

(a ⊕ b) ⊕ c = a ⊕ (b ⊕ c)
A familiar example is addition. We can safely write 1 + 2 + 3 and know we'll get the correct answer irrespective of the order in which it is evaluated, as (1 + 2) + 3 = 1 + (2 + 3). An example of a non-associative operator is subtraction, as (1 - 2) - 3 ≠ 1 - (2 - 3).
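Jumping ahead slightly to Haskell notation, a quick GHCi check makes the difference concrete (the numbers are just illustrative):

*Main> (1 + 2) + 3 == 1 + (2 + 3)
True
*Main> (1 - 2) - 3 == 1 - (2 - 3)
False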
Two operators ⊕ and ⊗ are said to associate if they can be evaluated in any order:

(a ⊕ b) ⊗ c = a ⊕ (b ⊗ c)
Operator associativity
Related to this mathematical property is operator associativity or fixity, which is essentially an exercise in parenthesis-saving. For non-associative operations where order of evaluation matters, we can define the operator as left-associative or right-associative depending on how we want it to evaluate in the absence of parentheses.
For a left-associative operator:

a ⊕ b ⊕ c = (a ⊕ b) ⊕ c
For a right-associative operator:

a ⊕ b ⊕ c = a ⊕ (b ⊕ c)
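In Haskell this choice is made with a fixity declaration. Here's a rough sketch; the `>-<` operator is made up purely for illustration, while subtraction's `infixl 6` declaration comes from the Prelude:

-- Subtraction is declared left-associative in the Prelude (infixl 6 -),
-- so 10 - 4 - 3 parses as (10 - 4) - 3, which is 3.

-- A hypothetical right-associative operator doing the same thing:
infixr 6 >-<
(>-<) :: Int -> Int -> Int
x >-< y = x - y

-- 10 >-< 4 >-< 3 parses as 10 >-< (4 >-< 3), which is 9.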
Applying to programming
In terms of functional programming, a binary operator is a two-argument function. We tend to refer to operators as functions used in infix position, or between arguments like a `f` b, as opposed to function notation, which is the more familiar `f a b` arrangement. A function can be associative regardless of the notation used. Our previous example could be rewritten in function notation as `(+) ((+) 1 2) 3 = (+) 1 ((+) 2 3)`.
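Both notations are interchangeable, and a couple of illustrative GHCi checks show the same associativity either way:

*Main> (+) ((+) 1 2) 3 == (+) 1 ((+) 2 3)
True
*Main> div 10 2 == 10 `div` 2
True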
Monoids
Associative functions seem useful for reasoning about folds, particularly when part of a monoid. A monoid is formed by an associative binary function ⊕ together with an identity element e such that:

a ⊕ e = e ⊕ a = a
Addition is a monoid for integers, as it is associative and has 0 as an identity value.
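For what it's worth, Haskell packages this up in Data.Monoid as the Sum newtype, where <> is addition and mempty is 0; a quick GHCi sketch:

*Main> import Data.Monoid
*Main Data.Monoid> getSum (Sum 1 <> Sum 2 <> Sum 3)
6
*Main Data.Monoid> getSum (Sum 5 <> mempty)
5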
When an operator and value form a monoid this can simplify how we think about folds, as:

foldr (⊕) e [x1, x2, x3] = x1 ⊕ (x2 ⊕ (x3 ⊕ e)) = x1 ⊕ x2 ⊕ x3
Monoids also give us the first duality theorem: for a monoid (⊕, e) and a finite list xs, foldr (⊕) e xs = foldl (⊕) e xs. This equivalence means we can decide which fold to use based purely on efficiency when working with monoids and finite lists.
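A quick GHCi check of the theorem with the addition monoid, plus a counterexample with non-associative subtraction (the list is arbitrary):

*Main> foldr (+) 0 [1..10] == foldl (+) 0 [1..10]
True
*Main> foldr (-) 0 [1..10] == foldl (-) 0 [1..10]
False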
There seems to be much more to monoids in Haskell, but the point I’m trying to make is that associativity forms the basis for some more advanced and quite useful concepts (functors being another example).
Reducing noise
For maths and programming I've always erred on the side of over-specifying parentheses, but as I get more into functional programming I've found that learning operator associativity really starts to reduce the noise. One example is function composition using the `(.)` operator. Because composition is an associative operation, we can compose a string of functions without the noise of parentheses: `f . g . h`.
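For instance, the two explicit groupings of an arbitrary pipeline of Prelude functions give the same result:

*Main> ((show . negate) . abs) 5 == (show . (negate . abs)) 5
True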
Currying
Another important example of noise reduction is the behaviour of the type-mapping operator `->`. Type mapping is right-associative, which means that `a -> b -> c` is actually equivalent to `a -> (b -> c)`. This is also known as currying. Let's take good old integer addition as an example again. Its type is:

(+) :: Num a => a -> a -> a
Normally we'd call this with two arguments such as `1+2` and get an integer `3` back. But `(+)`'s type signature can also be expressed as:

(+) :: Num a => a -> (a -> a)
This means `(+)` can take an integer and return a function, which takes an integer and returns the final result. So in Haskell we can do things like this:
*Main> :type (+)
(+) :: Num a => a -> a -> a
*Main> let addOne = (1+)
*Main> :type addOne
addOne :: Integer -> Integer
*Main> addOne 2
3
Here we've given `(+)` just one argument, and the result is an `addOne` function that will take a single integer and add one to it. The right-associativity of type mapping, combined with the left-associativity of function application, gives us standard calling semantics for the type declaration `a -> a -> a`, while also letting us partially apply functions as `a -> (a -> a)`. In fact, all Haskell functions are curried (they only take one argument, and return a function to handle the rest), but thanks to operator associativity we can call them in whichever way makes sense for each situation.
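To make that explicit, here's a small GHCi sketch (the name add3 is made up for illustration); because function application associates to the left, supplying the arguments one at a time is exactly the same as the usual call:

*Main> let add3 a b c = a + b + c   -- a made-up three-argument helper
*Main> add3 1 2 3
6
*Main> ((add3 1) 2) 3
6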