Last post we looked at using a less general form of the Free monad to purely represent side-effects in F#. Haskell's support for higher-order polymorphism makes this approach much easier to use. Here is the complete example from that post, rewritten in Haskell:
Previously we looked at using IO without side-effects in C# by deferring the execution of side-effects. Rather than immediately performing IO, we wrapped up side-effecting operations in an `IO` type and used combinators like `SelectMany` to work within that type, so we could use `IO` values without having to give up the benefits of pure functions by executing the side-effect.
This is a useful technique, but it has the drawback that the `IO` instances assembled with these combinators are opaque – there is no way for us to inspect them and work out what they represent. We know an `IO<String>` is some IO operation that will result in a string, but is it reading a line from the console, fetching a web page, or something else entirely? We can't tell without running it.
In this post we’ll look at another way of representing side-effecting (and other) operations that addresses this drawback.
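As a rough illustration of what an inspectable representation can look like (a Haskell sketch of the general idea, not the post's actual code – the names here are mine), we can describe each operation as plain data, then write one function that runs a program and another that merely examines it:

```haskell
-- A program is a value we can pattern-match on and inspect,
-- rather than an opaque action we can only run.
data Program a
  = Done a
  | WriteLine String (Program a)
  | ReadLine (String -> Program a)

-- One interpreter actually performs the IO...
run :: Program a -> IO a
run (Done a)        = pure a
run (WriteLine s k) = putStrLn s >> run k
run (ReadLine k)    = getLine >>= run . k

-- ...while another just inspects the structure, no IO needed.
describe :: Program a -> [String]
describe (Done _)        = ["done"]
describe (WriteLine s k) = ("write: " ++ s) : describe k
describe (ReadLine k)    = "read" : describe (k "<input>")
```

The same `Program` value can be fed to either interpreter, which is exactly what the opaque, combinator-built `IO` values couldn't give us.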
"I hope that one day, the business needs a string calculator. Then I can say 'This is the moment I trained for my whole life!'" – Michael Stum (@mstum), tweet
An effective string calculator is obviously indispensable to any software project. I have attempted this before, but one can never be too prepared, and so I thought I’d revisit it. But this time I thought I’d try it using Parsec, a parser combinator library for Haskell.
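To give a flavour of the approach (a minimal sketch, not the post's solution – the kata's usual first step is summing comma-separated numbers, and the names here are mine), Parsec lets us build the parser from small pieces:

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- One or more digits, read as an Int.
number :: Parser Int
number = read <$> many1 digit

-- Numbers separated by commas, e.g. "1,2,3".
numbers :: Parser [Int]
numbers = number `sepBy` char ','

-- Parse the input and sum the result, or report a parse error.
addString :: String -> Either ParseError Int
addString input = sum <$> parse numbers "" input
```

`addString "1,2,3"` gives `Right 6`, while malformed input surfaces as a `Left` with Parsec's error message.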
`filter` and its monadic cousin `filterM` look closely related, but behave quite differently.
This is my attempt to understand the relevance of the differences between these two functions. Please leave a comment to let me know about anything I’ve misunderstood. :)
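To make the difference concrete (a small sketch of my own, not from the post): `filter` keeps elements according to a plain `Bool` predicate, while `filterM` lets the predicate live in a monad. With the list monad, answering "both `True` and `False`" for every element makes `filterM` enumerate every sublist – the powerset:

```haskell
import Control.Monad (filterM)

-- Plain filter: a Bool predicate selects one result.
evens :: [Int]
evens = filter even [1 .. 6]

-- filterM with the list monad: each element is both kept and
-- dropped, so we get every possible sublist.
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])
```

`powerset [1, 2]` gives `[[1,2],[1],[2],[]]` – quite a leap from anything `filter` alone can do.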
I have avoided seriously trying F# for years now, mainly because:
- F# was described as a "functional programming language", and I didn’t know FP. I was keen to learn FP, but prioritised learning more about OO design and patterns that seemed more immediately applicable to my everyday work.
- The syntax looked really confusing.
- Whenever I heard F# mentioned it was in the same breath as "financial data processing" or some other niche area that seemed to have little to do with the types of applications I wrote.
I carried these hastily-acquired preconceptions around for years, until this year I needed to do a small application for work, and decided to try it in F#. To my surprise I found that none of these preconceptions were valid! What’s more, I actually quite enjoyed it. F# seemed to let me do everything I would normally do in C#, only with less code, and with more powerful features waiting in the wings should I want to dabble with them.
So in this post I wanted to go through why these assumptions were false, just in case they are holding you back too. I think F# is well worth trying out for every developer that does anything with .NET, but rather than trying to sell you on why you should try F#, I’m going to focus on the reasons you may think that you shouldn’t, and trust your natural developer curiosity to do the rest. ;)
Last time I donned my mad Haskeller lab coat we ended up using arrows to pipe the output of two functions into a tuple. This time I’m going to look at piping a single input through a list of functions to get a list of outputs.
The motivation for this experiment was a small section of a code snippet I found in Chris Wilson’s From Ruby to Haskell, Part 2: Similarity, Refactoring, and Patterns post:
As far as I can tell there’s nothing at all wrong with this. It is creating a list of values by passing `e` to several functions. It did get me thinking though – do we have to explicitly pass in that `e` argument to every function? To the laboratory!
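As a taste of where this can go (a sketch under my own names, not the snippet from Chris Wilson's post), the shared argument can be threaded implicitly, either with the list applicative or with `sequence` in the function monad:

```haskell
-- Explicitly applying each function to the shared input:
explicit :: Int -> [Int]
explicit e = [(+ 1) e, (* 2) e, negate e]

-- The list applicative applies each function to the one input:
viaApplicative :: Int -> [Int]
viaApplicative e = [(+ 1), (* 2), negate] <*> pure e

-- sequence over the ((->) r) monad does the same, point-free:
viaSequence :: Int -> [Int]
viaSequence = sequence [(+ 1), (* 2), negate]
```

All three give `[4, 6, -3]` for an input of `3`; the last two never mention the argument at all.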
How can we do inherently side-effecty things like IO when using functional programming, where each call has to be side-effect free?
In this post we’ll look at side-effects and how we could eliminate them from our code (we’ll use C#, but we can apply the idea to many languages). The aim is not to get something enormously practical that we’ll use every day, but instead to explore some ideas and hopefully work out how it is possible to get any useful work done using functional programming where side-effects are forbidden.
One thing I often hear about functional programming is that its requirement for immutability makes programs easier to reason about. To me this seems intuitively true – it’s got to be easier to work out what a program does without having to keep track of changing state while evaluating programs, right?
I wanted to challenge my assumptions about this. Could I convince myself that one is unambiguously easier to reason about? And if so, what is it about the other that makes it more difficult to reason about?
To do this I tried tracing through some examples of mutable and immutable data structures. I tried to use similar, "object" styles for both, so that the characteristic difference between them was the mutability of their internal data (rather than getting thrown off by differences between functional and OO styles).
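To show the kind of contrast I mean (a tiny hypothetical example of my own, not one of the structures I traced), here is the same "counter" in both styles – Haskell's default immutability versus in-place mutation through an `IORef`:

```haskell
import Data.IORef

-- Immutable: an update returns a new value; old values stay valid,
-- so any snapshot we hold can be reasoned about in isolation.
newtype Counter = Counter { count :: Int }

increment :: Counter -> Counter
increment (Counter n) = Counter (n + 1)

-- Mutable: the update happens in place, so reasoning about a read
-- means tracking every write that happened before it.
incrementRef :: IORef Int -> IO ()
incrementRef ref = modifyIORef ref (+ 1)

main :: IO ()
main = do
  let c0 = Counter 0
      c1 = increment c0
  print (count c0, count c1)   -- c0 is untouched: prints (0,1)

  ref <- newIORef (0 :: Int)
  incrementRef ref
  readIORef ref >>= print      -- the one cell now holds 1
```

In the immutable version `c0` means the same thing at every point in the trace; in the mutable version the meaning of `ref` depends on when you look.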
I’d love to get your thoughts for and against these ideas. I am especially likely to be the victim of confirmation bias on this one, so I’m counting on you to keep me honest. Leave comments or send me email please! :)
I’ve just started playing around with F#, and I wanted to test some of my code. Here’s an introduction to the absolute basics of getting up and running.
I love finding neat little bits of Haskell that do things in ways I haven’t really thought of before. This usually happens when I come across a simple yet slightly clumsy way of doing something, and embark on some mad experiments to find alternative approaches (usually ending in a trip to #haskell.au on Freenode). These alternatives may not result in anything usable, but they often prove to be fun learning experiences for me.
A recent example of this was the following adventure in passing the same input to two functions, and getting the output as a tuple.
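The clumsy starting point and the arrow-flavoured alternative look something like this (a condensed sketch of the idea rather than the post's exact code): `(&&&)` from `Control.Arrow`, the "fanout" combinator, pairs up the results of two functions applied to the same input.

```haskell
import Control.Arrow ((&&&))

-- Explicit version: apply both functions and pair the results.
both :: Int -> (Int, Int)
both x = ((+ 1) x, (* 2) x)

-- The (&&&) fanout combinator does the same, point-free:
both' :: Int -> (Int, Int)
both' = (+ 1) &&& (* 2)
```

Both give `(4, 6)` for an input of `3`, but the second never has to name the input it is duplicating.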