Day 4 of Nothin’ but .NET started before 9am and finished after 2:45am (including eating breakfast and dinner in the conference room while still discussing stuff).
The day’s activities
We started out talking about IoC containers and their role in controlling object creation and lifecycle, autowiring dependencies, and providing dynamic interception for AOP (using Reflection.Emit, RealProxy from System.Runtime.Remoting.Proxies, DynamicProxy, etc.). Then it was up to us to implement a simple container. The class started off trying to convert the poor man’s dependency injection approach we had used to quickly hack everything together. Xerx made a great suggestion that we could simply use Func<object> delegates as factory methods, and just map requested types to those methods.
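A minimal sketch of the idea (the names are mine, not the code we wrote in class): each requested type maps to a Func<object> factory delegate, and resolving a type just invokes its delegate.

```csharp
using System;
using System.Collections.Generic;

// A stripped-down container: requested types map to Func<object> factory
// delegates, and resolution just invokes the registered delegate.
public class Container
{
    private readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();

    public void Register<T>(Func<object> factory)
    {
        factories[typeof(T)] = factory;
    }

    public T Resolve<T>()
    {
        return (T)factories[typeof(T)]();
    }
}

// The wiring then lives in the registrations themselves, e.g. (hypothetical types):
// container.Register<IFrontController>(
//     () => new FrontController(container.Resolve<ICommandRegistry>()));
```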
Once we had configured all the mappings using our container and removed all the no-arg constructors we had used for poor man’s DI, the next challenge was to drive out a fluent interface for application startup. Here I had my first and only success of the week, where I actually managed to test-drive a few classes without doing my normal trick of becoming hopelessly stuck. JP needed to make a few changes to it, but it seemed like I ended up fairly close to a reasonable design. I finally felt like I might be making some progress.
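I won’t try to reproduce what we actually built, but a hypothetical sketch of the flavour of a fluent startup API might look something like this (all names made up):

```csharp
using System;
using System.Collections.Generic;

// A fluent interface for application startup: each call returns the builder,
// so the configuration steps chain together and read like a sentence.
public class ApplicationStartup
{
    private readonly List<Action> startupTasks = new List<Action>();

    public static ApplicationStartup Begin()
    {
        return new ApplicationStartup();
    }

    public ApplicationStartup With(Action startupTask)
    {
        startupTasks.Add(startupTask);
        return this;
    }

    public void Run()
    {
        startupTasks.ForEach(task => task());
    }
}

// Usage reads top to bottom:
// ApplicationStartup.Begin()
//     .With(ConfigureContainer)
//     .With(RegisterRoutes)
//     .Run();
```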
We were going to end the day by chatting about and implementing the domain and the service layers, but we started to lose some attendees due to illness and exhaustion, so we decided to defer the domain stuff until the morning (or, technically, later that morning). We did end up having a quick chat about service layer styles as described in Martin Fowler’s PoEAA, and contrasted the Transaction Script approach (i.e. a big ol’ procedural method or script) with keeping a thin service layer over a domain. A couple of us worked ’til the end of class implementing some of the front controller stuff we had skipped from earlier in the course.
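To make the contrast concrete, here’s a hypothetical before-and-after (not code from the class): the same transfer operation written as a Transaction Script, then as a thin service over the domain.

```csharp
using System;

public class Account
{
    public decimal Balance { get; set; }

    // In the domain version, the business rule lives on the domain object.
    public void TransferTo(Account destination, decimal amount)
    {
        if (Balance < amount)
            throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
        destination.Balance += amount;
    }
}

// Transaction Script: one procedural method owns the whole operation.
public class TransferTransactionScript
{
    public void Transfer(Account from, Account to, decimal amount)
    {
        if (from.Balance < amount)
            throw new InvalidOperationException("Insufficient funds");
        from.Balance -= amount;
        to.Balance += amount;
        // ...plus persistence, logging, notifications, all inline.
    }
}

// Thin service layer: the service just coordinates and lets the domain do the work.
public class TransferService
{
    public void Transfer(Account from, Account to, decimal amount)
    {
        from.TransferTo(to, amount);
        // ...persistence etc. still happens here, but the rule lives in Account.
    }
}
```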
In contrast to previous days, after Day 4 I felt like I was finally on track to becoming a better developer. (Spoiler alert: it wasn’t to last :-\)
Writing tests
I continued to build on the testing concepts I had started uncovering on previous days, although I still struggled to apply them all. I covered a lot about trying the "simplest thing that makes sense" instead of the "simplest thing that works" in my summary of Day 3, and this continued to be an important theme for Day 4.
I finally started to get an appreciation of the impact each part of a test definition has on design. The scenario name, test case name and assertion captured the fundamental behaviour and purpose of the SUT’s existence. The "because" block showed why the SUT was exhibiting this behaviour (a call to a particular method). The context/setup was then used to drive the design of the SUT’s collaborators and dependencies and dole out their responsibilities, as required for the SUT to do its job. This then helped us get down to the next level of abstraction.
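To pin that anatomy down for myself, here’s a hand-rolled approximation of the shape of a spec and the design question each part answers. This is my NUnit-based sketch (the Specification base class and all the names are mine), not the framework we used in class:

```csharp
using NUnit.Framework;

// A minimal context/specification base: establish the context, trigger the
// behaviour, then let each test method hold a single assertion.
public abstract class Specification
{
    [SetUp]
    public void Run()
    {
        EstablishContext();
        Because();
    }

    protected abstract void EstablishContext(); // drives the design of collaborators
    protected abstract void Because();          // the call that triggers the behaviour
}

[TestFixture]
public class when_the_sut_is_asked_to_do_its_job : Specification // scenario name
{
    protected override void EstablishContext()
    {
        // context: set up the SUT and its dependencies; design decisions live here
    }

    protected override void Because()
    {
        // because: the single method call being specified
    }

    [Test]
    public void should_describe_the_expected_outcome() // test name states the design intent
    {
        // one logical assertion: the fundamental purpose of the SUT
    }
}
```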
JP seemed to use the SUT’s dependencies, as set up in the test definition, as axes by which the overall design could be varied. By decomposing the problem the SUT is trying to solve into sub-responsibilities, then pushing these responsibilities down into dependencies, the SUT stayed very clean, small and simple. Finding the right abstraction for these responsibilities (in particular, programming to the API you would like them to have) made it easier to design these dependencies once they became the SUT. Any pain, duplication or smells detected while writing tests became a clear sign that we needed to look for a different abstraction around the SUT’s dependencies. For example, instead of injecting a dependency with a required behaviour, we might need to inject a factory or other dependency that would figure out the behaviour needed and return the relevant dependency.
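As a hypothetical illustration of that last point (my names again): rather than being handed a command, the front controller gets a factory that figures out which command is relevant.

```csharp
// The SUT only runs commands; locating the right command is pushed down
// into a factory dependency.
public class Request { }

public interface ICommand
{
    void Run();
}

public interface ICommandFactory
{
    ICommand GetCommandFor(Request request);
}

public class FrontController
{
    private readonly ICommandFactory factory;

    public FrontController(ICommandFactory factory)
    {
        this.factory = factory;
    }

    public void Process(Request request)
    {
        factory.GetCommandFor(request).Run();
    }
}
```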
I quizzed JP to try and find out the process he used to make all these design decisions that he seemed to drive out so effortlessly while writing the tests. A lot of it seemed to come down to the context he has built up over years of experience, and (somewhat unfortunately for me) to an amazing talent for spotting and thinking in abstractions. Still, here is the best approximation of his approach that I could come up with:
- What responsibility does this SUT have? This becomes the scenario name.
- Decompose this into sub-responsibilities. If the SUT is responsible for running a command, then it cannot also be responsible for creating or locating that command. This becomes a responsibility of the next level down the abstraction chain.
- Identify collaborators/dependencies required so we can push these other responsibilities to them, rather than burdening the SUT with doing too much. If the other responsibility is creating a command, we might make a design decision to use a factory to do this.
- How should the SUT react under this scenario? The description of this becomes the test title. JP often used long, descriptive titles loaded with design implications. For example, the SUT "should call the run method on the command returned by the command factory" (there’s a sketch of this example after the list). The title becomes a broad overview of the design intention and the design decisions made, with the details fleshed out in later steps.
- How do I assert this? Write the test case body in one logical assertion, as simply as possible. It should reflect the fundamental purpose of the SUT’s existence.
- Why does this happen? Write the "because" block, which is basically the method call that triggers the behaviour.
- Work out the context/test setup. Set up the collaborators. During this time you’ll be making design decisions about the responsibilities and behaviours of the dependencies.
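Putting those steps together for the command factory example, the spec might come out something like this (a sketch reusing the hypothetical Specification base class and front controller types from the earlier sketches, with hand-rolled fakes standing in for a mocking framework):

```csharp
using NUnit.Framework;

class FakeCommand : ICommand
{
    public bool RunWasCalled;
    public void Run() { RunWasCalled = true; }
}

class FakeCommandFactory : ICommandFactory
{
    public ICommand CommandToReturn;
    public ICommand GetCommandFor(Request request) { return CommandToReturn; }
}

[TestFixture]
public class when_the_front_controller_is_asked_to_process_a_request : Specification
{
    FrontController sut;
    FakeCommand command;
    Request request;

    protected override void EstablishContext()
    {
        // context: design decisions about the collaborators live here
        request = new Request();
        command = new FakeCommand();
        sut = new FrontController(new FakeCommandFactory { CommandToReturn = command });
    }

    protected override void Because()
    {
        // because: the call that triggers the behaviour
        sut.Process(request);
    }

    [Test]
    public void should_call_the_run_method_on_the_command_returned_by_the_command_factory()
    {
        // one logical assertion: the SUT's fundamental purpose
        Assert.That(command.RunWasCalled, Is.True);
    }
}
```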
Today’s tidbits
Here’s a quick round-up of some other miscellaneous things I picked up on Day 4:
- I need to write more code. Lots of code. Tonnes of code! Anything to get some practice in and get more context for my design decisions. Try tackling the same problem with completely different techniques. Try functional style. Try not using dependencies. Try different patterns. Try writing tests differently. Try having no tests. Regardless of the result it will help grow your context on which to base future design decisions.
- Top down design is incredibly powerful for driving down from the required behaviour to the lowest levels of abstraction. Because test-driving each SUT in turn drives out the design of its dependencies, JP ended up with the problem broken down into incredibly simple abstractions. I’m not sure how you could come up with that using bottom-up design, as then your tests don’t really give you any feedback on what the design of the higher level code should be.
- Three essential abilities for OO design: problem decomposition, finding abstractions, and segregation of responsibilities. I have no idea how to learn the first two, but GRASP can probably help with the third.
- Using dependencies as axes to vary the design of the SUT.
- Keep to one logical assertion per test.
- Test context/SetUp can hold design decisions and indirect assertions (e.g. stubbing a return value which is used in the test assertion is a design decision).
- JP said not to think too far ahead, because “it will kill you”. Keep at the current level of abstraction, and break the problem down. Don’t think “what if?”, think “what now?”.
- Concentrate on writing tests around the "happy" path. This helps to focus on the SUT’s single responsibility, and defer exception handling etc. to different levels of abstraction.
- Focus on outside-in testing using dependency injection. By passing dependencies in, we can assert their state or the calls made to them by the SUT, making previously untestable internal state testable.
- Static gateways may need direct access to a service locator (the IoC container, or an abstraction over it); there’s a sketch after this list.
- The Orchestrator pattern: an object that takes care of the sequence of operations in a pipeline (also sketched below).
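Here’s a hypothetical sketch of the static gateway idea (all names invented): the gateway keeps its convenient static API, but delegates to an instance resolved through a service locator sitting over the container.

```csharp
using System;

// A static facade over the container so that static gateways can resolve
// their underlying implementation; wired up once at application startup.
public static class ServiceLocator
{
    private static Func<Type, object> resolver;

    public static void Initialize(Func<Type, object> resolveFunc)
    {
        resolver = resolveFunc;
    }

    public static T Resolve<T>()
    {
        return (T)resolver(typeof(T));
    }
}

public interface IClock
{
    DateTime Now { get; }
}

// The static gateway stays convenient to call, while the real behaviour
// remains swappable behind the locator.
public static class Clock
{
    public static DateTime Now
    {
        get { return ServiceLocator.Resolve<IClock>().Now; }
    }
}
```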
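And a minimal sketch of how I understood the orchestrator idea (again, hypothetical names): the orchestrator’s single responsibility is the sequence; each step owns one operation.

```csharp
using System.Collections.Generic;

public class PipelineContext { }

public interface IPipelineStep
{
    void Process(PipelineContext context);
}

// The orchestrator knows the order of operations in the pipeline,
// not how any individual operation works.
public class PipelineOrchestrator
{
    private readonly IEnumerable<IPipelineStep> steps;

    public PipelineOrchestrator(IEnumerable<IPipelineStep> steps)
    {
        this.steps = steps;
    }

    public void Run(PipelineContext context)
    {
        foreach (var step in steps)
            step.Process(context);
    }
}
```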