Day 3 of Nothin’ but .NET was a bit more laid back than previous days as we spent most of it in teams trying to apply some of the stuff we had learned. We had an early finish (10:45) in preparation for a longer day on Thursday.
I actually found this day immensely frustrating. After seeing so much cool stuff the previous two days and feeling like I was starting to understand the main concepts, it was completely demoralising to be given a fairly basic problem and utterly fail to make even the vaguest hint of progress with it. The only reason I didn’t feel completely incompetent was the overwhelming feeling of stupidity I had, and it didn’t feel quite right for someone so obviously stupid to think of a big word like "incompetence". :\ :)
This wasn’t helped by the fact that as soon as JP started demonstrating how to proceed to the next stage, he made the solution seem so obvious and effortless. At the start of the course JP went to great lengths to encourage us not to compare ourselves to or compete with other developers, and that instead we should just aim to better ourselves one small step at a time. But in this case comparing JP’s work to mine wasn’t like comparing the work of two developers. It was more like comparing the works of Leonardo da Vinci to those of a small, under-watered cabbage. It was fairly difficult not to notice the difference.
Despite this the under-watered cabbage did manage to pick up a few things from this day. First up I got to see an end-to-end, test-driven development of a Front Controller architecture for processing web requests using Commands. The Front Controller itself was more of a component – it consisted of several classes all grouped together to perform the front controller related tasks. It’s a bit like a "layer", although the architecture wasn’t really layered in a traditional sense. It was just a bunch of components working together in a pretty loosely coupled way. Not having a more traditional layered approach seemed to make the design much more flexible.
One of the central classes was the `FrontController` implementation (confusing because it is just one part of the entire Front Controller component). This class’s responsibility is to receive an `IncomingRequest` (an abstraction of an HTTP GET or POST) and run a `Command` that will do whatever the request is asking.
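Pieced together from that description, the shape of the component might look something like the following sketch. This is my own reconstruction, not JP's actual code; the member names (`Path`, `GetCommandThatCanProcess`, `Process`) are guesses.

```csharp
// A sketch of the Front Controller pieces described above. Names other
// than FrontController, IncomingRequest and Command are my inventions.
public interface IncomingRequest
{
    string Path { get; } // e.g. derived from an HTTP GET or POST
}

public interface Command
{
    void Run();
}

public interface CommandRegistry
{
    // Returns the command that can process the given request.
    Command GetCommandThatCanProcess(IncomingRequest request);
}

public class FrontController
{
    private readonly CommandRegistry registry;

    public FrontController(CommandRegistry registry)
    {
        this.registry = registry;
    }

    public void Process(IncomingRequest request)
    {
        // The single responsibility: tell the command that can process
        // the request to process it. Everything else is deferred to
        // collaborators.
        registry.GetCommandThatCanProcess(request).Run();
    }
}
```

Note how little the `FrontController` itself does; the interesting work all lives behind the two collaborator abstractions.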
I also picked up a couple of TDD tips. The first was to start with a test case that reflects the simplest, most fundamental description of the subject under test’s (SUT’s) behaviour, rather than asserting little facts about the SUT. For example, when writing a test for when our `FrontController` is handling a request, we shouldn’t start by asserting it gets a non-null `Command` from its `CommandFactory`, or assert that `CommandFactory.Create()` was called on a mock object. Instead our test was that it "should tell the command that can process the request to process it".
As I started picking up on later in the week, the former is really focussed on the mechanics of an implementation, while the latter is about the required behaviour. By being very descriptive about the behaviour, we end up driving out a lot of design in that one statement (in this case, the design decision is that we need a collaborator that can return a command that can process a request, and that our SUT will run that command). This technique also encourages the use of a very simple, targeted assertion in code, which lets us defer design decisions about the SUT’s dependencies until we start writing the context/SetUp method for the test.
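As a sketch of what that single, behaviour-focused assertion might look like, here is a hypothetical version of the test using NUnit and hand-rolled test doubles (JP used a mocking framework; the type and method names here are all mine):

```csharp
using NUnit.Framework;

// Compact versions of the types under discussion.
public interface IncomingRequest { }
public interface Command { void Run(); }
public interface CommandRegistry
{
    Command GetCommandThatCanProcess(IncomingRequest request);
}

public class FrontController
{
    private readonly CommandRegistry registry;
    public FrontController(CommandRegistry registry) { this.registry = registry; }
    public void Process(IncomingRequest request)
    {
        registry.GetCommandThatCanProcess(request).Run();
    }
}

// Hand-rolled test doubles.
class StubRequest : IncomingRequest { }

class SpyCommand : Command
{
    public bool WasRun;
    public void Run() { WasRun = true; }
}

class StubRegistry : CommandRegistry
{
    private readonly Command command;
    public StubRegistry(Command command) { this.command = command; }
    public Command GetCommandThatCanProcess(IncomingRequest request) { return command; }
}

[TestFixture]
public class when_the_front_controller_is_handling_a_request
{
    [Test]
    public void should_tell_the_command_that_can_process_the_request_to_process_it()
    {
        var command = new SpyCommand();
        var sut = new FrontController(new StubRegistry(command));

        sut.Process(new StubRequest());

        // One targeted assertion about behaviour, not mechanics.
        Assert.That(command.WasRun, Is.True);
    }
}
```

The single assertion is about what the SUT should do; the stub registry in the setup is the design decision that "falls out" of describing the behaviour.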
This really comes down to the identification and segregation of responsibilities (as per the Single Responsibility Principle (SRP)). This process is really helped by attempting to identify the most abstract responsibilities beneath the SUT, while ensuring the SUT is still responsible for adding some behaviour. Or, put another way, the SUT should have only one small part to play in achieving its overall reason for existence – this is its single responsibility. Everything else is deferred to the implementation of its collaborators.
Another TDD technique JP used was doing the "simplest thing that makes sense", instead of the "simplest thing that could possibly work". For the `FrontController` example, the simplest thing that could work when initially coding it would be injecting a single `Command` into the `FrontController` and asserting that the command was run. Then what? Move on to another SUT and leave a very deficient implementation of our `FrontController`? In this case, we know with absolute certainty that this class will need to process more than one type of `Command`, so the simplest thing that makes sense is for the `FrontController` to get a relevant command implementation from a `CommandRegistry` or similar type, rather than hard-wiring in a single command. This not only gives our `FrontController` fewer reasons to change (as per the Open Closed Principle (OCP)), but it also points out the next SUT we can drive out: the `CommandRegistry`. Even if we do start with the hard-wired version (inject a single `Command` and assert its `Run()` method was called), the test passes, and the next step is to look for refactoring opportunities. We notice the OCP violation, and refactor to introduce the `CommandFactory`, all under the protective cover of our passing test. I have a suspicion that it might be more reliable to think about the problem in abstract terms and have a SOLID design naturally fall out than to do the simplest thing that could work, and then run it through the gauntlet of SOLID principles, GRASP patterns etc. Still, I find it comforting that if I miss the abstraction up front I still have a chance to get there via refactoring.
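The difference between the two "simplest things" might be sketched like this (again, the names and shapes here are my own guesses, not code from the course):

```csharp
public interface IncomingRequest { }
public interface Command { void Run(); }

// "Simplest thing that could possibly work": hard-wire a single command.
// Every new request type forces a change to this class (an OCP violation).
public class HardWiredFrontController
{
    private readonly Command command;
    public HardWiredFrontController(Command command) { this.command = command; }
    public void Process(IncomingRequest request) { command.Run(); }
}

public interface CommandRegistry
{
    Command GetCommandThatCanProcess(IncomingRequest request);
}

// "Simplest thing that makes sense": look the relevant command up.
// New request types mean new Command implementations and registry
// entries; this class itself has no reason to change.
public class FrontController
{
    private readonly CommandRegistry registry;
    public FrontController(CommandRegistry registry) { this.registry = registry; }
    public void Process(IncomingRequest request)
    {
        registry.GetCommandThatCanProcess(request).Run();
    }
}
```

Both versions pass the original "should run the command" test, which is what makes the refactoring from one to the other safe.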
Here is some other stuff that came up during day 3:
- A long context/SetUp for a test is a test smell that indicates we’re probably doing too much. Push some of it down into other collaborators so we can defer decisions about it.
- Creating something is a responsibility. If a SUT needs to create something and act on it, then the creation should probably be pushed out into a Factory. The SUT is then only responsible for using the factory and its output (i.e. mediating between the two types).
- When designing and writing tests, focus on abstraction and intention rather than focussing on implementation and mechanics. Yes, I’ve written this already in this post. No, I probably haven’t stressed it enough.
- If stubbed values are required for a test to work then these are tested implicitly when the test runs. In our `FrontController` example where it uses a `CommandRegistry` to get a `Command` and call its `Run()` method, we don’t need to explicitly test that `CommandRegistry.Create()` was called. Instead we can stub out `Create()` to return a specific `Command` instance, and assert that its `Run()` method was called. We don’t need to explicitly assert that the factory was used if the test’s assertion already depends on it. This is a side-effect of identifying the simplest, fundamental assertion for a SUT rather than thinking about the implementation mechanics.
- By constraining ourselves to one ViewModel per View we can use convention over configuration to wire everything up.
- Went through the concept of a Service Layer, which is a type of Facade for operating on the domain.
- Went over Command Query Separation (CQS).
- Covered Separated Interface, where an implementation lives at a lower level than the interface itself. An example is a `Query` interface which is defined somewhere with visibility from all layers/components, but whose implementation lives within the domain, or in the ORM or persistence layer/component. This is why it’s OK to use NHibernate’s criteria interfaces from pretty much anywhere within the application, provided the implementation is abstracted appropriately into a lower layer. The Dependency Inversion Principle (DIP) is used as a guide for applying the Separated Interface pattern (i.e. higher levels should not depend upon lower levels, but instead on interfaces using the Separated Interface pattern).
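The gist of CQS is that a method either changes state or answers a question, never both. A minimal illustration, using an example of my own rather than anything from the course:

```csharp
public class Account
{
    private decimal balance;

    // Query: returns a value and has no side effects.
    public decimal Balance
    {
        get { return balance; }
    }

    // Command: changes state and returns nothing.
    public void Deposit(decimal amount)
    {
        balance += amount;
    }
}
```

Keeping the two kinds of member separate means queries can be called freely (in tests, logging, debugging) without fear of changing anything.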
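The Separated Interface bullet can be sketched as follows. The namespaces and the query itself are invented for illustration; in the real app the implementation would typically build an NHibernate criteria query rather than returning a stubbed list.

```csharp
using System.Collections.Generic;

namespace MyApp.Core
{
    // The Separated Interface: defined where every layer/component can
    // see it, so higher levels can depend on it freely.
    public interface Query<TResult>
    {
        TResult Execute();
    }
}

namespace MyApp.Persistence
{
    using MyApp.Core;

    // The implementation lives in a lower-level component. Higher levels
    // depend only on Query<TResult>, as per the DIP.
    public class ActiveCustomerNamesQuery : Query<IList<string>>
    {
        public IList<string> Execute()
        {
            // Stubbed here; a real version might execute NHibernate
            // criteria against a Customer table.
            return new List<string> { "Example Customer" };
        }
    }
}
```

The dependency arrow points the "inverted" way: `MyApp.Persistence` depends on `MyApp.Core`, never the reverse.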