Responsibility - The Quantum of Design

It turns out I really can't bang this drum enough.

This weekend I tweeted three simple rules for determining if a class's responsibility has been properly assigned. Class responsibilities were on my mind for a significant portion of the day on Saturday. It wasn't anything conscious or intentional. Rather I think it was a sort of run-off from a lot of responsibility talk over the course of the previous week.

Lately I find that whenever someone asks my opinion about a particular design decision, the question of responsibility is the first place I go. It's far and away the most effective and powerful question to ask, and it reliably delivers the most bang for the buck.

At both the class and the method level, the most important factor in deciding where and how to implement a particular bit of functionality is whose responsibility it is. Likely, the whole picture of the task is made up of several bits of responsibility that belong in several places. If your application is architected in such a way that responsibilities are well isolated and assigned to layers and clusters of classes, then breaking the task down into those constituent responsibilities gives you a draft blueprint for how to get the job done.

If your application is not fortunate enough to have such a clear and well-articulated architectural scheme... well, I'd advise you to make it a goal. Start by learning about onion architecture. But in the meantime, try to keep the rules of thumb below in mind and evaluate your classes against them as you work. If they don't measure up, see whether they can be improved incrementally. Leave them cleaner than you found them.

Now, without further ado, the rules.

  1. If you can't sum up a class's responsibility in 2 sentences, shift your level of granularity. It almost certainly has more than one responsibility, and you need to think hard about why you've combined them.
  2. If you can't sum up a class's responsibility in a single sentence, then you have lots of room for improvement. Don't be satisfied until you have a clear, concise statement of the class's responsibility.
  3. If you can sum up a class's responsibility in a single sentence, go a step further and think hard about whether it has another unstated or assumed responsibility. Unit tests can really help here.
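
To make these rules concrete, here's a small, hypothetical C# sketch (the invoice types are invented purely for illustration). The test is whether each class's responsibility statement survives rule 2:

    // "Manages invoices" sounds like one sentence, but it quietly hides two
    // responsibilities: calculating totals and persisting them.
    public class InvoiceManager
    {
        public decimal CalculateTotal(Invoice invoice) { /* sum line items, apply tax */ return 0m; }
        public void Save(Invoice invoice) { /* write to storage */ }
    }

    // After splitting, each responsibility fits in a single clear sentence.

    // "Calculates the amount due for an invoice."
    public class InvoiceTotalCalculator
    {
        public decimal CalculateTotal(Invoice invoice) { /* sum line items, apply tax */ return 0m; }
    }

    // "Persists invoices to the backing store."
    public class InvoiceRepository
    {
        public void Save(Invoice invoice) { /* write to storage */ }
    }

    public class Invoice { /* line items, customer, etc. */ }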

The message here is not subtle. Every class needs to have a clear answer to the question "what is my responsibility?" This is, in my experience, the most important factor in establishing clean, clear, composable, testable, maintainable, comprehensible application design and architecture. It's not the end of the road, but it's a darn important step on the way.

Don't Stop at DRY

I've been thinking a lot about the DRY principle lately. You know the one. DRY: Don't Repeat Yourself. It's a principle that was first made popular in the book "The Pragmatic Programmer". Unlike much of the rest of the content of the book, DRY is something that appears to have penetrated fairly deeply into the collective consciousness of the greater programming community. In a certain respect, this is a huge success on the part of Andy Hunt and David Thomas. To have disseminated so completely such a fundamental rule of programming is surely something to be proud of.

Note what I just said. DRY is "a fundamental rule of programming." I suspect very few people with appreciable experience would reasonably and honestly dispute this claim. But a rule is a funny thing, and a fundamental one doubly so. Rules are a first pass at quality. They establish a minimum level of ensured quality by being a reasonable course of action in the vast majority of cases, at the expense of being a sub-optimal course of action in many cases, and even a plain bad option in some.

I am by no means saying that DRY is a bad thing. I'm not even saying not to do it. But I am saying that applying DRY is a beginner's first step toward good design. It's only the beginning of the road, not the end.

Eliminating repetition and redundancy is a very natural and obvious use of programming. It's especially hard to deny for the many people who slide into the profession by way of automating tedious and repetitious manual labor. It just makes sense. So when you tell a new programmer Don't Repeat Yourself, they embrace it whole-heartedly. Here is something that a) they believe in, b) they are good at, and c) will automatically achieve a better design. You'd better believe they are going to pick up that banner and wave it high and proud.

This is a good thing. If you can get your team to buy in to DRY, you know you'll never have to worry about updating the button ordering on that dialog box only to find that it hasn't changed in 3 scenarios because they're using a near identical copy in another namespace. You know that you'll never have to deal with the fallout of updating the logic that wraps up entered data for storage to the database and finding inconsistent records a week later because an alternate form had repeated the exact same logic.

What you might run into, however, is:
  1. A method for rendering currencies as user-friendly text that is also used to format values for XML serialization. The goal is to keep all text formatting centralized, regardless of whether it's meant for display or storage.
  2. Views, controllers, configuration, and services all sharing the same data objects, despite the dramatically different projections each operation needs. This is done in the interest of avoiding redundant structures.
  3. Views, controllers, and persistence layers all depending directly on a single tax calculation class. Here the goal is simply to centralize the business logic, but doing so actively works against establishing proper layering in the application.
These are all very sub-optimal, or even bad, design decisions. They are all examples that I have seen with my own eyes of decisions made in the name of DRY. But DRY itself is not the cause. The problem is that the people responsible for these decisions have unknowingly pitted DRY against other quality design rules and principles.

DRY focuses on behavior. That is to say algorithms. "Algorithms", along with "data", are the fundamental building blocks of applications. They are the substance our work is made of. But trying to build complex software while thinking only or primarily at this elementary level is like trying to map a forest one tree at a time.

At some point, the programmer must graduate from the rule of DRY to the nuance of responsibility. The Single Responsibility Principle (SRP) is the cardinal member of the SOLID family. Its premier position is not simply a coincidence of the acronym; it's also the next most important rule to layer onto your understanding of good design.

Responsibility is about more than function. It's also about context. It's about form and purpose. Identifying and isolating responsibilities allows you to take your common DRY'ed functionality, and cast it through various filters and projections to make it naturally and frictionlessly useful in the different places where it is needed. In a very strict sense, you've repeated some functionality, but you've also specialized it and eliminated an impedance mismatch.
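
As a concrete illustration, take the currency-rendering example from the list above. This is only a hypothetical sketch (the type names are mine, not from any particular codebase), but it shows how a single DRY'ed core can be projected through two context-specific responsibilities instead of one method straining to serve both display and storage:

    using System.Globalization;

    // The shared, DRY core: one place that knows how currency amounts are rounded.
    public static class CurrencyMath
    {
        public static decimal Round(decimal amount)
        {
            return decimal.Round(amount, 2, MidpointRounding.ToEven);
        }
    }

    // Responsibility: render currency for human readers (culture-aware).
    public class CurrencyDisplayFormatter
    {
        public string Format(decimal amount, CultureInfo culture)
        {
            return CurrencyMath.Round(amount).ToString("C", culture);   // e.g. "$1,234.50"
        }
    }

    // Responsibility: render currency for storage and serialization (stable, culture-invariant).
    public class CurrencySerializationFormatter
    {
        public string Format(decimal amount)
        {
            return CurrencyMath.Round(amount).ToString("F2", CultureInfo.InvariantCulture);   // e.g. "1234.50"
        }
    }

The formatting is "repeated" in the strictest sense, but each class now serves exactly one context, and a change to display rules can no longer quietly corrupt stored data.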

Properly establishing and delimiting responsibilities will allow you to:
  1. Take that currency rendering logic and recognize that readability and storage needs present dramatically different contexts with different needs.
  2. See that the problem solved by a data structure is rarely as simple as just bundling data together. It also includes exposing that data in a form convenient to usage, which may vary dramatically from site to site.
  3. Recognize that the calculation is worth keeping DRY, but so is the responsibility to trigger it. It can be both triggered and performed in the business layer, and the result projected, along with the related contextual business data, to wherever it needs to be.
By layering responsibility resolution on top of the utilitarianism of DRY, you take a step back from the trees and can begin to manage a slightly wider view of the problem and solution spaces. This is the key and crucial lesson that all beginning programmers must learn, once they've mastered the art of DRY. Once again, DRY is fundamental and indispensable. It's one step along the path to wisdom and, in general, should not be skipped or omitted. But it's not an end in itself. So don't stop when you get DRY. Start looking for the next step. And consider responsibility resolution as a strong candidate.

Mindfulness

You are faced with an unfamiliar coding problem. You know what you're "supposed to do"--use this pattern, follow that convention--but you don't know why. And the guidance seems ever so slightly... off. The context is just a bit different. The constraints are a bit tighter here. A bit looser there. The peg doesn't quite fit in the hole, but everyone is telling you it should.

Everyone is telling you "this is how you do it" or "this is how we've always done it". You think you can follow the rule, but you don't think it will be pretty. What do you do?

I have witnessed many people in this situation make their choice. Most commonly they do whatever it takes to follow the rule. If the peg doesn't fit, they pound it till it's the right shape. If the cloth won't cover, they'll stretch, fold, and tear creatively until they can make a seam. Somewhat less often, they just toss out the advice and do it the way they are comfortable with. Often that just means "hacking out" an ad hoc solution.

In either case, the person learns nothing. They will repeat their struggle the next time they face a situation that is ever so slightly different. They will sweat, stress, and hack again. Over and over, until someone shows them a new rule to follow.

As students we are taught to learn first by rote, then by rule. If we are lucky, we are eventually tested on whether we have learned to think. And most commonly we manage to slog it out and do well enough to pass without actually being required to think. It is so very easy to become comfortable with this model of learning. A particular someone, vested with the responsibility of fertilizing our minds and nurturing the growth of understanding, knows the answers. We don't understand, but someone can at least tell us when we are right or wrong, and often that's enough.

We settle into dependence. And by settling, we establish roadblocks in our path before we even set foot on the road. By settling, we refuse to take ownership of our own knowledge. We put ourselves at the mercy of those who know more and are willing to share of their time and understanding to help us overcome obstacles of our own creation.

This situation is not inevitable. You can avoid dooming yourself to toil under it with a very simple determination. But make no mistake, this simple determination will require determination. When you face the prospect of doing something that you do not understand, stop, take note, and ask "Why?" Refuse to continue on with any course of action until you know why you are doing it.

I'm not talking about questioning authority here (though that's good too). I am advocating understanding. If you think there's any chance you may have to face a similar situation again, then as a professional developer it behooves you to understand what you're doing this time, and why. This prepares you firstly to defend your actions, and secondly to tackle similar but different problems in the future.

By recognizing the reasoning behind a particular prescribed course of action, when you encounter a similar situation in the future you will be able to identify the subset of the problem that is subject to the prescription. Seeing this allows you to conceptually distinguish that part of the problem from the rest. From this vantage point you can decide whether the remainder is just noise that has to be accommodated, or something more significant. You will be able to start to consider whether there is another layer or dimension to the problem which might be better served by a different or additional pattern. You will be able to think, intelligently, and intentionally, about the problem both as a whole, and in part.

Lack of mindfulness is the scourge of intellectual pursuits (and a great many other things in life). Whether in programming, in health, in investment, etc., it binds you to the service of rules and systems. It puts you at the mercy of those who have understanding, and under the thumb of those who own the systems. Benevolent or otherwise, do you really want your own success and satisfaction to come at the whim of someone else, for no other reason than that you couldn't be bothered to put in the effort to understand? Do you want to spend your career tromping the same grounds over and over again, never coming to any familiarity or understanding of the landscape?

Always ask, "Why?" Then take the time to understand. Always be deliberate and intentional about applying any solution. Don't just follow the directions of the crowd, or some authority. You're not a patient taking orders from your doctor. This is your domain. Own your knowledge. Your future self will thank you.

Take "Single Responsibility" to the Next Level

The Single Responsibility Principle (SRP) is a crucial tool in your toolbox for managing complexity. Bob Martin has a great essay on the Single Responsibility Principle which expresses one of the biggest benefits that it can deliver to you. The SRP is predicated upon the unfortunate reality that changing code is the biggest source of bugs. By extension, the easiest way to avoid bugs is to ensure that whenever you have to make a change, it affects as little of the code as possible.

This is well known among experienced developers, but as Martin notes at the end of his essay, it's extremely difficult to get right. In my experience, very few devs take the principle as far as they should. Especially considering the fact that most of us were taught that Object-Oriented Design is all about bundling up code that works together, it can be easy to get lulled into confidence about what it takes to truly adhere to SRP.

Martin says, "conjoining responsibilities" comes naturally to us. Programmers are finely-honed pattern matching machines. It often comes far too easily. Most of us begin our careers weaving responsibilities and dependencies throughout our code, in the interest of solving a problem as quickly and efficiently as possible... with bonus points for being clever. Then we start learning the SOLID principles and start to adjust our coding habits. We take SRP to heart and break up responsibilities along reasonable lines, even if it means writing a bit more code, and synchronizing state that could theoretically be shared if all things were equal.

Then we stop.

We move our logging out of exception handlers and into service classes. But we leave severity conditions and entry formatting mixed in with the file stream management. We separate our business logic from our UI rendering and input handling. But we pile up unrelated domain operations in our controllers and presentation models. We shrink our class sizes from 1500 lines to 500 and claim victory.

This is not enough.

For one thing, responsibilities that often seem naturally elemental can usually be broken down yet further. The log example is a perfect one. File stream management is a responsibility complex enough to be given its own class. Text formatting is yet another. And severity handling is something that can be configured apart from the other two aspects. Oh, and don't forget that interfacing with an unmockable framework resource class such as System.IO.FileStream is a worthy role all its own. Each of these is a responsibility that can be wrapped up in a small class of just 100 or so lines, and exposed with a simple, fine-grained interface of just a few methods. Compose these together and you have a logging service that's flexible, highly testable in all aspects, and most importantly, can evolve and be maintained independently along several orthogonal dimensions, without one piece of functionality interfering with the others. And on top of all this, you get the automatic benefit that it's much more friendly to dependency injection and unit testing.
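
To sketch what that composition might look like (the interface names here are invented for illustration, not prescribed), each responsibility gets its own tiny abstraction and the logger merely coordinates them:

    using System;
    using System.IO;

    public enum LogLevel { Debug, Info, Warning, Error }

    // Responsibility: decide whether a message is severe enough to log.
    public interface ISeverityFilter { bool ShouldLog(LogLevel level); }

    // Responsibility: turn a message into a line of text.
    public interface ILogFormatter { string Format(LogLevel level, string message); }

    // Responsibility: wrap the unmockable framework stream behind a small, mockable interface.
    public interface ILogWriter { void WriteLine(string line); }

    public class FileLogWriter : ILogWriter
    {
        private readonly string _path;
        public FileLogWriter(string path) { _path = path; }
        public void WriteLine(string line)
        {
            File.AppendAllText(_path, line + Environment.NewLine);
        }
    }

    // Composition: the logging service itself only coordinates its collaborators.
    public class Logger
    {
        private readonly ISeverityFilter _filter;
        private readonly ILogFormatter _formatter;
        private readonly ILogWriter _writer;

        public Logger(ISeverityFilter filter, ILogFormatter formatter, ILogWriter writer)
        {
            _filter = filter;
            _formatter = formatter;
            _writer = writer;
        }

        public void Log(LogLevel level, string message)
        {
            if (_filter.ShouldLog(level))
                _writer.WriteLine(_formatter.Format(level, message));
        }
    }

Each piece can be tested in isolation, and each can be swapped (a console writer, a JSON formatter, a per-environment severity policy) without touching the others.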

The other important lesson to learn is that SRP doesn't just apply to classes but also to methods. It's almost never necessary for a method to be more than 30 or so lines long, at the outside. A method of such restricted size will inevitably have fewer arguments to worry about, for starters. And further, it almost inherently prevents spaghetti code. Purely by the act of breaking out small operations of a handful of steps apiece, and giving each a highly specific and expressive name, you can avoid the "flying V" of nested loops and conditionals. You can avoid long-lived context variables and status flags. You can avoid stretching tightly coupled cause-and-effect relationships across multiple screens' worth of code. And you'll likely find yet more places to naturally slice up a class into separate responsibilities.
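
Here's a small before-and-after sketch of that idea (the order types are hypothetical). The same work gets done, but each step now has a name and the nesting disappears:

    using System.Collections.Generic;
    using System.Linq;

    public class OrderLine { public int Quantity; public decimal UnitPrice; }
    public class Order { public List<OrderLine> Lines = new List<OrderLine>(); }

    public class OrderTotals
    {
        // Before: one method owning validation, filtering, and totaling,
        // with nested loops and a long-lived status flag.
        public decimal TotalOfValidOrders(IEnumerable<Order> orders)
        {
            decimal total = 0m;
            foreach (var order in orders)
            {
                bool valid = true;
                foreach (var line in order.Lines)
                {
                    if (line.Quantity <= 0) { valid = false; break; }
                }
                if (valid)
                {
                    foreach (var line in order.Lines)
                        total += line.Quantity * line.UnitPrice;
                }
            }
            return total;
        }

        // After: each step is a small, expressively named method.
        public decimal TotalOfValidOrdersRefactored(IEnumerable<Order> orders)
        {
            return orders.Where(IsValid).Sum(OrderTotal);
        }

        private static bool IsValid(Order order)
        {
            return order.Lines.All(line => line.Quantity > 0);
        }

        private static decimal OrderTotal(Order order)
        {
            return order.Lines.Sum(line => line.Quantity * line.UnitPrice);
        }
    }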

All of these help to control sprawling complexity. This sounds unintuitive, because if you follow the rough rule of 30 lines per method, 200 lines per class, you'll almost certainly end up writing far more classes, and probably more code in general. But you will always know exactly where to find code related to any particular thing that may go wrong. And you can always be certain that the portion of the application that is affected by a change to a particular unit of functionality will be highly constrained by virtue of the reduced number of interactions and dependencies that any one unit needs to worry about.

Consider what you can do to take the next step with SRP. Don't be satisfied with the first-order effects of a naive application of the principle. Rededicate yourself, try it out, shrink your units, and see the benefits in your code.

A Taxonomy of Test Doubles

Many words have been written in the TDD community about the myriad ways of mocking in service of unit tests. After all this effort, there remains a great deal of confusion and ambiguity in the understanding of many--maybe even most--developers who use mocks.

No less than the likes of the eminently wise Martin Fowler has tackled the subject. Fowler's article is indispensable, and it in large part built the foundation of my own understanding of the topic. But it is quite long, and was originally written several years ago, when mocks were almost exclusively hand-rolled, or created with the record/replay idiom that was popular in mocking frameworks before lambdas and expressions were added to C# and VB.NET with Visual Studio 2008. Add to that the fact that the article was written in the context of a long-standing argument between two different philosophies of mocking.

Unfortunately these arguments continue on even today, as can be seen in the strongly-worded post that Karl Seguin wrote last week. Looking back now, with several more years of community experience and wisdom in unit testing and mocking behind us, we can bring a bit more perspective to the discussion than what was available at that time. But we won't throw away Fowler's post completely. Within his post, there are firm foundations we can build on, in the definitions of the different types of mocks that Fowler identified.

There are four primary types of test doubles. We'll start with the simplest, and move through in order of ascending complexity.

Dummies

A dummy is probably the most common type of test double. It is a "dumb" object that has no real behavior. Methods and setters may be called without raising an exception, but they produce no side effects, and getters return default values. Dummies are typically used as placeholders to fill an argument or property of a specific type that won't actually be used by the test subject during the test in question. While a "real" object wouldn't actually be used, an instance of a concrete type may have strings attached, such as dependencies of its own, that would make the test setup difficult or noisy.

Dummies are most efficiently created using a mock framework. These frameworks will typically allow a mock to be created without actually configuring any of the members. Instead they will provide sensible defaults, should some innocuous behavior be necessary to satisfy the subject.
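
For example, with a Moq-style framework (my choice here purely for illustration; nothing about dummies requires it), a dummy is just an unconfigured mock passed where the type is demanded:

    using Moq;

    public interface IAuditLog { void Record(string entry); }

    public class TransferService
    {
        private readonly IAuditLog _log;
        public TransferService(IAuditLog log) { _log = log; }
        public void Transfer(string fromAccount, string toAccount, decimal amount)
        {
            // ...move the funds, then:
            _log.Record(fromAccount + " -> " + toAccount + ": " + amount);
        }
    }

    public class TransferServiceTests
    {
        public void Transfer_MovesFunds()   // test framework attribute omitted for brevity
        {
            // The audit log is irrelevant to this test, but the constructor demands one.
            // An unconfigured mock serves as a dummy: calls succeed, do nothing, return defaults.
            IAuditLog dummyLog = new Mock<IAuditLog>().Object;

            var service = new TransferService(dummyLog);
            service.Transfer("A", "B", 100m);
            // ...assert against account balances here
        }
    }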

Stubs

A stub is a test double which serves up "indirect input" to the test subject. An indirect input is information that is not provided to an object by the caller of its methods or properties, but rather obtained in response to a method call or property access made by the subject itself on one of its dependencies. An example of this would be the result of a factory creation method. Factories are a type of dependency that is quite commonly replaced by a stub. Their whole purpose is to serve up indirect input, toward the goal of avoiding having to provide the product directly when it may not be available at the time.

Stubs tend to be quite easy to set up, even with more primitive mocking frameworks. Typically, all that is needed is to specify ahead of time the value that should be returned in response to a particular call. The usual simplicity of stubs should not lull you into complacency, however. Stubs can get quite complex if they need to yield a variety of different objects across multiple calls. The setup for this kind of scenario can get messy quickly, and that should be taken as a sign to move on to a more capable type of double.
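
Continuing with a Moq-style API as an assumed example, stubbing a factory is a one-liner of setup:

    using Moq;

    public class Report { public string Title = "Quarterly Totals"; }

    public interface IReportFactory { Report Create(); }

    public class ReportPrinter
    {
        private readonly IReportFactory _factory;
        public ReportPrinter(IReportFactory factory) { _factory = factory; }
        public string Print() { return "Report: " + _factory.Create().Title; }
    }

    public class ReportPrinterTests
    {
        public void Print_UsesTheTitleFromTheFactory()
        {
            // The stub serves up indirect input: the product of the factory.
            var factory = new Mock<IReportFactory>();
            factory.Setup(f => f.Create()).Returns(new Report());

            var printer = new ReportPrinter(factory.Object);
            var output = printer.Print();
            // ...assert that output contains "Quarterly Totals"
        }
    }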

Mocks

A mock is a type of test double that is designed to accept and verify "indirect output" from the subject class. An indirect output is a piece of information that is provided by the test subject to one of its dependencies, rather than as a return value to the caller. For example, a class that calls Console.WriteLine with a message for printing to the screen is providing an indirect output to that method.
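
Console.WriteLine itself can't be intercepted, but put a thin interface in front of it and a mock can verify that indirect output. Here's a sketch, again assuming a Moq-style API:

    using Moq;

    public interface IConsole { void WriteLine(string message); }

    public class Greeter
    {
        private readonly IConsole _console;
        public Greeter(IConsole console) { _console = console; }
        public void Greet(string name) { _console.WriteLine("Hello, " + name); }
    }

    public class GreeterTests
    {
        public void Greet_WritesTheGreeting()
        {
            var console = new Mock<IConsole>();
            var greeter = new Greeter(console.Object);

            greeter.Greet("Ada");

            // The mock verifies the indirect output: what the subject told its dependency.
            console.Verify(c => c.WriteLine("Hello, Ada"), Times.Once());
        }
    }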

The term "mock" for a particular type of test double is in a certain way unfortunate. In the beginning there was no differentiation. All doubles were mocks. And all the frameworks that facilitated easy double creation were called mocking frameworks. The reason that "mock" has stuck as a particular type of double is because in those beginning times, most test doubles tended to take a form close to what we today still call a "mock". Mocks were used primarily to specify an expectation of a particular series of method calls and property access.

These "behavioral mocks", or "classical mocks" as Fowler calls them, gave birth to the record/replay idiom for mock configuration that reached its peak in the days of RhinoMocks. And due to the tendency of inexperienced developers to create complicated object interactions and temporal coupling, mocks continue to be a very popular and common form of test double. Mocking frameworks make it far easier to unit test classes that rely on these types of coupling. This has led many to call for the abolishment of mocks and mocking frameworks in a general sense, claiming that they provide a crutch that makes it too easy to leave bad code in place. I'm sympathetic to the sentiment, but I think that this is throwing the baby out with the bathwater.

Fakes

Fakes are the most complicated style of test double. A fake is an object that acts simultaneously as both a stub and a mock, providing bidirectional interaction with the test subject. Often fakes are used to provide a substantial portion of the dependency's interface, or even all of it. This can be quite useful in the case of a database dependency, for example, or a disk storage service. Properly testing an object that makes use of storage or persistence mechanisms often requires testing a full cycle of behavior which includes both pushing to and pulling from the storage. An in-memory fake implementation is often a very effective way of avoiding relying on such stateful storage in your tests.

Given their usefulness, fakes are also probably the most misused type of test double. I say this because many people create fakes using a mocking framework, thinking they are creating simple mocks. Or worse, they knowingly implement a full-fledged fake using closures around the test's local variables. Unfortunately, due to the verbosity of mocking APIs in static languages, this can very easily become longer and more complex code than an explicit test-specific implementation of the interface/base class would be. Working with very noisy, complicated, and fragile test setup is dangerous, because it's too easy to lose track of what is going on and end up with false-passes. When your test's "arrange" step starts to overshadow the "act" and the "assert" steps, it's time to consider writing a "hand-rolled fake". Hand-rolled fakes not only remove brittle and probably redundant setup from your tests, but they also often can be very effectively reused throughout all the tests for a given class, or even multiple classes.
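
As an illustration, a hand-rolled, in-memory fake of a hypothetical repository interface can be as simple as this:

    using System.Collections.Generic;

    public class Customer { public int Id; public string Name; }

    public interface ICustomerRepository
    {
        void Save(Customer customer);
        Customer Load(int id);
    }

    // The fake: a real, if simplified, implementation that is reusable across many tests.
    public class InMemoryCustomerRepository : ICustomerRepository
    {
        private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

        public void Save(Customer customer)
        {
            _store[customer.Id] = customer;
        }

        public Customer Load(int id)
        {
            Customer customer;
            return _store.TryGetValue(id, out customer) ? customer : null;
        }
    }

A test can exercise a full save-then-load cycle against this fake with no setup lambdas at all, and the same class serves every test that needs customer storage.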

It's not Just Academic

These are the primary categories into which nearly all, if not all, test doubles can be grouped. Fowler did a great job of identifying the categories, but I think this crucial information is buried within a lot of context-setting and illustration that doesn't necessarily offer great value today. Mocking is ubiquitous among the subset of developers that are doing unit testing. But too many people go about unit testing in an ad hoc fashion, rather than deliberately with a plan and a system for making sense of things. I believe that a simple explanation of the major types and usages of test doubles, as I've tried to provide here, can aid greatly in bringing consistency and clarity of intent to developers' unit tests. At the very least, I hope it can instill some confidence that, with a little discipline, pattern and reason can be found in the often messy and overwhelming world of unit testing.

YAGNI Abuse

Have you ever proposed a code change or a course of action in a project for the purpose of improving the stability and maintainability of the code base, only to have someone dispute the need on the basis of YAGNI? I was flummoxed the first time this happened to me. Since then I've learned that it's not at all rare, and in fact may even be common.

The YAGNI principle is a wonderful thing. Used properly, it can have a huge beneficial impact on your productivity, your schedule, and on the maintainability of your product. But like so many other important ideas in the history of software development, YAGNI has become a poorly understood and misused victim of its fame. Through constant abuse it has become difficult to communicate the sentiment that it was intended for without a thorough explanation. And I can't count the number of times I've heard YAGNI cited in a completely incorrect or even dangerous way.

The term "YAGNI" has fallen prey to a similar disease as "agile". People often invoke it as an excuse not to do something that they don't want to do. Unfortunately, this quite often includes things that they

should

do. Things that have long constituted good design, and good software engineering practice. A few examples of things that I have personally been horrified to hear disputed on YAGNI grounds include:

These are all activities that are strongly valued and diligently practiced in the most productive, successful, and small-A-agile software development organizations and communities. For myself and many of you out there, it's patently obvious that this is a subversion and abuse of the YAGNI strategy. Your first instinct in response to this kind of misuse is to say, with no little conviction, "that's not what YAGNI means."

This, of course, will not convince anyone who has actually attempted to use the YAGNI defense to avoid good engineering practices. But to refute them, you needn't rely solely on the forceful recitation of the principle as they do. Fortunately for us, YAGNI is not an elemental, indivisible tenet of software engineering. It did not spring fully-formed from Ron Jeffries' head. Rather it is based in experience, observation, and analysis.

It is clear from reading Jeffries' post on the XP page that the key to sensible use of the YAGNI principle is remembering that it tells you not to add something that you think you might or probably will need, or even certainly will need, in the future. YAGNI is a response to the urge to add complexity that is not bringing you closer to the immediate goal. Particularly common instances of true YAGNI center on features that haven't been identified as either crucial or wanted, such as configurability, alternative logging mechanisms, or remote notifications.

Looking at my original list, it is clear that none of these things truly add complexity in this way. A very naive metric of complexity such as "number of code entities" may seem to indicate the opposite. But these are actually all established and reliable methods for controlling complexity. What these techniques all have in common is that they restrict the ways in which parts of your program are allowed to interact with other parts of your program. The interaction graph is the most dangerous place for complexity to manifest in a program, because it compounds the difficulty of changing any one part of the application without affecting the rest of it like a row of dominoes. The practices I identified above, which are so often refuted as "adding complexity", are some of the many ways to guide your application toward a sparse, well-layered interaction graph, and away from a densely tangled one.

There is a multitude of practices, patterns, and design principles that help keep your modules small, their scopes limited, and their boundaries well-defined. YAGNI is one of them, but not the only one. Claiming YAGNI to avoid this kind of work is "not even wrong". Not only are you gonna need it, but you do need it, right from day one. Working without these tools is seeding the ground of your project with the thorns and weeds of complexity. They provide you with a way to keep your code garden weed-free. In this way they are kin to YAGNI, not its enemy. Claiming otherwise reveals either a disrespect for, or a lack of understanding of, the benefits of good design and engineering practices in a general sense. So next time someone sets up this contradiction in front of you, don't let them get away with it. Show your knowledge, and stand up for quality and craft.

Retrospective on a Week of Test-First Development

Any programmer who is patient enough to listen has heard me evangelizing the virtues of Test-Driven Design. That is, designing your application, your classes, your interface, for testability. Designing for testability unsurprisingly yields code which can very easily have tests hung onto it. But going beyond that, it drives your code to a better overall design. Put simply, this is because testing places the very same demands on your code as does incremental change.

You likely already have an opinion on whether that is correct or not. In which case, I'm either preaching to the choir, or to a brick wall. I'll let you decide which echo chamber you'd rather be in, but if you don't mind hanging out in the pro-testability room for a while, then read on.

Last week I began a new job. I joined a software development lab that follows an agile process, and places an emphasis on testability and continuous improvement. The lead architect on our development team has encouraged everyone to develop ideally in a test-first manner, but I'm not sure how many have taken him up on that challenge. I've always wondered how well it actually works in practice, and honestly, I've always been a bit skeptical of the benefits. So I decided this big change of environment was the perfect opportunity to give it a shot.

After a week of test-first development, here are the most significant observations:
  1. Progress feels slower.
  2. My classes have turned out smaller, and there are more of them.
  3. My interfaces and public class surfaces are much simpler and more straightforward.
  4. My tests have turned out shorter and simpler, and there are more of them.
  5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.
  6. I've never been so confident before that everything works as it is supposed to.

Let's break these out and look at them in detail.

1. Progress feels slower.

This is the thing I worried most about. Writing tests has always been an exercise in patience, in the past. Writing a test after writing the subject means sitting there and trying to think about all the ways that what you just wrote could break, and then writing tests for all of them. Each test includes varying amounts of setup and dependency mocking. And mocking can be tough, even when your classes are designed with isolation in mind.

The reality this week is that yes, from day to day, hour to hour, I am writing less application code. But I am re-writing code less. I am fixing code less. I am redesigning code less. While I'm writing less code, it feels like each line that I do write is more impactful and more resilient. This leads very well into...

2. & 3. My classes have turned out smaller, and there are more of them.
My interfaces and public class surfaces are much simpler and more straightforward.

The next-biggest worry I had was that in service of testability, my classes would become anemic or insipid. I thought there was a chance that my classes would end up so puny and of so little presence and substance that it would actually become an impediment to understandability and evolution.

This seems reasonable, right? Spread your functionality too thin and it might just evaporate like a puddle in dry heat. Sprinkle your functionality across too many classes and it will become impossible to find the functionality you want.

In fact the classes didn't lose their presence. Rather I would say that their identities came into sharp and unmistakable focus. The clarity and simplicity of their public members and interfaces made it virtually impossible to misuse them, or to mistake whether their innards do what they claim to. This enhances the value and impact of the code that consumes them. Furthermore it makes test coverage remarkably achievable, which is something I always struggled with when working test-after. On that note...

4. My tests have turned out simpler, and there are more of them.

The simple surface areas and limited responsibilities of each class significantly impacted the nature of the tests that I am writing, compared to my test-after work. Whereas I used to spend many-fold more time "arranging" than "acting" and "asserting", the proportion of effort this step takes has dropped dramatically. Setting up and injecting mocks is still a non-trivial part of the job. But now this tends to require a lot less fiddling with arguments and callbacks. Of course an extra benefit of this is that the tests are more readable, which means their intent is more readily apparent. And that is a crucial aspect of effective testing.

5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.

There's not too much to say here. It's pretty straightforward. The total amount of time I spent in the debugger and doing manual testing was greatly reduced. Most of my debugging was of the arrangement portions of tests. And most of that ended up being due to my own confusion about bits of the mocking API.

6. I've never been so confident before that everything works as it is supposed to.

This cannot be overstated. I've always been fairly confident in my ability to solve problems. But I've always had terrible anxiety when it came to backing up correctness in the face of bugs. I tend to be a big-picture thinker when it comes to development. I outline general structure, but before ironing out all the details of a given portion of the code, I'll move on to the interesting work of outlining other general structure.

Test-first development doesn't let me get away with putting off the details until there's nothing "fun" left. If I'm allowed to do that then by the time I come back to them I've usually forgotten what the details need to be. This has historically been a pretty big source of bugs for me. Far from the only source, but a significant one. Test-driven design keeps my whims in check, by ensuring that the details are right before moving on.

An Unexpected Development

The upshot of all this is that despite the fact that some of the things I feared ended up being partially true, the net impact was actually the opposite of what I was afraid it would be. In my first week of test-first development, my code has made a shift toward simpler, more modular, more replaceable, and more provably correct code. And I see no reason why these results shouldn't be repeatable, with some diligence and a bit of forethought in applying the philosophy to what might seem an incompatible problem space.

The most significant observation I made is that working like this feels different from the other work processes I've followed. It feels more deliberate, more pragmatic. It feels more like craft and less like hacking. It feels more like engineering. Software development will always have a strong art component. Most applied sciences do, whether people like to admit it or not. But this is the first time I've really felt like what I was doing went beyond just art plus experience plus discipline. This week, I feel like I moved closer toward that golden ideal of Software Engineering.