Don't Stop at DRY

I've been thinking a lot about the DRY principle lately. You know the one. DRY: Don't Repeat Yourself. It's a principle that was first made popular in the book "The Pragmatic Programmer". Unlike much of the rest of the content of the book, DRY is something that appears to have penetrated fairly deeply into the collective consciousness of the greater programming community. In a certain respect, this is a huge success on the part of Andy Hunt and David Thomas. To have disseminated so completely such a fundamental rule of programming is surely something to be proud of.

Note what I just said. DRY is "a fundamental rule of programming." I suspect very few people with appreciable experience would reasonably and honestly dispute this claim. But a rule is a funny thing, and a fundamental one doubly so. Rules are a first-pass at quality. They establish a minimum level of ensured quality by being a reasonable course of action in the vast majority of cases, at the expense of being a sub-optimal course of action in many cases, and even a plain bad option in some.

I am by no means saying that DRY is a bad thing. I'm not even saying not to do it. But I am saying that applying DRY is a beginner's first step toward good design. It's only the beginning of the road, not the end.

Eliminating repetition and redundancy is a very natural and obvious use for programming. It's especially hard to deny for so many people who slide into the profession by way of automating tedious and repetitious manual labor. It just makes sense. So when you tell a new programmer Don't Repeat Yourself, they embrace it whole-heartedly. Here is something that a) they believe in, b) they are good at, and c) will automatically achieve a better design. You'd better believe they are going to pick up that banner and wave it high and proud.

This is a good thing. If you can get your team to buy in to DRY, you know you'll never have to worry about updating the button ordering on that dialog box only to find that it hasn't changed in 3 scenarios because they're using a near identical copy in another namespace. You know that you'll never have to deal with the fallout of updating the logic that wraps up entered data for storage to the database and finding inconsistent records a week later because an alternate form had repeated the exact same logic.

What you might run into, however, is:
  1. A method for rendering currencies as user-friendly text which is also used to format values for XML serialization. The goal being to keep all text serialization centralized regardless of whether it's meant for display or storage.
  2. Views, controllers, configuration, and services all using the same data objects, regardless of the dramatically different projections necessary for every operation. This in the interest of avoiding redundant structures.
  3. Views, controllers, and persistence layers all depending directly on a single tax calculation class. Here the goal is simply to centralize the business logic, but it actively works against establishing a proper layering in the application.
These are all very sub-optimal, or even bad, design decisions. They are all examples that I have seen with my own eyes of decisions made in the name of DRY. But DRY itself is not the cause. The problem is that the people responsible for these decisions have unknowingly pitted DRY against other quality design rules and principles.

DRY focuses on behavior. That is to say algorithms. "Algorithms", along with "data", are the fundamental building blocks of applications. They are the substance our work is made of. But trying to build complex software while thinking only or primarily at this elementary level is like trying to map a forest one tree at a time.

At some point, the programmer must graduate from the rule of DRY to the nuance of responsibility. The Single Responsibility Principle (SRP) is the cardinal member of the SOLID family. Its premier position is not simply a coincidence of the acronym. It's also the next most important rule to layer onto your understanding of good design.

Responsibility is about more than function. It's also about context. It's about form and purpose. Identifying and isolating responsibilities allows you to take your common DRY'ed functionality, and cast it through various filters and projections to make it naturally and frictionlessly useful in the different places where it is needed. In a very strict sense, you've repeated some functionality, but you've also specialized it and eliminated an impedance mismatch.
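
To make that concrete, here's a minimal sketch (the names are mine, purely for illustration). Two formatters that superficially "repeat" currency rendering, each specialized to its own context:

    using System.Globalization;

    // User-facing rendering: current culture, currency symbol, grouping.
    public class DisplayCurrencyFormatter
    {
        public string Format(decimal amount)
        {
            return amount.ToString("C", CultureInfo.CurrentCulture);
        }
    }

    // Serialization rendering: culture-invariant and round-trippable.
    public class StorageCurrencyFormatter
    {
        public string Format(decimal amount)
        {
            return amount.ToString(CultureInfo.InvariantCulture);
        }
    }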

Properly establishing and delimiting responsibilities will allow you to:
  1. Take that currency rendering logic and recognize that readability and storage needs present dramatically different contexts with different needs.
  2. See that the problem solved by a data structure is rarely as simple as just bundling data together. It also includes exposing that data in a form convenient to usage, which may vary dramatically from site to site.
  3. Recognize that calculation is worth keeping DRY, but so is the responsibility to trigger such calculation. It can be both triggered and performed in the business layer, and the result projected along with contextual related business data to wherever it needs to be.
By layering responsibility resolution on top of the utilitarianism of DRY, you take a step back from the trees and can begin to manage a slightly wider view of the problem and solution spaces. This is the crucial lesson that all beginning programmers must learn, once they've mastered the art of DRY. Once again, DRY is fundamental and indispensable. It's one step along the path to wisdom and, in general, should not be skipped. But it's not an end in itself. So don't stop when you get DRY. Start looking for the next step. And consider responsibility resolution as a strong candidate.

Mindfulness

You are faced with an unfamiliar coding problem. You know what you're "supposed to do"--use this pattern, follow that convention--but you don't know why. And the guidance seems ever so slightly... off. The context is just a bit different. The constraints are a bit tighter here. A bit looser there. The peg doesn't quite fit in the hole, but everyone is telling you it should.

Everyone is telling you "this is how you do it" or "this is how we've always done it". You think you can follow the rule, but you don't think it will be pretty. What do you do?

I have witnessed many people in this situation make their choice. Most commonly they do whatever it takes to follow the rule. If the peg doesn't fit, they pound it till it's the right shape. If the cloth won't cover, they'll stretch, fold, and tear creatively until they can make a seam. Somewhat less often, they just toss out the advice and do it the way they are comfortable with. Often that just means "hacking out" an ad hoc solution.

In either case, the person learns nothing. They will repeat their struggle the next time they face a situation that is ever so slightly different. They will sweat, stress, and hack again. Over and over, until someone shows them a new rule to follow.

As students we are taught to learn first by rote, then by rule. If we are lucky, we are eventually tested on whether we have learned to think. And most commonly we manage to slog it out and do well enough to pass without actually being required to think. It is so very easy to become comfortable with this model of learning. A particular someone, vested with the responsibility of fertilizing our minds and nurturing the growth of understanding, knows the answers. We don't understand, but someone can at least tell us when we are right or wrong, and often that's enough.

We settle into dependence. And by settling, we establish roadblocks in our path before we even set foot on the road. By settling, we refuse to take ownership of our own knowledge. We put ourselves at the mercy of those who know more and are willing to share of their time and understanding to help us overcome obstacles of our own creation.

This situation is not inevitable. You can avoid dooming yourself to toil under it with a very simple determination. But make no mistake, this simple determination will require determination. When you face the prospect of doing something that you do not understand, stop, take note, and ask "Why?" Refuse to continue on with any course of action until you know why you are doing it.

I'm not talking about questioning authority here (though that's good too). I am advocating understanding. If you think there's any chance you may have to face a similar situation again, then as a professional developer it behooves you to understand what you're doing this time, and why. This prepares you firstly to defend your actions, and secondly to tackle similar but different problems in the future.

By recognizing the reasoning behind a particular prescribed course of action, when you encounter a similar situation in the future you will be able to identify the subset of the problem that is subject to the prescription. Seeing this allows you to conceptually distinguish that part of the problem from the rest. From this vantage point you can decide whether the remainder is just noise that has to be accommodated, or something more significant. You will be able to start to consider whether there is another layer or dimension to the problem which might be better served by a different or additional pattern. You will be able to think, intelligently, and intentionally, about the problem both as a whole, and in part.

Lack of mindfulness is the scourge of intellectual pursuits (and a great many other things in life). Whether in programming, in health, in investment, etc., it binds you to the service of rules and systems. It puts you at the mercy of those who have understanding, and under the thumb of those who own the systems. Benevolent or otherwise, do you really want your own success and satisfaction to come at the whim of someone else, for no other reason than that you couldn't be bothered to put in the effort to understand? Do you want to spend your career tromping the same grounds over and over again, never coming to any familiarity or understanding of the landscape?

Always ask, "Why?" Then take the time to understand. Always be deliberate and intentional about applying any solution. Don't just follow the directions of the crowd, or some authority. You're not a patient taking orders from your doctor. This is your domain. Own your knowledge. Your future self will thank you.

Take "Single Responsibility" to the Next Level

The Single Responsibility Principle (SRP) is a crucial tool in your toolbox for managing complexity. Bob Martin has a great essay on the Single Responsibility principle which expresses one of the biggest benefits that it can deliver to you. The SRP is predicated upon the unfortunate reality that changing code is the biggest source of bugs. By extension, the easiest way to avoid bugs is to ensure that whenever you have to make a change, it affects as little of the code as possible.

This is well known among experienced developers, but as Martin notes at the end of his essay, it's extremely difficult to get right. In my experience, very few devs take the principle as far as they should. Especially considering the fact that most of us were taught that Object-Oriented Design is all about bundling up code that works together, it can be easy to get lulled into confidence about what it takes to truly adhere to SRP.

Martin says, "conjoining responsibilities" comes naturally to us. Programmers are finely-honed pattern matching machines. It often comes far too easily. Most of us begin our careers weaving responsibilities and dependencies throughout our code, in the interest of solving a problem as quickly and efficiently as possible... with bonus points for being clever. Then we start learning the SOLID principles and start to adjust our coding habits. We take SRP to heart and break up responsibilities along reasonable lines, even if it means writing a bit more code, and synchronizing state that could theoretically be shared if all things were equal.

Then we stop.

We move our logging out of exception handlers and into service classes. But we leave severity conditions and entry formatting mixed in with the filestream management. We separate our business logic from our UI rendering and input handling. But we pile up unrelated domain operations in our controllers and presentation models. We shrink our class sizes from 1500 lines to 500 and claim victory.

This is not enough.

For one thing, responsibilities that often seem naturally elemental can usually be broken down yet further. The log example is a perfect one. File stream management is a responsibility complex enough to be given its own class. Text formatting is yet another. And severity handling is something that can be configured apart from the other two aspects. Oh, and don't forget that interfacing with an unmockable framework resource class such as System.IO.FileStream is a worthy role all its own. Each of these is a responsibility that can be wrapped up in a small class of just 100 or so lines, and exposed with a simple fine-grained interface of just a few methods. Compose these together and you have a logging service that's flexible, highly testable in all aspects, and most importantly, can evolve and be maintained independently along several orthogonal dimensions, without interfering with the other functionality. And on top of all this, you get the automatic benefit that it's much more friendly to dependency injection and unit testing.
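
Here's a sketch of what that decomposition might look like (all names hypothetical):

    // Each orthogonal concern gets its own small interface and implementation.
    public enum Severity { Fatal, Exception, Progress, Verbose }

    public interface ILogStream { void Write(string line); }    // wraps the file stream
    public interface ILogFormatter { string Format(Severity severity, string message); }
    public interface ISeverityFilter { bool ShouldLog(Severity severity); }

    // The composed service: each piece can evolve and be tested independently.
    public class Logger
    {
        private readonly ILogStream stream;
        private readonly ILogFormatter formatter;
        private readonly ISeverityFilter filter;

        public Logger(ILogStream stream, ILogFormatter formatter, ISeverityFilter filter)
        {
            this.stream = stream;
            this.formatter = formatter;
            this.filter = filter;
        }

        public void Log(Severity severity, string message)
        {
            if (!filter.ShouldLog(severity)) return;
            stream.Write(formatter.Format(severity, message));
        }
    }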

The other important lesson to learn is that SRP doesn't just apply to classes but also to methods. It's almost never necessary for a method to be more than 30 or so lines long, at the outside. A method of such restricted size will inevitably have fewer arguments to worry about, for starters. And further, it almost inherently prevents spaghetti code. Purely by the act of breaking out small operations of a handful of steps apiece, and giving each a highly specific and expressive name, you can avoid the "flying V" of nested loops and conditionals. You can avoid long-lived context variables and status flags. You can avoid stretching tightly coupled cause and effect relationships across multiple screens-worth of code. And you'll likely find yet more places to naturally slice up a class into separate responsibilities.
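
As a tiny illustration (hypothetical types), the nesting flattens out once each small operation gets its own intention-revealing method:

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical example; Order is assumed to expose IsPaid and Items.
    public class OrderShipper
    {
        public void ShipReadyOrders(IEnumerable<Order> orders)
        {
            // No nested conditionals, no long-lived status flags.
            foreach (var order in orders)
            {
                if (IsReadyToShip(order))
                    Ship(order);
            }
        }

        private bool IsReadyToShip(Order order)
        {
            return order.IsPaid && order.Items.All(item => item.InStock);
        }

        private void Ship(Order order)
        {
            // Hand off to a shipping service, record the event, etc.
        }
    }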

All of these help to control sprawling complexity. This sounds unintuitive, because if you follow the rough rule of 30 lines per method, 200 lines per class, you'll almost certainly end up writing far more classes, and probably more code in general. But you will always know exactly where to find code related to any particular thing that may go wrong. And you can always be certain that the portion of the application that is affected by a change to a particular unit of functionality will be highly constrained by virtue of the reduced number of interactions and dependencies that any one unit needs to worry about.

Consider what you can do to take the next step with SRP. Don't be satisfied with the first-order effects of a naive application of the principle. Rededicate yourself, try it out, shrink your units, and see the benefits in your code.

A Taxonomy of Test Doubles

Many words have been written in the TDD community about the myriad ways of mocking in service of unit tests. After all this effort, there remains a great deal of confusion and ambiguity in the understanding of many--maybe even most--developers who are using mocks.

No less than the likes of the eminently wise Martin Fowler has tackled the subject. Fowler's article is indispensable, and it in large part built the foundation of my own understanding of the topic. But it is quite long, and was originally written several years ago, when mocks were almost exclusively hand-rolled, or created with the record/replay idiom that was popular in mocking frameworks before lambdas and expressions were added to C# and VB.NET with Visual Studio 2008. Add to that the fact that the article was written in the context of a long-standing argument between two different philosophies of mocking.

Unfortunately these arguments continue on even today, as can be seen in the strongly-worded post that Karl Seguin wrote last week. Looking back now, with several more years of community experience and wisdom in unit testing and mocking behind us, we can bring a bit more perspective to the discussion than what was available at that time. But we won't throw away Fowler's post completely. Within his post, there are firm foundations we can build on, in the definitions of the different types of mocks that Fowler identified.

There are four primary types of test doubles. We'll start with the simplest, and move through in order of ascending complexity.

Dummies

A dummy is probably the most common type of test double. It is a "dumb" object that has no real behavior. Methods and setters may be called without any exception, but without any side-effect, and getters will return default values. Dummies are typically used as placeholders to fill an argument or property of a specific type that won't actually be used by the test subject during the test in question. While a "real" object wouldn't actually be used, an instance of a concrete type may have strings attached, such as dependencies of its own, that would make the test setup difficult or noisy.

Dummies are most efficiently created using a mock framework. These frameworks will typically allow a mock to be created without actually configuring any of the members. Instead they will provide sensible defaults, should some innocuous behavior be necessary to satisfy the subject.
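
For instance, with a framework like Moq (my choice here; the type names are hypothetical), a dummy takes a single line:

    // No setup at all: the mock's default do-nothing behavior is exactly
    // what a dummy needs.
    var dummyNotifier = new Mock<INotifier>().Object;

    // The subject requires an INotifier, but this particular test never
    // exercises it.
    var subject = new OrderProcessor(dummyNotifier);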

Stubs

A stub is a test double which serves up "indirect input" to the test subject. An indirect input is information that is not provided to an object by the caller of its methods or properties, but rather in response to a method call or property access by the subject itself, to one of its dependencies. An example of this would be the result of a factory creation method. Factories are a type of dependency that is quite commonly replaced by a stub. Their whole purpose is to serve up indirect input, toward the goal of avoiding having to provide the product directly when it may not be available at the time.

Stubs tend to be quite easy to set up even with more primitive mocking frameworks. Typically, all that is needed is to specify ahead of time the value that should be returned in response to a particular call. The usual simplicity of stubs should not be taken as false comfort that the doubles are not too complicated, however. Stubs can get quite complex if they need to yield a variety of different objects over multiple calls. The setup for this kind of scenario can get messy quick, and that should be taken as a sign to move on to a more complex type of double.
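
A simple stub with Moq might look like this (hypothetical names), serving a canned product through a factory dependency:

    // The stub is configured ahead of time to return a known value.
    var widget = new Widget();
    var factory = new Mock<IWidgetFactory>();
    factory.Setup(f => f.Create()).Returns(widget);

    // When the subject calls Create() internally, it receives our widget.
    var subject = new WidgetConsumer(factory.Object);
    subject.DoWork();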

Mocks

A mock is a type of test double that is designed to accept and verify "indirect output" from the subject class. An indirect output is a piece of information that is provided by the test subject to one of its dependencies, rather than as a return value to the caller. For example, a class that calls Console.WriteLine with a message for printing to the screen is providing an indirect output to that method.

The term "mock" for a particular type of test double is in a certain way unfortunate. In the beginning there was no differentiation. All doubles were mocks. And all the frameworks that facilitated easy double creation were called mocking frameworks. The reason that "mock" has stuck as a particular type of double is because in those beginning times, most test doubles tended to take a form close to what we today still call a "mock". Mocks were used primarily to specify an expectation of a particular series of method calls and property access.

These "behavioral mocks", or "classical mocks" as Fowler calls them, gave birth to the record/replay idiom for mock configuration that reached its peak in the days of RhinoMocks. And due to the tendency of inexperienced developers to create complicated object interactions and temporal coupling, mocks continue to be a very popular and common form of test double. Mocking frameworks make it far easier to unit test classes that rely on these types of coupling. This has led many to call for the abolishment of mocks and mocking frameworks in a general sense, claiming that they provide a crutch that makes it too easy to leave bad code in place. I'm sympathetic to the sentiment, but I think that this is throwing the baby out with the bathwater.
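
With a modern framework like Moq the record/replay ceremony is gone, but the verification idea is unchanged. A sketch, with hypothetical names:

    // The mock captures the indirect output; the test verifies it after acting.
    var log = new Mock<ILogger>();
    var subject = new PaymentProcessor(log.Object);

    subject.Process(new Payment());

    // Assert that the subject reported its work exactly once.
    log.Verify(l => l.Log(It.IsAny<string>()), Times.Once());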

Fakes

Fakes are the most complicated style of test double. A fake is an object that acts simultaneously as both a stub and a mock, providing bidirectional interaction with the test subject. Often fakes are used to provide a substantial portion of the dependency's interface, or even all of it. This can be quite useful in the case of a database dependency, for example, or a disk storage service. Properly testing an object that makes use of storage or persistence mechanisms often requires testing a full cycle of behavior which includes both pushing to and pulling from the storage. An in-memory fake implementation is often a very effective way of avoiding relying on such stateful storage in your tests.

Given their usefulness, fakes are also probably the most misused type of test double. I say this because many people create fakes using a mocking framework, thinking they are creating simple mocks. Or worse, they knowingly implement a full-fledged fake using closures around the test's local variables. Unfortunately, due to the verbosity of mocking APIs in static languages, this can very easily become longer and more complex code than an explicit test-specific implementation of the interface/base class would be. Working with very noisy, complicated, and fragile test setup is dangerous, because it's too easy to lose track of what is going on and end up with false-passes. When your test's "arrange" step starts to overshadow the "act" and the "assert" steps, it's time to consider writing a "hand-rolled fake". Hand-rolled fakes not only remove brittle and probably redundant setup from your tests, but they also often can be very effectively reused throughout all the tests for a given class, or even multiple classes.
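
A hand-rolled fake for a simple storage interface might be no more than this (hypothetical names; ICustomerStore and Customer are assumed to exist in the system under test):

    using System.Collections.Generic;

    // Acts as both stub and mock: it accepts writes and serves them back as
    // reads, so a test can exercise a full store-then-retrieve cycle in memory.
    public class InMemoryCustomerStore : ICustomerStore
    {
        private readonly Dictionary<int, Customer> rows =
            new Dictionary<int, Customer>();

        public void Save(Customer customer)
        {
            rows[customer.Id] = customer;
        }

        public Customer Load(int id)
        {
            Customer found;
            return rows.TryGetValue(id, out found) ? found : null;
        }
    }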

It's not Just Academic

These are the primary categories into which nearly all, if not all, test doubles can be grouped. Fowler did a great job of identifying the categories, but I think this crucial information is buried within a lot of context-setting and illustration that doesn't necessarily offer great value today. Mocking is ubiquitous among the subset of developers that are doing unit testing. But too many people go about unit testing in an ad hoc fashion, rather than deliberately with a plan and a system for making sense of things. I believe that a simple explanation of the major types and usages of test doubles, as I've tried to provide here, can aid greatly in bringing consistency and clarity of intent to developers' unit tests. At the very least, I hope it can instill some confidence that, with a little discipline, pattern and reason can be found in the often messy and overwhelming world of unit testing.

YAGNI Abuse

Have you ever proposed a code change or a course of action in a project for the purpose of improving the stability and maintainability of the code base, only to have someone dispute the need on the basis of YAGNI? I was flummoxed the first time this happened to me. Since then I've learned that it's not at all rare, and in fact may even be common.

The YAGNI principle is a wonderful thing. Used properly, it can have a huge beneficial impact on your productivity, your schedule, and on the maintainability of your product. But like so many other important ideas in the history of software development, YAGNI has become a poorly understood and misused victim of its fame. Through constant abuse it has become difficult to communicate the sentiment that it was intended for without a thorough explanation. And I can't count the number of times I've heard YAGNI cited in a completely incorrect or even dangerous way.

The term "YAGNI" has fallen prey to a similar disease as "agile". People often invoke it as an excuse not to do something that they don't want to do. Unfortunately, this quite often includes things that they should do. Things that have long constituted good design, and good software engineering practice. A few examples of things that I have personally been horrified to hear disputed on YAGNI grounds include:

These are all activities that are strongly valued and diligently practiced in the most productive, successful, and small-A-agile software development organizations and communities. For myself and many of you out there, it's patently obvious that this is a subversion and abuse of the YAGNI strategy. Your first instinct in response to this kind of misuse is to say, with no little conviction, "that's not what YAGNI means."

This, of course, will not convince anyone who has actually attempted to use the YAGNI defense to avoid good engineering practices. But to refute them, you needn't rely solely on the forceful recitation of the principle as they do. Fortunately for us, YAGNI is not an elemental, indivisible tenet of software engineering. It did not spring fully-formed from Ron Jeffries' head. Rather it is based in experience, observation, and analysis.

It is clear from reading Jeffries' post on the XP page that the key to sensible use of the YAGNI principle is remembering that it tells you not to add something that you think you might or probably will need, or even certainly will need, in the future. YAGNI is a response to the urge to add complexity that is not bringing you closer to the immediate goal. Particularly common instances of true YAGNI center on features that haven't been identified as either crucial or wanted, such as configurability, alternative logging mechanisms, or remote notifications.

Looking at my original list, it is clear that none of these things truly add complexity in this way. A very naive metric of complexity such as "number of code entities" may seem to indicate the opposite. But these are actually all established and reliable methods for controlling complexity. What these techniques all have in common is that they restrict the ways in which parts of your program are allowed to interact with other parts of your program. The interaction graph is the most dangerous place for complexity to manifest in a program, because it compounds the difficulty of changing any one part of the application without affecting the rest of it like a row of dominoes. The practices I identified above, which are so often refuted as "adding complexity", are some of the many ways to guide your application toward a sparse, well-layered interaction graph, and away from a densely tangled one.

There is a multitude of practices, patterns, and design principles that help keep your modules small, their scopes limited, and their boundaries well-defined. YAGNI is one of them, but not the only one. Claiming YAGNI to avoid this kind of work is "not even wrong". Not only are you gonna need it, but you do need it, right from day one. Working without these tools is seeding the ground of your project with the thorns and weeds of complexity. They provide you with a way to keep your code garden weed-free. In this way they are kin to YAGNI, not its enemy. Claiming otherwise reveals either a disrespect for, or a lack of understanding of, the benefits of good design and engineering practices in a general sense. So next time someone sets up this contradiction in front of you, don't let them get away with it. Show your knowledge, and stand up for quality and craft.

Retrospective on a Week of Test-First Development

Any programmer who is patient enough to listen has heard me evangelizing the virtues of Test-Driven Design. That is, designing your application, your classes, your interface, for testability. Designing for testability unsurprisingly yields code which can very easily have tests hung onto it. But going beyond that, it drives your code to a better overall design. Put simply, this is because testing places the very same demands on your code as does incremental change.

You likely already have an opinion on whether that is correct or not. In which case, I'm either preaching to the choir, or to a brick wall. I'll let you decide which echo chamber you'd rather be in, but if you don't mind hanging out in the pro-testability room for a while, then read on.

Last week I began a new job. I joined a software development lab that follows an agile process, and places an emphasis on testability and continuous improvement. The lead architect on our development team has encouraged everyone to develop ideally in a test-first manner, but I'm not sure how many have taken him up on that challenge. I've always wondered how well it actually works in practice, and honestly, I've always been a bit skeptical of the benefits. So I decided this big change of environment was the perfect opportunity to give it a shot.

After a week of test-first development, here are the most significant observations:
  1. Progress feels slower.
  2. My classes have turned out smaller, and there are more of them.
  3. My interfaces and public class surfaces are much simpler and more straightforward.
  4. My tests have turned out shorter and simpler, and there are more of them.
  5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.
  6. I've never been so confident before that everything works as it is supposed to.

Let's break these out and look at them in detail.

1. Progress feels slower.

This is the thing I worried most about. Writing tests has always been an exercise in patience, in the past. Writing a test after writing the subject means sitting there and trying to think about all the ways that what you just wrote could break, and then writing tests for all of them. Each test includes varying amounts of setup and dependency mocking. And mocking can be tough, even when your classes are designed with isolation in mind.

The reality this week is that yes, from day to day, hour to hour, I am writing less application code. But I am re-writing code less. I am fixing code less. I am redesigning code less. While I'm writing less code, it feels like each line that I do write is more impactful and more resilient. This leads very well into...

2. & 3. My classes have turned out smaller, and there are more of them.
My interfaces and public class surfaces are much simpler and more straightforward.

The next-biggest worry I had was that in service of testability, my classes would become anemic or insipid. I thought there was a chance that my classes would end up so puny and of so little presence and substance that it would actually become an impediment to understandability and evolution.

This seems reasonable, right? Spread your functionality too thin and it might just evaporate like a puddle in dry heat. Sprinkle your functionality across too many classes and it will become impossible to find the functionality you want.

In fact the classes didn't lose their presence. Rather I would say that their identities came into sharp and unmistakable focus. The clarity and simplicity of their public members and interfaces made it virtually impossible to misuse them, or to mistake whether their innards do what they claim to. This enhances the value and impact of the code that consumes it. Furthermore it makes test coverage remarkably achievable, which is something I always struggled with when working test-after. On that note...

4. My tests have turned out simpler, and there are more of them.

The simple surface areas and limited responsibilities of each class significantly impacted the nature of the tests that I am writing, compared to my test-after work. Whereas I used to spend many-fold more time "arranging" than "acting" and "asserting", the proportion of effort this step takes has dropped dramatically. Setting up and injecting mocks is still a non-trivial part of the job. But now this tends to require a lot less fiddling with arguments and callbacks. Of course an extra benefit of this is that the tests are more readable, which means their intent is more readily apparent. And that is a crucial aspect of effective testing.

5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes

There's not too much to say here. It's pretty straightforward. The total amount of time I spent in the debugger and doing manual testing was greatly reduced. Most of my debugging was of the arrangement portions of tests. And most of that ended up being due to my own confusion about bits of the mocking API.

6. I've never been so confident before that everything works as it is supposed to.

This cannot be overstated. I've always been fairly confident in my ability to solve problems. But I've always had terrible anxiety when it came to backing up correctness in the face of bugs. I tend to be a big-picture thinker when it comes to development. I outline general structure, but before ironing out all the details of a given portion of the code, I'll move on to the interesting work of outlining other general structure.

Test-first development doesn't let me get away with putting off the details until there's nothing "fun" left. If I'm allowed to do that then by the time I come back to them I've usually forgotten what the details need to be. This has historically been a pretty big source of bugs for me. Far from the only source, but a significant one. Test-driven design keeps my whims in check, by ensuring that the details are right before moving on.

An Unexpected Development

The upshot of all this is that despite the fact that some of the things I feared ended up being partially true, the net impact was actually the opposite of what I was afraid it would be. In my first week of test-first development, my code has made a shift toward simpler, more modular, more replaceable, and more provably correct code. And I see no reason why these results shouldn't be repeatable, with some diligence and a bit of forethought in applying the philosophy to what might seem an incompatible problem space.

The most significant observation I made is that working like this feels different from the other work processes I've followed. It feels more deliberate, more pragmatic. It feels more like craft and less like hacking. It feels more like engineering. Software development will always have a strong art component. Most applied sciences do, whether people like to admit it or not. But this is the first time I've really felt like what I was doing went beyond just art plus experience plus discipline. This week, I feel like I moved closer toward that golden ideal of Software Engineering.

Strategy vs. Tactics in Coding Standards

I'm starting a new job on March 7. As I wrapped up my time with the company where I've spent the past 3 years, I spent some time thinking about the decisions and responsibilities that I was given, and that I took on, while there. One of the reasons I was hired was to bring to the development team my more formal education and professional experience as a software developer. And one of the responsibilities I was allowed was to lead regular discussions with the other developers. The goal of these discussions was to help our team to become more professional in the way we developed software. Of course it was only a matter of time before someone on the team brought up coding standards.

I've had enough experience with coding standards to have developed some skepticism to the way that they are typically applied. It sounds good, in theory. Bring some consistency, order, and predictability to the source code produced by the group. But in my experience, the practice rarely delivers on the promise. It's easy to start a fight on this topic, of course. It tends to be a "religious" topic in that people have strong and entrenched opinions, and are rarely shaken from them regardless of argument. This is unfortunate, because I think there is a way to make them work. But it requires thinking about coding standards in a different kind of way.

The challenge of establishing standards, and the reason standards tend to be so fractious, is that "what works" must necessarily vary depending on the case. But very rarely does an organization who establishes coding standards allow for much variance in response to new problems or process friction. To figure out why this is, and how we can deliver on some of the promise of standards without resorting to brainwashing, let's take a closer look at the types of coding standards we see in the wild.

In my experience, coding standards tend to fall into one of two broad categories. There are strategic coding standards, which are general and outcome-oriented. And there are tactical coding standards, which are specific and mechanics-oriented.

I'll address the latter first, as these are the kinds of standards that tend to start religious wars. Here are some examples of tactical coding standards:
  • All method headers must include a description, pre-conditions, side-effects, argument descriptions, and return value description.
  • Use C#'s "var" keyword only for variables of anonymous data types.
  • Place each method argument on its own line.

At this point you're probably at least nodding your head in recognition. Maybe some of you are nodding in satisfied approval, while others are holding back an outburst of disgust. When most people hear the words "coding standards", the tactical ones are the ones they think of. And these often evoke strong emotional responses. Their most common adherents tend to be managers or project leads, and often regular developers who've been around the block a few times.

The reason people want tactical coding standards is simple. They bring a modicum of consistency to a wild and woolly world. You often can't trust that the guy writing code in the next cube can code his way out of a paper bag. So when you have to take over his code after he's gone it's nice to know that, at the very least, he followed the standards. If all else fails, there is one thing that will provide sure footing as you plumb the dangerous depths of their code.

I understand this, and I sympathize. But at the same time, I chafe under these standards. They feel like micromanagement to me. It makes me feel like my coworkers don't trust me to do the thing that I was hired to do. There are organizations where this can't be taken for granted. But my own personal response to those environments is to leave them. I want to work with people I can trust at this level, and that can trust me the same.

It's not all bad for tactical standards, and we'll circle back in a bit. But for now let's look at some examples of strategic coding standards.
  • Write informative class headers.
  • Endeavor to keep methods to less than 25 lines in length.
  • Use whitespace and indenting to identify related code.

Strategic coding standards are broad. They do not tell the developer exactly what to write. Rather they set an expectation of the kind of code that should be written, and leave it to the developer as to how exactly that kind of code should be created. In my experience, these are the kinds of standards that are okay to mandate and enforce from a management perspective. Evoke the general feel of the code you want your team to produce, and then let them use their skill, experience, and professionalism to decide how to make it happen.

You can probably tell that in general I feel a lot better about this type of standard than the other. For one, they stay out of your way on a minute-to-minute basis. They're also far easier to get people to agree upon. Rather than being a cynical defensive measure against bad code, they empower the developer while fostering good coding habits.

Now having said all that, I do think there is a place for tactical standards. One big reason that I named these categories as I did is because I think that the role of establishing each type of standard can map to the organization similarly to how the roles map in the military, from which these terms are taken. Strategic standards can be mandated by leadership without stifling the developers. While the particular tactics can be determined by the people "on the ground".

There are many cases where tactical coding standards are beneficial. But it almost always serves the organization better to let the developers establish and enforce them. Take a team with multiple people all sharing equally in ownership of a codebase, each working from day to day on code written by the others. Imagine further that the code is sensitive in some way that can't abide regression bugs or the like. Maybe there is also a huge amount of code, or a large number of coders.

In these types of situations, it's crucial that the developers be able to pick up a strange piece of code and get a quick picture of what's going on. And to be able to modify the code without introducing a foreign-looking island of code to trip up the next guy. To do this, the developers have to be comfortable with the low-level style aspects of the code, both to read and to write. The best hope a team has of developing this comfort is to, as a self-directed group, come to agreement on common-denominator style standards over time and through shared experience.

There is a balance to be struck here. We want to ensure quality output from our teams, but we don't want to turn our developers into code-generators that don't take responsibility for their own work product. I think this balance is best struck by the proper application of each type of coding standard, in the proper context. If leaders can establish strategic standards that express high-level goals for clean code and understandable designs, then they can empower their teams to find solutions to the lower-granularity challenges such as understanding each other, and reducing the friction involved in cooperation on a shared codebase. This is strategy and tactics in practice, and at its best.

Anatomy of a Windows Service - Part 4

Deciding how to trigger the work of the service usually comes down to some sort of threading solution. And threading in Windows services is, as everywhere else, a thorny issue. The most basic need we have is to be able to respond to orders from the Service Control Manager (SCM) without blocking processing at an inconvenient spot. Beyond that, we may want to have a bit of work kicking off periodically, with a short period, or we may want to respond to some environmental event.

Responding to environmental events is fairly easy when the framework provides a callback mechanism for it, such as the FileSystemWatcher. But if there's no such facility, then you're stuck with polling, which is in the same boat as the guy who needs to do work on a quick periodic schedule. This leads us to worker threads, and all the headaches that go along with that.

I have no intention of getting deep into threading concepts and strategies here. Not least because I am woefully uninformed and inexperienced with them. If you need to be highly responsive, highly predictable, or highly parallel, that's an entirely new problem space. So it will have to suffice to say here that you should not just hack something together quickly. Do your research and do it right.

What I have done in the past is to use a System.Threading.Timer to fire a callback on my desired period. But I use a flag to make sure multiple work units do not process in parallel, and I don't worry about missing a period by a nanosecond simply because the work completed just after I checked the flag. Instead I just wait for the next one. This limits the responsiveness of my service, but it's simple and reliable because it sidesteps the issues that arise out of mutually writeable shared state and truly parallel work.

For our sample project here, I introduced this mechanism by creating a worker class that manages the timer. Let's take a look at that.
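
What follows is a sketch reconstructed from the description below; the names (GreetWorker, IGreeter) and some details are assumptions.

    using System;
    using System.Threading;

    // Periodic worker around a System.Threading.Timer.
    public class GreetWorker
    {
        private static readonly TimeSpan Interval = TimeSpan.FromSeconds(5);

        private readonly IGreeter greeter;
        private Timer timer;

        // Flag properties guarding the lifecycle and the timer callback.
        private bool Started { get; set; }
        private bool Working { get; set; }

        public GreetWorker(IGreeter greeter)
        {
            this.greeter = greeter;
        }

        public void Start()
        {
            if (Started) return;
            // A fresh timer each time lets the worker be reused after Stop.
            timer = new Timer(OnTimer, null, TimeSpan.Zero, Interval);
            Started = true;
        }

        public void Stop()
        {
            if (!Started) return;
            // Dispose(WaitHandle) signals when any in-flight callback is done,
            // so the timer is fully quiet by the time Stop returns.
            using (var done = new ManualResetEvent(false))
            {
                timer.Dispose(done);
                done.WaitOne();
            }
            timer = null;
            Started = false;
        }

        private void OnTimer(object state)
        {
            if (Working) return; // Skip this period rather than overlap work.
            Working = true;
            try { greeter.Greet(); }
            finally { Working = false; }
        }
    }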


Note the flag properties, which are used as a guard on the timer event to make sure that processing doesn't re-enter before the previous iteration completes. The timer interval is set to start things rolling, and cleared to stop them. The added complexity in the Stop method is crucial to understand as well. Timers are tricky to get rid of without blocking the thread. We use a wait handle to do it. Wait handles are worth reading up on if you deal with threading at all. Another nice feature of this worker class is that an object of this type can be reused: Start can be called to spin things up again after Stop has finished.

The worker presents essentially the same interface as the service itself, which may seem a bit redundant. All I am really trying to achieve by this is to respect the separate responsibilities of managing the service as a whole, as opposed to managing the work timer and triggers. The service itself is just processing events from the SCM. It need not know, when it tells the worker to start, whether the worker is firing up a new thread to get started, waiting for the moment the current one finishes, or just taking a pass on this period.

The service class will now delegate to the worker. This is very straightforward. I just had to replace the IGreeter property with a property for the worker. Now in OnStart, we spin up the worker, and in OnStop we finally have some code as well, to spin down the worker.
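
A sketch of the change (member names assumed):

    using System.ServiceProcess;

    public partial class GreetService : ServiceBase
    {
        // Property injection, mirroring the IGreeter property this replaces.
        public GreetWorker Worker { get; set; }

        protected override void OnStart(string[] args)
        {
            Worker.Start();
        }

        protected override void OnStop()
        {
            Worker.Stop();
        }
    }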


You can find the source code for our service project at this point tagged in the repo for this post on GitHub.

Now that we've got the service doing everything it's supposed to do, we may decide that we'd like to be able to change exactly how it does its job. Essentially, we want to make it configurable. The only real parameters of our service at this point are the work interval and the file name, and those are hard-coded. We could easily make the service more flexible by taking these hardcoded values and making them configurable.

One way to do that would be to use the service argument array, which operates just like the regular app argument array. We don't have much to configure here, so that would probably work for us. But to imitate a real-world scenario where there may be lots to configure, let's pretend that won't be sufficient.

Another option would be to use a config file. Services, being normal .NET executables, can have app.config files just like a command line or Windows app does. I'm personally not fond of the default .NET config mechanism though, so I think I'll roll my own, because that can be quick and easy too, especially when our needs are as simple as they are here. Let's throw together a little XML format for our two settings.
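
Something like this, say (the element names are my invention):

    <!-- A guess at the settings format; element names are assumptions. -->
    <GreeterSettings>
      <WorkIntervalSeconds>5</WorkIntervalSeconds>
      <FileName>C:\Hello\World.txt</FileName>
    </GreeterSettings>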


We can load this pretty easily without relying on the magic of the .NET configuration mechanism, by creating a serializable XML data object. We can put it behind a general settings interface, and have a loader class to populate it. The interface will be enough to shield the rest of the app from the persistence details.

Here is the data class, and the settings interface.
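
A sketch of their likely shape (names assumed):

    using System.Xml.Serialization;

    // The interface shields consumers from the persistence details.
    public interface IConfigSettings
    {
        int WorkIntervalSeconds { get; }
        string FileName { get; }
    }

    [XmlRoot("GreeterSettings")]
    public class ConfigSettings : IConfigSettings
    {
        public int WorkIntervalSeconds { get; set; }
        public string FileName { get; set; }
    }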


The loader class is nearly as simple. Standard XML deserialization.
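
A sketch (the file path is an assumption):

    using System.IO;
    using System.Xml.Serialization;

    public class ConfigLoader
    {
        public ConfigSettings Load()
        {
            var serializer = new XmlSerializer(typeof(ConfigSettings));
            using (var stream = File.OpenRead("GreeterService.settings.xml"))
            {
                return (ConfigSettings)serializer.Deserialize(stream);
            }
        }
    }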


We can ensure that the code that uses these classes is kept simple, by registering these properly with the IoC container.
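
The registration might look roughly like this (builder is the Autofac ContainerBuilder):

    // One ConfigSettings instance per lifetime scope, produced via the
    // ConfigLoader on first resolution within that scope.
    builder.RegisterType<ConfigLoader>().AsSelf();
    builder.Register(c => c.Resolve<ConfigLoader>().Load())
           .As<IConfigSettings>()
           .InstancePerLifetimeScope();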


This registration will ensure that any IConfigSettings-typed dependency within a given Autofac lifetime context will receive the same ConfigSettings instance. The first resolution of that ConfigSettings instance will be created using the ConfigLoader service.

After identifying what we want to be configurable, we can go about defining some settings providers. I'll remind you that the value of settings providers is that they allow you to define dependencies on a given configuration value, while also decoupling the dependent class from the other unrelated settings that happen to cohabit the configuration source.

Below is the work interval settings provider. I'll spare you the very similar code found in the file name settings provider. The IoC registration is likewise straightforward.
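
A sketch of its likely shape (names assumed):

    using System;

    // Exposes just the one value its dependents care about, decoupling them
    // from the rest of the configuration source.
    public class WorkIntervalProvider
    {
        private readonly IConfigSettings settings;

        public WorkIntervalProvider(IConfigSettings settings)
        {
            this.settings = settings;
        }

        public TimeSpan WorkInterval
        {
            get { return TimeSpan.FromSeconds(settings.WorkIntervalSeconds); }
        }
    }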


With the service now configurable, we can perform one final trick. When a service stops, the process may not necessarily stop. So we need some way to establish when the configuration is reloaded. The natural place for this to happen is when the OnStart event is triggered. Since the config file is loaded when a new IConfigSettings dependency is resolved for the first time in a context, we just need a new context to be spun up with each call to OnStart.

We can make this happen by taking advantage of Autofac's relationship types again. A Func<T> dependency will re-resolve each time the function is executed. And an Owned<T> dependency will cause a new lifetime scope to be created each time the dependency is resolved. So all we need to do is combine the two.

Here is the new service class, with the Func<Owned<T>> dependency.
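
A sketch of what that looks like (member names assumed):

    using System;
    using System.ServiceProcess;
    using Autofac.Features.OwnedInstances;

    public partial class GreetService : ServiceBase
    {
        private readonly Func<Owned<GreetWorker>> workerFactory;
        private Owned<GreetWorker> worker;

        public GreetService(Func<Owned<GreetWorker>> workerFactory)
        {
            this.workerFactory = workerFactory;
        }

        protected override void OnStart(string[] args)
        {
            if (worker != null) return;
            worker = workerFactory();   // New lifetime scope; config loads here.
            worker.Value.Start();
        }

        protected override void OnStop()
        {
            if (worker == null) return;
            worker.Value.Stop();
            worker.Dispose();           // Tears the lifetime scope down.
            worker = null;
        }
    }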


When OnStart is called, assuming the service isn't already running, we create a new worker from the delegate dependency. With this new worker comes a new scope, so its dependency on IConfigSettings will be resolved with a new object, and the config file will be loaded on the spot. Conversely, OnStop allows the worker to go out of scope, resulting in the lifetime scope being cleaned up.

With that, we have finally completed our Hello World Windows service. The guts are all there, and it's cleanly designed around interfaces for maximum testability, should we desire to have a unit test suite. And finally, we use Autofac to conveniently manage a relevant object lifetime scope, making sure our settings are loaded and unloaded at the appropriate time. The actual core behavior of the service itself can be as simple or as complex as it needs to be, while remaining easily accessible and manageable via the worker object and its lifetime scope.

The full source for the service in its final incarnation can, like all the waypoints, be found in this post's GitHub repo. Thanks for bearing with me through this month of Windows service architecture! I'll see if I can't find something new and different to tackle for next week.

Anatomy of a Windows Service - Part 3

After going through the remaining work I had planned for our service specimen, I realized that the resulting post would be very long, and so I have decided to extend the series by one more post. This week we give the app a job, and add logging.

We've spent enough time working on this service so far that we should probably give it a job at this point. You may have guessed from the name that I've been intending to make this a "hello world" for services. The types of things that services typically do are either to respond to system events or network communications, or to process units of work as they become available. A nice easy job we can give to our service is to touch a file on a very short interval--an interval short enough that a scheduled task isn't appropriate. Let's say 5 seconds.

We'll worry about handling the interval later. For now, let's just add the behavior so that it happens once, when the service starts.

First we create a straightforward little task-oriented class and interface. This should require no explaining. And of course we'll register this with the IoC container as well. You should be familiar with that code by now, so from here on out, I won't mention the IoC container unless something out of the ordinary is required for the registration or resolution.
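
A sketch of what these might look like (names assumed):

    using System;
    using System.IO;

    public interface IGreeter
    {
        void Greet();
    }

    public class FileGreeter : IGreeter
    {
        public void Greet()
        {
            Directory.CreateDirectory(@"C:\Hello");
            // "Touch" the file: create it if missing, update its timestamp
            // if present.
            File.AppendAllText(@"C:\Hello\World.txt", string.Empty);
            File.SetLastWriteTimeUtc(@"C:\Hello\World.txt", DateTime.UtcNow);
        }
    }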


With this done, we add a dependency to the interface in our GreetService constructor, and then a call to the task in OnStart.
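
Roughly (reconstructed sketch):

    using System.ServiceProcess;

    public partial class GreetService : ServiceBase
    {
        private readonly IGreeter greeter;

        public GreetService(IGreeter greeter)
        {
            this.greeter = greeter;
        }

        protected override void OnStart(string[] args)
        {
            greeter.Greet();
        }
    }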


This will cause the C:\Hello\World.txt file to be created when the service starts. The service will then be running, according to the Service Control Manager (SCM), and can be stopped. But while running it will do nothing more. If you want to see the full sample project source at this point of progress, I have tagged it in the GitHub repo.

This is a good time to add some logging. If we have problems later on, logging will help us to troubleshoot what's going on. Right now, if the service were to fail somehow, we wouldn't know why. If AutoLog were set to true in the service constructor, then the service would log to the Application event log when it crashed, but not with much useful information. And if our log is an important one, we don't want that anyway. Rather, it should have its own log, so we can always be certain we're looking at the right info and we don't have to sift through hundreds of other entries to find it.

Adding a custom event log is actually quite simple. There are two parts to the task. One is to make sure it gets created when the service is installed. The other is to use it. Creating an event log on install is trivial, mostly because it's already being done, and we can just hook our own data onto that. That's right, the ServiceInstaller class by default contains an EventLogInstaller that is configured to install an event source for our service in the Application event log. All we need to do is override that with our own log and source name.

Since both the installer and the logger class we'll write will need to know those two pieces of text, we'll create a settings provider for them. This will allow us to control the values for both usages in a single place.
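
A sketch (names and values assumed):

    // Single source of truth for the log and source names, shared by the
    // installer and the logger.
    public class EventLogNames
    {
        public string LogName { get { return "GreeterService"; } }
        public string SourceName { get { return "GreeterService"; } }
    }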


Now that we've got a source from which to obtain the proper names, we can add a dependency to our service installer, and use it to override the default event log settings. All we need to do is pull the default EventLogInstaller out of the ServiceInstaller.Installers collection, and change the properties.
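
Inside the project installer, this might look roughly like so (requires System.Linq and System.Diagnostics; names assumed):

    // Pull the default EventLogInstaller out of the ServiceInstaller's
    // Installers collection and override its log and source names.
    var names = new EventLogNames();
    var logInstaller = serviceInstaller.Installers
                                       .OfType<EventLogInstaller>()
                                       .Single();
    logInstaller.Log = names.LogName;
    logInstaller.Source = names.SourceName;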


When we go to install the service using InstallUtil we will see it output that it is creating a new event source and event log. And if you go to the Windows Event Viewer in the Administrative Tools, you'll find a log called GreeterService.

Of course in order for anything to show up in there, we'll need to log some things to it. The safest way to do that is to throw in a logger object that can try to log to the event logs, but which won't crash out if it can't.  The last thing we want is for the service to crash due to a log issue if the work itself is humming along. (Unless, of course, we need an audit trail or something, but that's a different story.) Note the explicit exception suppression in the logger class below for this purpose.
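
A sketch of such a logger (names assumed):

    using System.Diagnostics;

    public class EventLogger
    {
        private readonly EventLog log;

        public EventLogger(EventLogNames names)
        {
            log = new EventLog(names.LogName) { Source = names.SourceName };
        }

        public void Log(string message)
        {
            try
            {
                log.WriteEntry(message);
            }
            catch
            {
                // Deliberately suppressed: a logging failure must never take
                // down a service whose real work is humming along.
            }
        }
    }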


This logger could be enhanced in a few ways, for more serious services. One good thing to do is to support different "levels" of logging. I have four standard levels that I use. In increasingly voluminous order they are: fatal errors, exception cases, progress, and verbose. An enum parameter added to the logging methods would allow the logger to check to see if it's in a mode that should allow an entry for the specified level. A step further, we might replace the String parameters with Func callbacks. The callers can place the actual work of assembling the message into these callbacks, and the logger would only call them if it had determined that the logging would be performed. By only doing the string manipulation if it is absolutely necessary this keeps performance up and avoids unnecessary failure points.
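
A sketch of that enhancement (ShouldLog and WriteSafely are hypothetical helpers, and currentLevel a configured threshold):

    // Levels in increasingly voluminous order, per the text.
    public enum LogLevel { Fatal, Exception, Progress, Verbose }

    // The Func is invoked only if the level check passes, so the string
    // assembly work is skipped entirely when the entry would be discarded.
    public void Log(LogLevel level, Func<string> buildMessage)
    {
        if (level > currentLevel) return;
        WriteSafely(buildMessage());
    }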

With the logger object available it would be a good idea to add some logging when the service is starting and stopping. Here are the updated OnStart and OnStop methods from our GreetService class:
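
Reconstructed, they might look roughly like this (member names assumed):

    protected override void OnStart(string[] args)
    {
        logger.Log("GreetService starting.");
        greeter.Greet();
        logger.Log("GreetService started.");
    }

    protected override void OnStop()
    {
        logger.Log("GreetService stopped.");
    }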


Now you can install the service using InstallUtil.exe, and when you start and stop via the SCM, you'll see entries show up in the GreeterService event log.

We'll stop here for now, as this post is getting long and there is much more to do. You can see the full service code to this point so far where I have tagged it in the GitHub repo. Next time will be our final (I promise) post in this series. We'll bring it home by figuring out the best way to make sure the service does its work when we want it to, and then making it easily configurable, and reconfigurable.

Satisfying A Pair of Same-Typed Dependencies with Autofac

We'll take a break from the series on Windows services this week and cover some more Autofac / IoC / DI material. I hope to finish the services series with the final part next week.

Back in December I wrote a rather long post about the different Autofac relationship types. I covered what I considered to be the big ones, but there were a few that I didn't cover: IIndex, and keyed services. I had an objection to these types that prevented me from examining them any further.

My Initial Objection

It's pretty simple. I don't want to have to reference the IoC container framework in my classes. Accepted wisdom on IoC container usage is that you shouldn't reference the container, and I viewed this as an extension of that. Indeed I prefer to avoid referencing such infrastructure frameworks in core code wherever possible.

I feel the same way about the BCL's TypeConverterAttribute, and the XML serialization attributes, for example. It's an insidious form of coupling, and I really prefer to avoid having these little signposts of what framework I'm working with sprinkled throughout the code. Especially if they represent things that I can't sub out for testing, as is usually the case with the BCL.

In addition to the framework reference objection, I felt that this was just not a responsibility that belonged to the subject class. Using special parameter types or the like to denote such information was "cheating", akin to service location, and undermined the principle of Inversion of Control.

The purist in me had dug in on two separate fronts, and was ready for a fight. But recently I had an experience that taught me the value of these relationship types: I found I needed to inject two different implementations of the same service in the same place. I initially shied away from these relationship types. Instead, I decided I could solve the problem with a clever registration/configuration. After all, deciding who gets what implementations is what that's all about, though typically in a more generalized sense.

The Problem

My situation was that I needed two different runtime instances of an image cache. One for full-size images, and one for thumbnails. And most commonly I needed to reference both of them in the same scope. If the two use cases existed in different contexts, I could possibly use Autofac's nested container support to identify each one appropriately when needed. But in my scenario, I actually had a couple of classes that needed to make use of both caches. I gave the constructor for that class two parameters for injection, both having the cache interface type as their datatype. Then I went to work setting up the registration.

Without a major redesign, the solution has to involve attaching some metadata to each of two separate instances registered with the container. The container needs some way to differentiate between the instances. Since Autofac provides it, there's really no reason at all not to use the keying facility for registration. Given an interface ICache and an implementor Cache, the registration would look as below.
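Something like the following; the CacheType enum used as the key is my own stand-in here, since any equatable type can serve as a key:

```csharp
public enum CacheType
{
    FullSize,
    Thumbnail
}

// Inside the bootstrapper:
var builder = new ContainerBuilder();

// Two separate registrations of the same implementor,
// each a singleton, so we get exactly two instances.
builder.RegisterType<Cache>()
    .Keyed<ICache>(CacheType.FullSize)
    .SingleInstance();

builder.RegisterType<Cache>()
    .Keyed<ICache>(CacheType.Thumbnail)
    .SingleInstance();
```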


Now let's introduce another class that needs to use both of these caches.
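Something like this (browsing members elided):

```csharp
public class ImageBrowser
{
    private readonly ICache _fullSizeCache;
    private readonly ICache _thumbnailCache;

    public ImageBrowser(ICache fullSizeCache, ICache thumbnailCache)
    {
        _fullSizeCache = fullSizeCache;
        _thumbnailCache = thumbnailCache;
    }
}
```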


Again, my instinct is to say that the class shouldn't have to say anything else. It's the container's job to figure out how to fill those dependencies. But what information does the ImageBrowser class expose that the container registration could use to determine which argument gets which instance of ICache? Well right now, there's not much there. Pretty much just the argument names. Can we use those to assign the ICache instances? Yes, but it ain't pretty.

OnPreparing1

The way to access the parameter names for resolution is to hook the OnPreparing event, as seen here:
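(A sketch of the hook, attached to the ImageBrowser registration:)

```csharp
builder.RegisterType<ImageBrowser>()
    .OnPreparing(OnPreparingImageBrowser);
```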


Our implementation of OnPreparingImageBrowser then looks like this:
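(Approximately like this, using Autofac's ResolvedParameter to match on parameter name; assumes a `using System.Linq;` for Concat.)

```csharp
private static void OnPreparingImageBrowser(PreparingEventArgs e)
{
    e.Parameters = e.Parameters.Concat(new Parameter[]
    {
        new ResolvedParameter(
            (p, ctx) => p.Name == "fullSizeCache",
            (p, ctx) => ctx.ResolveKeyed<ICache>(CacheType.FullSize)),
        new ResolvedParameter(
            (p, ctx) => p.Name == "thumbnailCache",
            (p, ctx) => ctx.ResolveKeyed<ICache>(CacheType.Thumbnail))
    });
}
```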


This looks harmless enough, if a bit verbose. But there is a problem. We have encoded parameter names as strings in our registration code. If we ever have to change those parameter names, or add more dependencies to the constructor, this registration code for ImageBrowser will be broken, but it will still compile. So unless you have some pretty magnificent tests for your IoC registration (which it seems is not easy to do), this is a regression bug waiting to happen. Furthermore, this code is very likely useless even if we have another class with these same dependencies. If the constructor arguments are named differently, we'll need another delegate for that set of constructor arguments.

So obviously argument names aren't the ideal way for the subject to communicate information about multiple dependencies of the same service type. And even using the argument names undermines my objection in spirit. It just does it implicitly, rather than explicitly. So let's just accept that we've already taken the first step and go the rest of the way. How can we communicate the necessary information in a sustainable way, that doesn't require us to go mucking around in the registration code if we shuffle things around a bit in ImageBrowser?

Three options spring to mind. One would be to use an attribute applied to the arguments of the constructor. But there's no native support for recognizing these in Autofac, so we'd have to write our own registration and resolution conventions. While it would be more resistant to change than the named parameter mechanism, it would require even more code on the registration side of things, to set up a convention-based registration that keys on the attributes. Not to mention having to define the attributes. This actually sounds like a fun exploration for another day. But today, we just want to get this working with a small amount of intuitive code.

Autofac has two mechanisms which we can use to reduce the amount of code we need here: IIndex and Meta.

IIndex<TKey, TComponent>2

Using Autofac's IIndex mechanism is a simple change from where we are now. It involves a small change to the constructor of ImageBrowser, and removing code from the bootstrapper. That's right, a net reduction in code, without losing functionality. And there's nothing better than a good codectomy. So grab the scalpel and sculpt our new bootstrapper.
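(It ends up looking roughly like this:)

```csharp
var builder = new ContainerBuilder();

builder.RegisterType<Cache>()
    .Keyed<ICache>(CacheType.FullSize)
    .SingleInstance();

builder.RegisterType<Cache>()
    .Keyed<ICache>(CacheType.Thumbnail)
    .SingleInstance();

// No OnPreparing hook needed anymore.
builder.RegisterType<ImageBrowser>();
```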


Note the removal of the OnPreparingImageBrowser method, and the reference to it in the ImageBrowser registration. That's the only change here. It's completely unnecessary code. The cache registrations don't need to change because the IIndex mechanism hooks right into the keyed registration just like our old strategy did.

The ImageBrowser constructor needs to change a bit to trigger resolution using the keyed registration. Here's the updated class:
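(A sketch; IIndex lives in the Autofac.Features.Indexed namespace.)

```csharp
using Autofac.Features.Indexed;

public class ImageBrowser
{
    private readonly ICache _fullSizeCache;
    private readonly ICache _thumbnailCache;

    public ImageBrowser(IIndex<CacheType, ICache> caches)
    {
        // The index exposes the keyed registrations dictionary-style.
        _fullSizeCache = caches[CacheType.FullSize];
        _thumbnailCache = caches[CacheType.Thumbnail];
    }
}
```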


The only change here is that we consolidated the separate cache arguments into one dictionary-style argument.

Meta<TComponent>3

Autofac's Meta mechanism works a little differently in that it allows you to attach data to a registration, and then to express a dependency on that metadata, in the subject class. The data takes the form of name/value pairs. If we wanted to use this, it would change our registration as follows:
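Something like this, sticking with the CacheType values from before (the metadata key name is my own choice):

```csharp
builder.RegisterType<Cache>()
    .As<ICache>()
    .WithMetadata("CacheType", CacheType.FullSize)
    .SingleInstance();

builder.RegisterType<Cache>()
    .As<ICache>()
    .WithMetadata("CacheType", CacheType.Thumbnail)
    .SingleInstance();
```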


We've replaced the call to Keyed with a call to WithMetadata, to which we're essentially passing a name/value pair. There's also the option to attach an actual custom metadata object, and use its property names and values as the name/value pairs. But since we only need the one value, and it's not going to change at runtime, we can just assign the name and value directly.

Once again our constructor will need to change, of course. Here's the version that uses the metadata:
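(A sketch; Meta lives in Autofac.Features.Metadata, and the helper shape is my own guess.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Autofac.Features.Metadata;

public class ImageBrowser
{
    private readonly ICache _fullSizeCache;
    private readonly ICache _thumbnailCache;

    public ImageBrowser(IEnumerable<Meta<ICache>> caches)
    {
        _fullSizeCache =
            caches.Single(CacheTypeChecker(CacheType.FullSize)).Value;
        _thumbnailCache =
            caches.Single(CacheTypeChecker(CacheType.Thumbnail)).Value;
    }

    // Generates a comparison delegate for the given cache type.
    private static Func<Meta<ICache>, bool> CacheTypeChecker(CacheType type)
    {
        return meta => type.Equals(meta.Metadata["CacheType"]);
    }
}
```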


Here we're using the IEnumerable relationship type to get all the metadata objects, and then explicitly picking out the two with the values we're looking for. The CacheTypeChecker method is just a little comparison-delegate generator that saves a bit of repeated code in the constructor.

The Lesser of Two Evils

Now this isn't necessarily the problem that Meta or IIndex are meant to solve. They're really meant more for cases where the needed component is going to depend on runtime conditions. In that case, a simple service finder/locator is sometimes exactly what you want. There are often other ways to get the job done even then, however; for example, using lifetime contexts and contextual state objects to give the container all the information it needs.

In our case though, the dependency that's actually being injected bothers me a bit. It's sort of a limited service locator. The problem with that is that the specific dependencies aren't explicit. ImageBrowser is basically saying, "Er, I can't tell you exactly what I want, but if you give me everything like this, I'll pick out the ones I want". That is definitely not a pure inversion of control in this case, because we need exactly two instances of a very particular type.

We're basically limited to choosing the lesser of two evils. Do we couple to constructor argument names, or take a dependency on a service locator? I'm not sure I feel more strongly about either one. But the IIndex option is definitely the least code, so that's the one I'll go with. And you can bet that if I run across a situation where the decision of which instance I need is made at runtime, I'll look to these solutions first.

Somewhere down the line I'll investigate that attribute convention option I mentioned in passing. I tend to dislike attributes, but if it's our code (rather than framework code) that does both the defining and the consuming... then maybe I can live with it.



GitHub Source Code References
  1. Keyed registration with OnPreparing resolution
  2. Keyed registration with IIndex<TKey, TComponent> dependency
  3. Metadata registration with Meta<TComponent> dependency


Anatomy of a Windows Service - Part 2


This post contains gist code samples. My experiments indicate that these do not display in Google Reader, so if you are reading this there, you may want to click through to the post.

Last time, we dissected a Windows service template project to get a handle on the different moving pieces.  We identified an installer class, as well as two run-time configured installer objects, a service class, and the framework injection point for spinning up a service.

The template is a very simple project, and it gets the job done.  There are some deficiencies though.  From an architectural standpoint, the design doesn't lend itself well to testability, or composability.  It also doesn't provide any good place to bootstrap an IoC container for the installer portion, which would be beneficial to have if we want to have any custom behavior happening at install.  From a feature standpoint, there's no logging, and no worker thread to separate the service control events from the actual behavior of the app.

Let's start out by improving the composability, which will put us on a good footing for testability as well.  The biggest obstacle to that in the template project was all of the direct instantiation and configuration of objects.  For example, in the template project, the ProjectInstaller class's constructor, which is an entry point that is triggered directly by msiexec or installutil, was responsible for creating and initializing the sub-installers.  In our application, we'll defer to an IoC container, rather than construct these objects directly.

Let's add a reference to my personal favorite IoC container, Autofac, and then create a bootstrapper that will do the registration necessary for the installer portion of the app. We'll leave it empty to start, because we haven't established what we'll need from it yet.
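A minimal sketch (the name and shape of the bootstrapper are just my conventions here):

```csharp
using Autofac;

public static class InstallerBootstrapper
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();
        // Installer registrations will go here.
        return builder.Build();
    }
}
```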


Now let's create the ProjectInstaller. The constructor, since it's an application entry point, will have a direct reference to the IoC container. We'll leave out the needless code we identified last time, but don't forget the crucial RunInstaller attribute. The constructor is left responsible for setting up the Installers collection. If we assume that we'll be registering all our installers with the IoC container, then Autofac makes this easy for us with the IEnumerable relationship type, which will resolve all implementors of the specified service that are available at the resolution location. Let's put all that together and see what we get.
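(A sketch of how it might come together:)

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Configuration.Install;
using Autofac;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        var container = InstallerBootstrapper.Build();

        // Resolve every registered Installer implementor and
        // add it to the Installers collection.
        var installers = container.Resolve<IEnumerable<Installer>>();

        foreach (var installer in installers)
            Installers.Add(installer);
    }
}
```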


Compared to the template project, we've eliminated some explicit object instantiation, but we've also lost some explicit configuration. That configuration has to move somewhere else. There are a couple of options for re-creating this functionality with our IoC container in place. One is to give these exact configuration responsibilities, i.e. the setting of properties after construction, to the IoC container. This is not a strong point of most containers--even Autofac. It can be done, but it's not really straightforward, and it results in complex registration code that doesn't communicate well what's going on.

Alternatively, we could move these responsibilities into factories.  I've used this strategy before, and it can be advantageous if there are classes that will have dependencies on the factories themselves.  But in this case, the only thing that would need to know about the factories is the IoC.  So instead, I'd recommend deriving special-purpose sub-classes of the desired installer base classes, and putting the configuration in those constructors instead.  Here's what that looks like.
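(A sketch; the class names and the specific property values here are placeholders.)

```csharp
using System.ServiceProcess;

public class GreeterServiceProcessInstaller : ServiceProcessInstaller
{
    public GreeterServiceProcessInstaller()
    {
        Account = ServiceAccount.LocalService;
    }
}

public class GreeterServiceInstaller : ServiceInstaller
{
    public GreeterServiceInstaller()
    {
        ServiceName = "GreeterService";
        StartType = ServiceStartMode.Manual;
    }
}
```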



Simple, straightforward, self-contained, and single responsibility. Now we just need to update the IoC registration to include them, and they'll get picked up automatically by the ProjectInstaller constructor. We could take advantage of Autofac's fluent registration API and auto-wiring to do this without referencing the individual types. But because I have these two classes in separate namespaces (the service installer lives next to the service, since those must come in pairs), and because the ProjectInstaller derives from the same base, it would actually take more code to deal with those special circumstances. So explicit registration it is.
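(Inside the installer bootstrapper's Build method, something like:)

```csharp
builder.RegisterType<GreeterServiceProcessInstaller>()
    .As<Installer>()
    .InstancePerLifetimeScope();

builder.RegisterType<GreeterServiceInstaller>()
    .As<Installer>()
    .InstancePerLifetimeScope();
```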


Note also here that we've registered these as InstancePerLifetimeScope, because there's really no reason more than one of either of these should be floating around.

That covers the install portion of the service.  We now have separated the three installation components into their own separate places.  Changes to the configuration of any one of them are isolated in their own locations in the code.  And as a bonus we've made the ProjectInstaller constructor trivially simple.

Let's shift gears and take a look at the service portion of the application.  We'll set up a blank bootstrapper for this portion as well. I'll refrain from posting the code as it's essentially identical to what I showed you for the other one.

The remaining two significant pieces that need to be implemented to replicate the functionality we saw in the template project are the service class itself, and the program mainline. The mainline is going to be simple just like the ProjectInstaller. In fact its structure mirrors the installer very closely, just with different type names in a few strategic places.
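(A sketch, assuming a ServiceBootstrapper shaped like the installer one:)

```csharp
using System.Collections.Generic;
using System.Linq;
using System.ServiceProcess;
using Autofac;

static class Program
{
    static void Main()
    {
        var container = ServiceBootstrapper.Build();

        // Resolve every registered service and hand the lot
        // to the framework runner.
        var services = container.Resolve<IEnumerable<ServiceBase>>();

        ServiceBase.Run(services.ToArray());
    }
}
```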


I've chosen to request an IEnumerable, even though we only have one implementation.  Maybe we'll add more services to this assembly some day, and maybe we won't.  To make a decision on that basis triggers my YAGNI reflex.  I used an IEnumerable resolver just for my own aesthetic sense, because the framework runner takes a collection anyway.

Next let's look at the service class itself.  This is another class that, in the template project, had a lot of noise that we identified as unnecessary.  So in this iteration, it's going to be comparatively short and sweet.
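(Roughly like this; the hard-coded name gets dealt with shortly.)

```csharp
using System.ServiceProcess;

public class GreeterService : ServiceBase
{
    public GreeterService()
    {
        ServiceName = "GreeterService";
    }

    protected override void OnStart(string[] args)
    {
    }

    protected override void OnStop()
    {
    }
}
```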


There's very little to see here, and that's very much as it should be, since our service doesn't actually do anything yet.  We just cut out the noise and were left with something trivially simple.  So let's move on to the last piece left to put in place: the service bootstrapper.
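(A sketch, mirroring the installer bootstrapper:)

```csharp
using Autofac;

public static class ServiceBootstrapper
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();

        builder.RegisterType<GreeterService>()
            .As<ServiceBase>()
            .InstancePerLifetimeScope();

        return builder.Build();
    }
}
```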


We're just blasting through code files here, as this one is another very simple one, with just one registration.  We'll add more, but for right now we have completely matched the functionality of the template project.

Before we put the project to bed for another week, though, there's a small refactoring we can make that will make a nice little illustration of the benefits of how we've organized the project.  You may have noticed that we have some hard-coded strings in the constructors of the service and the service installer.  And particularly, there's one that's intentionally repeated, since it has to be the same in both places.  We can eliminate the hard-coded string and make sure that the values in both locations always match, by using the setting injection pattern I described a few months ago.

With just a few lines of code we can set up the necessary setting dependency.
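(Something like this; the names follow the setting injection pattern, but are illustrative.)

```csharp
public interface IServiceNameSetting
{
    string Value { get; }
}

public class ServiceNameSetting : IServiceNameSetting
{
    public ServiceNameSetting(string value)
    {
        Value = value;
    }

    public string Value { get; private set; }
}
```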



With this class and interface in place, we can add a dependency to the constructors of the installer and the service.
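(Sketched constructors, one from each class:)

```csharp
// In GreeterService:
public GreeterService(IServiceNameSetting nameSetting)
{
    ServiceName = nameSetting.Value;
}

// In GreeterServiceInstaller:
public GreeterServiceInstaller(IServiceNameSetting nameSetting)
{
    ServiceName = nameSetting.Value;
    StartType = ServiceStartMode.Manual;
}
```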



And finally, in order to support the dependency injection, we just add one new registration to each of the bootstrappers.
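(The same registration, added to each bootstrapper's Build method:)

```csharp
builder.Register(c => new ServiceNameSetting("GreeterService"))
    .As<IServiceNameSetting>()
    .SingleInstance();
```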


That's the end of our work for this week.  We have replicated the template project's functionality, such as it is, and run through a lot of simple little snippets of code.  Structurally, the Solution Explorer doesn't look a ton different, but I think the code is simpler and more straightforward.  Plus we have built a good foundation on which to add the functionality that people expect of a service.  All the code for the project we worked on in this post can be found on GitHub at https://github.com/cammerman/turbulent-intellect/tree/master/HelloSvc.

Next week, we'll add logging to an event log, and we'll make it actually do some work.  Finally, we'll figure out how that work needs to get done, inline, or in one or more worker threads.

Anatomy of a Windows Service - Part 1


This post contains gist code samples. My experiments indicate that these do not display in Google Reader, so if you are reading this there, you may want to click through to the post.

I recently had to estimate the effort for creating a one-off Windows Service for massaging some document indexing data as it passed between a capture system and an ECM system. I was quite confident in my ability to code up the "massaging" part, but the service part was unfamiliar territory. So I decided to do some investigation and figure out just what it takes to get a service set up, what special considerations need to be made for logging and injecting configuration, and how to preserve good design sensibilities through it all.

The easiest way to create a Windows service is to use the associated project template in Visual Studio. I specifically wanted to work in .NET 4.0 for the investigation, as I would be on the actual project, but unfortunately I don't personally have a license for the full version of Visual Studio 2010, and VS Express 2010 doesn't have a project template for Windows services. Fortunately, I do have VS 2008, and the BCL infrastructure for services hasn't changed between 3.5 and 4. So I decided to generate what I could in VS 2008, add some googling, and use the combined information to create a service from scratch in VS Express.

This week, we'll analyze what VS will give us for free with the project template and special code-generating actions. We'll identify the purpose of the different pieces of code, and make note of how we can use this information to build our own service, complete with installers, from scratch.

So to start, let's take a look at the structure that the project template creates for us. Here's a shot of what the initial solution looks like in a brand new Windows Service project.

Note the two structural differences between this template project and a blank or console project. There's an extra reference to System.ServiceProcess.dll, which we'll need to remember, and there is an extra component class called Service1. There's also a .Designer.cs file for Service1, but this is really just because Visual Studio automatically adds these any time an item that is a subclass of Component is added to your project, either via a template, or the "Add New Item" command.

Before we take a look inside the service class itself, we should see how the Program class differs from other types of project templates. Here's the Program class.
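(Reconstructed from the VS 2008 template; it looks essentially like this:)

```csharp
static class Program
{
    static void Main()
    {
        ServiceBase[] ServicesToRun;
        ServicesToRun = new ServiceBase[]
        {
            new Service1()
        };
        ServiceBase.Run(ServicesToRun);
    }
}
```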

There are a few differences to note here. The biggest difference is obviously that the main method contains some generated code. It actually looks similar to what is generated with the new Windows Forms project template. We have the service class, representing the actual core of the program, being instantiated and then passed to a runner method provided by the framework that handles spinning up the appropriate context.

Now let's take a look at the service class. It comes in two parts: the main class file, and the designer file. First the main class file.
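(Essentially as generated by the template:)

```csharp
public partial class Service1 : ServiceBase
{
    public Service1()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
    }

    protected override void OnStop()
    {
    }
}
```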

The most prominent features here are the base class, ServiceBase, and the two override methods, OnStart and OnStop. ServiceBase is, of course, the common base class that all .NET Windows service classes must derive from, in order to take advantage of the framework support. The overrides are two of the possible entry points into the service. They correspond directly to the actions that you can take on a service in the Services control panel. OnStart is where you would place the code that processes the arguments, spins up workers, etc. Naturally OnStop is the other end of things, where you'd put your cleanup code to prepare for termination. There's also a call to InitializeComponent in the constructor, which is typical of generated Component-derived classes. We'll see if there's anything interesting happening there next, in the designer file.

There's a bunch of generated code here, but there's actually less going on. So what is all this doing? We can easily see an implementation of a Dispose idiom, which is disposing of any components contained inside this one. But when we look back at the InitializeComponent method, the Container is constructed and then left empty. The only significant code here is the assignment of the ServiceName property. We'll need to remember this for when we create our own service class from scratch. Everything else in this file could be dropped. The ServiceName initialization could just as easily be put directly into the constructor.

What we've seen so far is the absolute minimum service. If you put some behavior into the OnStart method, you could run from the command-line, and it would operate. But it wouldn't show up in the Services control panel, because it doesn't have an installer to register it. And that means you couldn't Stop it once it was started.

What does it take to register a service with the OS? VS offers a shortcut to generate the code for this as well. If you right-click the service class Service1.cs in the solution explorer, the context menu has an "Add Installer" option, which adds a new class to the solution, called ProjectInstaller. This is the last piece of our example puzzle, and it's another two-parter. First the base class file.
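(Essentially as generated:)

```csharp
[RunInstaller(true)]
public partial class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        InitializeComponent();
    }
}
```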

There's not much code here either. Like the service, we see this is a Component-derived class (by way of Installer), with an InitializeComponent method called in the constructor. We'll take a look inside there in a moment. First, make note of the attribute on the class. The RunInstaller attribute is a signpost for the InstallUtil.exe installer tool that comes with the .NET Framework, or for the VS Setup Package Custom Action Installer. The Boolean true argument tells the installers that they should indeed invoke this installer class when they find it in the assembly.

The attribute is important, but it's not the meat of the class. That's found in the designer file.
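(Reconstructed roughly from the generated designer file, trimmed slightly:)

```csharp
partial class ProjectInstaller
{
    private System.ComponentModel.IContainer components = null;
    private ServiceProcessInstaller serviceProcessInstaller1;
    private ServiceInstaller serviceInstaller1;

    protected override void Dispose(bool disposing)
    {
        if (disposing && (components != null))
        {
            components.Dispose();
        }
        base.Dispose(disposing);
    }

    private void InitializeComponent()
    {
        this.serviceProcessInstaller1 = new ServiceProcessInstaller();
        this.serviceInstaller1 = new ServiceInstaller();

        this.serviceProcessInstaller1.Password = null;
        this.serviceProcessInstaller1.Username = null;

        this.serviceInstaller1.ServiceName = "Service1";

        this.Installers.AddRange(new Installer[] {
            this.serviceProcessInstaller1,
            this.serviceInstaller1});
    }
}
```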

The form of this file is very similar to that of the service's designer file (and any designer file, really). Once again we see the disposal idiom, and once again, the container that it disposes is never populated with anything. The body of the InitializeComponent method is concerned with creating and initializing two more installer objects. One is a ServiceInstaller, which is a class that knows how to install a particular service in the registry and such, given a ServiceBase representing it. The ServiceProcessInstaller handles install tasks common to all services, and so there should be only one inside a given project installer.

The MSDN documentation doesn't go into any more detail about what the responsibility differences are between these two types of installer, or why they are peers, as far as the main Installer is concerned. But it is clear about how they should be used. The installer objects need to have properties initialized to indicate how the install should proceed, and then they must be placed in the Installers collection. This also could just as easily be done in the constructor, eliminating the need for the designer file.

One important thing the documentation highlights is that the ServiceName property on the ServiceInstaller must match the same property on the service which it is associated with, since this is the only thing that links the service registration with the actual implementation. We'll examine the other installation-guiding properties, on both objects, in detail when we put together our own service from scratch.

For now, we've covered everything in the project template. What we have here is a complete, spartan service that does nothing. It will install, simply by compiling the executable, and then running InstallUtil.exe from the .NET framework folder against it. And it will run from the service control panel... but it has no visible side-effects.

Next time, we'll get to work replicating the moving pieces of this solution in a spanking new, blank C# 4 project. It will look a little different, structurally, but all the prominent features will still be there.

P.S. All the code that appears in this series will be available on my github page. For now, I have it in this repo: https://github.com/cammerman/turbulent-intellect. The code for this post in particular is found here: https://github.com/cammerman/turbulent-intellect/tree/master/ServiceExample35. I expect to reorganize this a bit in the future, but I'll update this link when I do so.

New Domain!

Just a little off-schedule post today to alert anyone who didn't notice the redirect: I've changed the URL of the blog to be its own domain.  I purchased the domain a couple years ago, when Blogspot's support for custom domains was... primitive.  It's a lot better now and was a breeze to set up.  So here we are!  Blogspot says the old URL will continue to redirect to the new location, if you don't feel like changing.

Now back to normalcy. Expect a post on Tuesday as usual.

Glints of Profession

Recently I caught a few tweets from Patrick Welsh where he briefly mentioned pair-programming with his young daughter while she was home sick from school. In particular the second one referenced being "green", as in all tests passing. It struck me on a couple of different levels, even beyond the "awww" factor.

(Let me not understate the "awww" factor. Stories like this set anticipation brewing in me. I can't wait to be able to share my passion for programming with my own son or daughter. Hopefully she listens better than I did when my dad attempted to enact the rituals of fatherhood with me at his workbench, under the hood of the car, and in the kitchen. Sorry Dad! If I could do it again, I'd pay attention this time around.)

This brief snapshot of their interaction conjured visions of the apprentice at the elbow of the master craftsman, absorbing subtle waves of well-worn trade wisdom. We're talking about more than just "measure twice, cut once" type of stuff here. The equivalent of that might be "fail fast". Very good, very fundamental advice, to be sure. But testing and TDD are at least one level above that.

At the level of TDD, we're dealing with explicable, transferrable techniques. Not just muscle memory, but practices. Practices that, when followed, guide the hands of the initiate on the path toward skill, consistency, and quality. To me, this is a huge indicator of mature and professional behavior on the teacher's part. And such things are themselves markers of the continuing evolution of programming from simply a job to a profession.

Compare these types of hallmarks to the ad hoc, piecemeal, on-demand way that most programmers still learn the tricks of the trade. Most of us are still tossed into the job without mentorship to speak of. We're lucky if we've been taught theory in school, but this is incomplete knowledge at best. (This is like the difference between knowing how a car works, and being able to build one.) But a relevant degree is not the path most commonly arrived by for programmers anyway. Most programmers learn through the work alone, maybe supplemented by online information gleaned from Stack Overflow questions, or blog posts where people have documented their own unique travails. Lessons are gleaned through individual trial and error, with the occasional vicarious horror story of warning.

This story is still common, though happier alternatives are emerging. Many of us make quite a show of seeking to make programming a true profession. There's a Craftsmanship Manifesto. There are apprenticeships. There are actual, literal journeymen, as well as prolific, sought-out masters. But these are all bright trappings, and it is quite easy to lose track of substance.

Profession is not found in titles and roles, or documents of good intent. Profession is not even found in the practices themselves, for TDD does not a craftsman make. Profession is found in the fact that there ARE practices which are known to improve the quality and consistency of work. Profession is found in the wisdom that is passed on by mentors, in concert with the mechanical skills of the job. Profession is found in the common platform of discipline, integrity, and mutual education upon which quality work, worthy of pride, is built.

I haven't seen what happens in Obtiva's apprenticeships, or in Uncle Bob's classes, or when Corey Haines pairs with his hosts. I trust that they do bear forward, at least in part, the evolution of our chosen trade. But in the words and excitement of Patrick Welsh's little apprentice, I can see for myself the rough diamond of profession glinting in the light.

Throwaway Projects

A big obstacle in my learning of new technologies, patterns and methods, has been my reluctance to do throwaway projects. I have a really hard time with them. Always have, in all their many forms. At times I have even actively disdained them as a useful way to learn, for various reasons.

There have been a lot of things I've been curious about and wanted to learn. I'm a programmer, so that really goes without saying. And goodness knows that, relatively unencumbered from social obligations, as geeks tend to be, I've had enough free time in my life that I could have dedicated some of it to working on some little test or toy projects to get a handle on these things. But I've always managed to convince myself it wasn't necessary, or even that it was a waste of time. Usually I accomplish this by claiming that I learn more and better from doing "useful things". Throwaway projects don't impress real constraints on you, and so they let you take liberties or make compromises that wouldn't fly when trying to actually get something done. And I was convinced that this meant the knowledge gained would be less useful or applicable to real problems.

Today I had the revelation that I couldn't possibly have been more wrong.

I was faced very recently with a number of opportunities to work on some new things, to "get stuff done", involving some technologies I'm not really at all familiar with. I initially thought to myself that this is a great opportunity to finally ramp up on some things I've been wanting to learn. But after a bit more thinking, I found myself in one case completely unsure whether I could accomplish the goal on the necessary timeline, and in the others quite certain there was no way that I could.

That's when it hit me. I realized what teachers of all stripes have known for centuries. While I've learned an awful lot about programming on the job, the fundamental lessons that my knowledge is rooted in were very rarely learned in the active pursuit of an important functional goal.

Let's first acknowledge the obvious and ubiquitous example of homework. Homework is nothing but throwaway problems. I learned the value of homework (finally) when I moved from high school, where I breezed through on in-class osmosis, to college, where info came so fast and furious that I couldn't absorb it at the desk.

In grade school, I did rote program transcription in Apple Basic. In middle school I messed around for hours, daily, in QBasic on my parents' desktop. There I played with graphics and sounds. I poked and prodded at parts of the QBasic games that were packaged with DOS. In high school I moved on to trigonometry, large-number math, a text-based RPG in the vein of LORD, 3D wireframing, and even created a graphing calculator that would automatically scale to fit one cycle of a sinusoidal function.

One useful thing I did do in QBasic was to write a file chunker that would split a file into either a particular number of pieces, or pieces of a particular size. The goal of that was to more easily share games that wouldn't fit on a single floppy, or that would take several hours to transfer over a 14.4kbps modem. So, okay, that was functional, and I learned a bit about file access in QBasic from it, but nothing much. Until I started to learn C++ in college, when the very first thing I did was to port my file-splitter from QBasic. I did this knowing not only that it wasn't very useful, but that there were many freeware apps that did it better than mine. But I learned about the fundamentals of C++ syntax, style, and common pitfalls. It gave me a starting point and saved me from what would have otherwise been embarrassingly slow, trial-and-error progress on the first project of my internship.

What I missed was the now-obvious truth that being free to take liberties and make compromises that you wouldn't do in production is the perfect sandbox for making mistakes, which is a crucial part of the learning process. I had it just plain backwards.

So here I sit. Mind blown. Feeling like an idiot. I still hate doing throwaway projects, but now I know I can't afford not to. I just need to man up and fight that voice in my head that nags at me saying, "why are you writing the world's millionth blog engine when you could be working on real problems that are actually still in need of better solutions?" Because the truth is that before I can reinvent banking, for example, there are a few things I need to learn before I'll even know where to start. And the best way to do that is to play in the sandbox for a while.

Dependency Relationships and Injection Modes: Part 2

Last time, I wrote about primal dependencies and the constructor injection mode. I also mentioned that in Part 2 I would cover "the other mode". In contemplating this other mode, however, I came to the conclusion that there are in fact two other modes. I'll cover both of them here in Part 2, but first we'll tackle mutator injection and optional dependencies.

Mutator Injection

Let's get the terminology out of the way first. By mutator, I mean a property or method on a class or object that is used to change some bit of state in that class or object. By optional dependency, I simply mean one which is not required in order for the object to accomplish its primary purpose.

In most code you'll read, mutator injection is ubiquitous. However, I would argue that most are misused because the usage, the need, or both, do not fit the above description of an optional dependency. So what are people doing with dependency mutators that they shouldn't be? And what should they be doing instead? Most people use mutation when they are presented with an apparent hurdle to using another, preferable injection mode. Often this comes down to an inconvenient design choice external to the class that creates an environment where it is hard to do the right thing. The appropriate response is of course to fix the context, whenever possible.

Why You Should Prefer Alternatives

Often, a dependency mutator is used to handle post-construction initialization of a primal dependency, where the appropriate dependency instance is not known or available when the consumer is constructed. I would argue that this is very rarely an inevitable state of affairs, and is commonly due to a dependency cycle between two layers of the architecture. (I.e. an object in one layer depends on an object in the layer below it, and another object in that lower layer depends on an object in the first layer.)

Inter-layer dependency cycles are almost always a bad idea, and not just because they make it tough to avoid mutator injection. Maintaining a single direction of inter-layer knowledge also helps to prevent unnecessary coupling, and it aids in keeping each layer cohesive. But an extra benefit of fixing this situation is that it will typically allow for the primal dependency to be created or retrieved when it is needed for the constructor injection, rather than by the consumer or a manager object.

Why does this matter so much? Most simply, because mutator injection is a state change. And state changes are big potential points of failure. It's also well-established that it's easier to reason about immutable data structures. I've found this to be true in my own experience as well. It's easier to compose behavior out of pieces that exhibit immutability because it simplifies the interaction graph. Furthermore, when things go wrong fewer state changes means a shorter list of possible places where the bad state may have been triggered. So when designing my classes, I avoid mutators wherever I can, and that means preferring constructor injection and site injection over mutator injection whenever possible.

The other way that mutator injection is misused is in a situation I briefly mentioned in Part 1: when only a subset of the operations on a class actually make use of the dependency. Mutators are often seen as convenient in this situation because they cut down long constructor argument lists, which are seen as messy or complex. The fact that the object has at least a limited use without the dependency makes it possible to get away with setting it later.

In Part 1 I said that the situation itself is often due to a violation of the Single Responsibility Principle. It would be facile to cite this reason for all instances of mutator injection. Sometimes it's true and sometimes not. When it really isn't true, there are a couple of other options available before you have to resort to mutator injection.

The first option is again to drop back to constructor injection, but to do so with a sort of lazy proxy in its place. The lazy proxy exists for the sole purpose of deferring the resolution of the actual service dependency until it's needed. This is its Single Responsibility. It can then either provide an instance on demand, or it can be more transparent and just implement the same interface and delegate to the underlying object.
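Here's a minimal sketch of the transparent flavor; the IReportPrinter service is hypothetical:

```csharp
using System;

// Same interface as the real service; defers resolution of the
// actual implementation until first use.
public class LazyReportPrinter : IReportPrinter
{
    private readonly Func<IReportPrinter> _resolve;
    private IReportPrinter _actual;

    public LazyReportPrinter(Func<IReportPrinter> resolve)
    {
        _resolve = resolve;
    }

    private IReportPrinter Actual
    {
        get { return _actual ?? (_actual = _resolve()); }
    }

    public void Print(Report report)
    {
        Actual.Print(report);
    }
}
```

The consumer still takes its dependency in the constructor, so the requirement stays explicit, but the expensive resolution doesn't happen unless Print is actually called.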

In a static language like C#, this isn't always possible either. Sometimes there's no interface, and some of the members are not virtual, so they can't be overridden in a derived class. If you control the code of this object, I highly recommend refactoring to an interface to resolve this problem. But if you don't, it might be time to look at the third injection mode: site injection.

Temporal Dependencies and Site Injection

Site injection is a fancy name for a method argument. It describes the use of a service as an argument to the operation for which it will be used. Such a dependency, that is passed directly to an operation, and for which no reference is maintained after that operation completes, is called a "temporal dependency". The reference is fleeting, and not actually part of the state of the object that depends on it.
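A tiny illustration, with hypothetical types:

```csharp
public class InvoiceProcessor
{
    // The formatter is a temporal dependency: used for this one
    // operation, and never stored on the object.
    public string Render(Invoice invoice, ICurrencyFormatter formatter)
    {
        return formatter.Format(invoice.Total);
    }
}
```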

There are a couple consequences of this type of injection. First is that the calling object or objects become the primary dependents of the service. It may or may not be easier to manage the dependency there. Hopefully it is, but sometimes it isn't, especially if there are many different objects that may call on this operation.

The other consequence of site injection is that it can add noise to the interface/API. If there are multiple operations in the subset of the dependent object that make use of the service, the repeated passing of the service into the objects can feel like unnecessary overhead. Not performance-wise, but in terms of keeping the code clean and readable. Expression of intent is paramount here. If the extra arguments communicate the true relationship between these two pieces of your architecture, then it's not truly noise. Instead it's a signal to readers of what's going on, and who's doing what.

There are lots of places where site injection and temporal dependencies are very natural. So many that you probably use them without even thinking about it, and it's often the most appropriate thing to do.

Truly Optional Dependencies

That's probably enough about what isn't an optional dependency. So what is? I would argue very few things are. Usually an optional dependency crops up as something that can be null when you want to do nothing, or can have an instance provided when you want to do something. For example, one step of an algorithm template, or a logging or alert solution with a silent mode. Often, even these can be treated as primal dependencies from the consumer's perspective, with a bonus of avoiding null-check noise. Just provide a Null Object by default, and make a setter available to swap the instance out as necessary. If you can say with certainty that this is not appropriate or feasible, then you may have a truly optional dependency.
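A sketch of that arrangement, using a hypothetical alerting service:

```csharp
public interface IAlerter
{
    void Alert(string message);
}

// Null Object: satisfies the dependency while doing nothing,
// so consumers never need a null check.
public class SilentAlerter : IAlerter
{
    public void Alert(string message)
    {
        // Intentionally empty: this is "silent mode".
    }
}

public class HealthMonitor
{
    private IAlerter _alerter = new SilentAlerter();

    // The mutator exists only for this truly optional dependency.
    public IAlerter Alerter
    {
        get { return _alerter; }
        set { _alerter = value ?? new SilentAlerter(); }
    }
}
```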

A Good Start

I believe these three dependency relationships, and the injection modes associated with them, cover most of the scenarios that you'll encounter. And I've found that the advice I've included for normalizing to these modes has helped improve the design of my code in a very real and tangible way. But I'm certain my coverage isn't complete. There are surely more dependency relationships that I haven't identified here, especially if you dig into the nuances. I'd love to hear about other types of dependency relationships you've encountered if you're willing to share them in the comments.

Dependency Relationships and Injection Modes: Part 1

This week I'm going to continue my trend of adding to the cacophony of people talking about IoC and DI. There's a lot of talking, and a lot of it is noise. Not inaccuracy, mind you. Just an awful lot of shallow coverage, one step beyond assumptions and first principles, with not a lot of real analysis. So let me bring a bit of deliberate consideration and analysis to the table, and hopefully improve the signal to noise ratio.

What's different this week from the recent past is that I am specifically going to talk about DI for once, rather than IoC in general. The particular topic I want to address is something that I've seen addressed a few times, but usually in passing reference in the context of some other argument. And most often it's being both taken for granted and dismissed out of hand, by the respective debaters.

In most languages there are essentially two injection modes available: via a constructor argument, or via a mutator such as a property setter or method. I've found that each option is most appropriate in different situations. A constructor argument dependency expresses a relationship that is very different than that expressed by a mutator.

Conveniently, there are also two broad types of dependencies that map fairly well to these two modes. The first is sometimes termed a "primal dependency". (I'd love to credit the inventor of this term, but all I can find in a Google search is Jimmy Bogard, Tim Barcz, and Steve Harman all using the term as if it already existed.) The second type of relationship is most often called an "optional dependency", which is a much less evocative term, but a descriptive one nonetheless.

The relationship that is being evoked by the term "primal dependency" is one of raw, fundamental need. A primal dependency is one that must be satisfied in order for the class to even begin to do its duty. When a primal dependency is unfilled, the dependent object is essentially incomplete and nonfunctional. It's in an invalid state and can't properly be used... at least not if the SRP, or Single Responsibility Principle, is being obeyed.

The SRP is one reason why I advocate using constructor injection for primal dependencies. Constructor injection makes it explicit and unavoidable that a given dependency is a requirement for proper functioning of the object. If a dependency could be provided via a mutator without affecting the post-construction usefulness of the object, then really you have one or both of two situations. Either it's not really a primal dependency, or your object is responsible for more than one thing and this dependency is only required for some of those things. Using constructor injection avoids the possibility of this signal being hidden behind the guise of post-construction initialization.

When people oppose constructor injection, it's often on the claim that it puts an undue burden on the object performing the instantiation. While it's true that a burden is being transferred, I'd argue that it's not undue. Construction and initialization of an object into a fully valid state should be considered to be two parts of the same operation, the same responsibility. Furthermore, this responsibility is separate from the primary purpose for the existence of the object itself, and so it doesn't belong to the object itself. This is what DI is meant to resolve. Yet, the logic must be taken a step further.

If construction and initialization is truly a multi-step process pulling from different domains, and/or one which can't cleanly be handled in one place, then it's a responsibility that's complex enough to base an entire object around. In short, you should strongly consider setting up a factory or builder object to handle this task. The only extra burden this leaves is on the brain of the programmer who is unfamiliar with the pattern. But trust me: just like with any exercise, the burden gets lighter with repetition.
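A quick sketch of what such a builder might look like; all of the types here are hypothetical:

```csharp
// Owns the multi-step construction and initialization of a
// ReportSession, so neither the session nor its consumers have to.
public class ReportSessionBuilder
{
    private IDataSource _source;
    private IReportFormatter _formatter;

    public ReportSessionBuilder From(IDataSource source)
    {
        _source = source;
        return this;
    }

    public ReportSessionBuilder FormattedBy(IReportFormatter formatter)
    {
        _formatter = formatter;
        return this;
    }

    public ReportSession Build()
    {
        // The session emerges fully initialized: no post-construction
        // mutation required.
        return new ReportSession(_source, _formatter);
    }
}
```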

I have a confession to make: I had never actually used the builder pattern until I came to this revelation. I didn't really understand what the benefit was over a factory. And I'd only taken the most rudimentary advantage of factories, for implementing concrete implementations of abstract base classes. But now each is as clear, distinct, and obvious of purpose to me as a screwdriver and a hammer. You'll find they come in quite handy, and really do clean up the consuming code, once you acknowledge that construction and initialization should often be a proper object responsibility all on its own.

In Part 2 of this topic, I'll address optional dependencies and the mutator injection mode. Like today's topic, they're deceptively simple to describe. But there is just as much, if not more, nuance to be found in the manner and purpose of their implementation.

Data Plus Behavior: A Brief Taxonomy of Classes

When I was first learning about classes and object-oriented design in college, classes were explained to us cutely as "data plus behavior". This is oversimplified and inadequate in the same way as would be an explanation of the Cartesian Plane as "a horizontal axis plus a vertical axis". This neat package doesn't begin to encompass all that the Cartesian Plane is, especially compared with its constituent parts.

Through my own work and experience, I've found it useful to break classes down into four different categorizations that better express the continuum of data/behavior relationships. Each category serves a different class of needs, and each interacts with other classes and categories in consistent ways. The four categories, as I have identified them, are data, algorithms, stateful algorithms, and models. Let's take a look at each of these in turn. We'll go over the traits that qualify each, the roles they serve, and their typical relationships with other classes.

Data

Data classes are quite straightforward. They are "pure" data, with little to no behavior attached. Your typical "data transfer objects", primitives, structs, event argument classes, etc. fall into this category. There are few, if any, methods defined on this type of class. When a data class does have methods, they will typically be related to type conversion or projection, simple derived value calculation, or state change validation. However, methods offering complex mutability or interactions with other objects would push a class out of this category and toward the "model" category.

Classes in this category are most often composed completely of other data classes. When I was initially considering other qualifications, I thought immutability would be a big one, but I've found that's not necessarily true. Rather, it's dependency on services (especially stateful algorithms) that almost certainly indicates that a class is a model rather than a data class. Data classes very rarely have more than a passing dependency on external behavior, though they may have multiple dependencies on external state, made up primarily of other data classes. This makes sense, since the most common thing to do with algorithms, stateful or otherwise, is to compose them into more complex behavior.

Data classes are the quanta of information in your application. They will be passed around and between other classes, created and processed by them in various ways. They exist first and foremost to be handled. They are rarely seen as dependencies except indirectly via configuration layers, but rather show up most commonly as state, or as operational inputs.

Algorithms

An algorithm is a class that is essentially functional, at least in spirit. A strong indicator that you have an algorithm on your hands is when a development tool such as Resharper suggests that methods, or the class itself, be made static. This generally indicates a near-complete lack of state.

One common application of algorithm classes is as a replacement for what would be a method library in a language which allows loose methods. For example a string library, or a numeric processing library. In fact, data processing in general is one big problem space that lends itself to algorithm classes. Also, many of the methods seen on primitives in languages such as C# could very easily be extracted into algorithm classes.

Algorithms rarely have public properties and almost never have public setters. In general algorithm classes have the potential to be expressed even without private properties, i.e. as truly functional code. Usually this means handling data that would otherwise be stored in properties as arguments instead. But often the realities of memory and processing power limitations mean that this isn't actually possible. So complex algorithm classes do often have some private state.
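To make that concrete, here's a tiny illustrative algorithm class (the names are invented for the example):

```csharp
using System.Collections.Generic;
using System.Linq;

// Behavior only: no public state, and what little intermediate
// state exists is local to the method. Assumes a non-empty list.
public class MedianCalculator
{
    public double Median(IList<double> values)
    {
        var sorted = values.OrderBy(v => v).ToList();
        int mid = sorted.Count / 2;

        return sorted.Count % 2 == 1
            ? sorted[mid]
            : (sorted[mid - 1] + sorted[mid]) / 2.0;
    }
}
```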

Depending on the amount of internal state required for performant execution of the algorithm, such classes may be expressed as either static/singleton, or by instance. For algorithms requiring no internal state, I prefer a container-enforced singleton pattern (as described in my post on lifecycle control via IoC containers). Static classes encourage tight coupling, and are very difficult to sub out for testing, even with a powerful isolation tool such as Microsoft's Moles, Telerik's JustMock, or TypeMock Isolator. As such, I avoid statics like the plague.

Instanced algorithm classes are usually lightweight and treated as throw-away objects, repeatedly created and disposed. Some may also choose to implement them with a reset/clear mechanism to allow reuse of the same object for different inputs. However, given the memory and garbage collection power currently available, I view this as an optimization, and avoid it unless it provides clear, needed performance benefits. Otherwise, it simply adds complexity to the implementation and muddies the interface with methods that don't directly relate to its purpose.

Algorithm classes often have dependencies on other algorithms. Data dependencies, however, are usually just another form of argument for the algorithm. Something that affects the output of the work from one instance to another, but is likely constant within one instance. Usually this is data that simply isn't going to be available at the call site where the algorithm will be triggered. Environmental state is one example. Mode switches are another. Anything more complicated or mutable than this is an indicator that you might in reality have a stateful algorithm on your hands.

Stateful Algorithms

The simple explanation of a stateful algorithm is that it is a bundle of behavior with a limited reliance on internal or external state. It differs from a pure algorithm in that the internal state isn't necessarily directly related to the processing itself. The most common application of a stateful algorithm is a class whose primary purpose is to expose the behavior of a model, while internally managing and obscuring the state complexities of that model. The goal of this being to encapsulate in a service layer all the concerns that the other layers need not worry about explicitly.

IO services are common instances of these types of classes. Some examples include file streams, scanner services, or printer services. Any time a complex API is wrapped so as to expose a simple interface for bringing in or flushing out data in the natural format of the application, a stateful class is the most likely mechanism. Looking at the file stream example, the state involved might consist of a buffer, a handle to the endpoint of the output, and parameters regarding how the data should be accessed (such as the file access sharing mode).

Stateful algorithms may take dependencies on any other type of class, but most commonly on pure or other stateful algorithms. They are more likely to interact with data classes and models as arguments, or internally in the course of API wrapping.

Models

Models are a diverse, messy bunch. They're probably the closest to fitting the classical "data plus behavior" description. A class in this category will have intertwined state and behavior. The state will typically be of value without the behavior, but the behavior exists only in the context of the state. Most often, a model class comes about not because extracting the behavior and state into separate classes is impossible, but rather because it muddies the expressiveness of the design. Domain models, UI classes, and device APIs are the places where the model category of classes tend to serve best. Note, however, that these spaces also tend to attract convoluted coupling, inscrutable interfaces, unintuitive layering, and a host of other design pathologies.

These warts proliferate because it is difficult to make generalizations or rules about models. The foremost rule I keep in dealing with them is to try to avoid them. Not in the "model classes are considered harmful" sort of way. If you look at a program I've written, you won't find it devoid of model classes. In fact, they aren't even particularly rare. When I say I try to avoid them, all I really mean is that before I write a model class I make sure that the role I'm trying to fill isn't better addressed by some at least relatively simple combination of the other categories.

It's common for the behavior of a model class to depend on some private aspect of the state. Separating the behavior from the state would thus also mean dividing the state into public and private portions, to be referenced by the consumers of the model, and the model's services, respectively. This can be a nightmare to maintain, and the service implementor must often make a decision between downcasting a public data interface to a compound public/private one, or internally mapping public state objects or interfaces to their associated private state objects or interfaces. You see this type of thing crop up repeatedly in ORM solutions, where the otherwise internal state of the change and identity tracking is pulled out of the model and maintained by the ORM.

This separation is difficult and messy to make. If you find yourself facing this choice, you can be fairly certain that a model solution is at least an acceptable fit for the role. There are benefits to the separation, but often the cost of implementation is high, and should be deferred until necessary. But beware as you forge ahead with the consolidated model, because every model is different. I find that the cleanest way to incorporate a model into a project is to establish a layer in which that model and its service classes will live, and make heavy use of data transfer objects and projections to inter-operate with other layers.

A model can, and often does, take dependencies on any category of class. Because of the varied roles models can serve and the many ways that state and behavior may be mixed in them, it's very difficult to identify any patterns or rules as far as how dependencies typically arise and how they should be handled. Any such effort would likely hinge on further subdividing the categories into common roles and basing the rules and patterns at that level instead. I'll leave that as an exercise for the reader, for now. =)

A Guideline

As I've said, I find these categories to be a useful guideline in my own programming efforts. It's quite easy and natural to mix state and behavior together in almost every class you write, simply because it's convenient on the spot to have behavior neighboring the state it's relevant to. But it tends to be far easier to reason about the first three categories in the abstract than it is to reason about models. This is likely because the former tend to enable and encourage immutability, which minimizes and simplifies a whole host of problems, from concurrency to persistence to testing. For this reason I try to identify places that lend themselves well to the extraction of a data class or an algorithm class.
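
As a hypothetical sketch of the kind of extraction I mean: an immutable data class and a pure algorithm class, pulled out rather than fused into one mutable object. All names here are made up for illustration:

    // An immutable data class plus a pure algorithm class, extracted
    // instead of being folded into a single stateful "manager" object.
    public sealed class Money                       // data: immutable value
    {
        public decimal Amount { get; private set; }
        public string Currency { get; private set; }

        public Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }
    }

    public static class TaxMath                     // pure algorithm: no state
    {
        public static Money ApplyTax(Money net, decimal rate)
        {
            return new Money(net.Amount * (1 + rate), net.Currency);
        }
    }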

This strategy seems to make my code more composable, and my applications more discrete in terms of purpose and dependency, which in turn makes the divisions between the different layers and functional regions clearer and firmer. This not only discourages leakage of concerns, but also makes the design more digestible to the reader. And that is always a good thing, even if the reader is just yourself 6 months later.

What Your IoC Container Won't Tell You About Inversion of Control

As I've written about recently, the term "Inversion of Control" has come to be closely associated with "Dependency Injection". Indeed many people tend to use the two terms interchangeably. The reality is that DI is just one way of achieving one kind of limited IoC.

It's worth considering that something cannot be inverted without there being an original, un-inverted state. So what is that state, and why did it become so important to invert it? According to Martin Fowler's Bliki post on IoC, the state in question is which code controls the flow of the program within a particular scope. Fowler credits Ralph Johnson and Brian Foote's 1988 paper, Designing Reusable Classes, with originally identifying the boundaries between system layers, or between a framework and the consumer of that framework, as particularly valuable places to invert control.

The goal of the inversion, as stated by Fowler, Johnson, and Foote, is essentially to let a service remain in control of the pieces of its execution, rather than having it micromanaged from the outside, like a boss who insists on being kept "in the loop". I would say this is dramatically different from what IoC has come to mean in common parlance today.
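
The contrast is easier to see in code. In this hypothetical sketch, the commented-out version micromanages the service step by step; in the second, control is inverted and the service runs its own show, calling back out only at the hooks it chooses to offer:

    // Micromanaged: the caller owns the flow, step by step.
    //   var rows = importer.ReadRows(file);
    //   var valid = importer.Validate(rows);
    //   importer.Save(valid);

    // Inverted: the service owns the flow; the consumer just supplies hooks.
    using System;
    using System.Collections.Generic;

    public class Importer
    {
        private readonly Func<string, bool> isValidRow;   // hook supplied by consumer
        private readonly Action<string> onRowSaved;       // notification hook

        public Importer(Func<string, bool> isValidRow, Action<string> onRowSaved)
        {
            this.isValidRow = isValidRow;
            this.onRowSaved = onRowSaved;
        }

        public void Import(IEnumerable<string> rows)
        {
            foreach (var row in rows)
            {
                if (!isValidRow(row)) continue;   // "don't call us, we'll call you"
                Save(row);
                onRowSaved(row);
            }
        }

        private void Save(string row) { /* persistence elided */ }
    }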

IoC in its common usage has become divorced from the Hollywood Principle, which was about allowing a service to retain control of its own execution. Instead it has been married to DI, which is about a service divesting responsibility for ancillary concerns. The latter is equally important. But only equally. When they are brought together, they enhance each other and the resulting clarity of the design is more than the sum of its parts.

There is a duality here that is important to keep in mind when designing classes and interfaces. A duality of scope, a duality of perspective, a duality of responsibility, a duality of control. There is both give and take. By giving one thing, we take another. IoC is not just about giving up control and responsibility for choosing and creating dependency implementations. It's also about taking control and responsibility for the cohesion of the solution, the algorithm, and the facade that's offered by a service, a framework, or a layer.

It is far too easy, when one first delves into DI, especially aided by the power tool that is an IoC container, to forget this other side of things. We become satisfied that as long as we don't have a "new" in our constructor, we must be on the right track. But sooner or later you will end up in a situation where the discipline of DI becomes very difficult to maintain. Without the context of where DI came from, the limits of its purpose, and the parallel concerns that were meant to be addressed alongside it, the proper way to proceed won't be clear. You may realize that it's time to "break the rules", but you won't know which new rules then take over.

Speaking from experience, I can tell you that this way lies madness. Or at least messiness. Potentially maddening messiness. Far too many people simply shed the rules as they pass this boundary and revert to wild and woolly frontier-style programming. But the truth is that you aren't passing from civilization into the wilderness. You may be leaving one regime, but you're also entering another. And the guiding light of this new regime isn't just "whatever works". Most likely it's simply time to broaden your perspective beyond the issue of class dependencies and Separation of Concerns.

Start to look at the surface of the layer, model, framework, or module that the problem classes are a part of. Consider all of its responsibilities, dependencies, entry points, hooks, and outputs. If you are encountering problems making DI work, it's very possible that a mess has begun to accrete in one of these other aspects. If you step back to get the broader perspective, it may be easier to identify the problem as a flaw in the cohesion of the whole.

In particular, look at the ways your objects are working together, and the way the whole layer interacts with its consumers. A clear story should show up somewhere. Yes, you may have a hundred small classes running around taking care of little chunks of the work. But someone should be "conducting" these bits as they work together to accomplish the goal, and that someone should be chosen and designed deliberately.
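
A deliberate "conductor" can be as simple as this hypothetical sketch, which again borrows the Order from earlier. The small collaborators still do the work, but one class owns the story of how they fit together:

    // Hypothetical conductor: the checkout narrative is legible in one
    // place, while the actual work stays delegated to small collaborators.
    public interface IInventory { void Reserve(Order order); }
    public interface IPayments  { void Charge(Order order); }
    public interface INotifier  { void Confirm(Order order); }

    public class CheckoutService
    {
        private readonly IInventory inventory;
        private readonly IPayments payments;
        private readonly INotifier notifier;

        public CheckoutService(IInventory inventory, IPayments payments, INotifier notifier)
        {
            this.inventory = inventory;
            this.payments = payments;
            this.notifier = notifier;
        }

        public void Checkout(Order order)
        {
            inventory.Reserve(order);
            payments.Charge(order);
            notifier.Confirm(order);
        }
    }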

It's a common mistake to break responsibilities down into bits and then just toss them all together, with no clear roadmap emerging from the noise. This is a real problem, and the cause is a proliferation of delegation in the interest of decoupling, combined with a failure to take the reins of regional control that ensures cohesion. Weaving together the dual needs of decoupling and cohesion is what Inversion of Control is really about.

On the Dimensions of My Ignorance

Yesterday I posted what some may view as a rant, on how troublesome it is to get started with REST web services in .NET. It wasn't intended to be a rant so much as a cathartic walk through the research I had done and the decisions I had made, over several hours of trying to find an easy on-ramp to REST on .NET.

One of the things I mentioned was the fact that I downloaded the OpenRasta source, but was unable to build it due to a couple of compiler errors. I had been in loose contact with Sebastien Lambla, the author of OpenRasta, about this situation over Twitter. Yesterday he read my post and, not undeservedly, jabbed me about not trying to fix the bug myself, instead opting to waste time "touring frameworks" to find what I need.

I got defensive, of course. I felt like I had clearly conveyed my situation, and why I went "touring" rather than just settling on one framework and pushing through the hurdles. But looking back, I can see that the relevant information is scattered throughout the post, and a skimmer would likely see only that I had looked at, and whiningly dismissed, each option in turn. Which is factual, though not really complete.

So here I will reiterate that I am:
  1. A REST noob
  2. A web services noob
  3. An ASP.NET noob
  4. An OSS noob
  5. A Git noob

And that doesn't even consider Sinatra, which brings to bear the additional facts that I am:
  1. A Ruby noob
  2. A Linux noob

I know my limitations. Or rather, I know where my knowledge ends. The reason I went "touring" is because I have ideas and I want to start to make them real. I didn't want to go burrowing into some rabbit hole not knowing if it was even the right one. I want to find a framework that minimizes the number of dimensions of my ignorance that are likely to limit or undermine my ability to produce something minimally functional.

There are a lot of things I am experienced with and good at, but the list above consists of things that I don't know enough about to know when I am making a mess for myself. Without guidance, I don't even know where to start. When OpenRasta wouldn't build, I didn't even bother to look at the code, because I didn't want to get lost in the weeds due to my multiple dimensions of ignorance. It turned out to be something that I probably could have handled, so shame on me for not at least taking a closer look. I'll take responsibility for that and tuck the critique away for future growth potential.

It's tough enough to be a noob at one thing. But ignorance compounds. So it's downright painful and demoralizing to be on a path where, every time you hit a roadblock, you don't even know which of the multiple gaping holes in your experience is the culprit. It's far too easy to end up chasing down red herrings, or just plain making matters worse for yourself, in that situation. And I just want to get stuff done.

Fortunately for me, Sebastien says that the OpenRasta stable branch is actually stable now, so I can take another bash at that. I also learned just an hour after my last post went up that there are two more .NET REST frameworks to check out. RestPoint appears to be a young, attribute-based framework. And Nina claims to be a mature, Sinatra-inspired, lambda-based framework.

I'll admit that I really don't like attribute-based APIs, but compared to the challenges I'd face with all the other options I've looked at, I can probably stomach it. But I am a big fan of lambda-based APIs, and I'm still stoked about the Sinatra approach, so I'm pretty excited to take a look at Nina. It looks like it might just be what I was hoping Nancy would be.

And on a parallel track, I plan to start filling in at least one of my other knowledge gaps: Git. I have the PragProg book on Git, but I think I may also invest in the TekPub videos on Git to get started.