Going off the Reservation with Constructor Logic, Part 2


First I want to say thanks to Alex for the feedback on my previous post. I did eventually manage to clean up my design quite a bit, but couldn't really put my finger on why it was so hard. After reading Alex's comment it occurred to me that I hadn't really thought very hard about where the layer boundaries are. I had (and maybe still have) my persistence, my service, and my infrastructure layers somewhat intertwined. I'm ashamed to have been bitten by this, considering how much of a proponent of intentional design and layering I've been. But I was on a spike to tackle some unfamiliar territory, as well as fighting with a bit of an uncooperative helper framework, so it seems I bit off more than I could chew.

The problem I was trying to deal with was a multi-threaded service, with some threads producing data, and others consuming it to store to a DB. Between device sessions, threads, DB sessions, and domain transactions, I had a lot of "resources" and "lifetimes" that needed very careful managing. It seemed like the perfect opportunity to test the limits of the pattern I've been fond of lately, of creating objects whose primary responsibility is managing lifetime. All services which operate within the context of such a lifetime must then take an instance of the lifetime object.  This is all well and good, until you start confusing the context scoping rules.

Below is a sample of my code. I'd have to post the whole project to really give a picture of the concentric scopes, and that would be at least overwhelming and at worst a violation of my company's IP policies. So this contains just the skeleton of the persistence bits.
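
In outline, the skeleton looks something like the following sketch. The names and members here are illustrative stand-ins rather than the real classes, which carry a lot more baggage:

public class UnitOfWork : IDisposable
{
    public void Commit() { /* flush pending changes to the store */ }
    public void Dispose() { /* discard anything left uncommitted */ }
}

public class SessionProvider
{
    private readonly UnitOfWork _unitOfWork;
    public SessionProvider(UnitOfWork unitOfWork) { _unitOfWork = unitOfWork; }
    // opens and caches the DB session associated with this unit of work
}

public class DataContext
{
    private readonly SessionProvider _sessionProvider;
    public DataContext(SessionProvider sessionProvider) { _sessionProvider = sessionProvider; }
}

public class Repository
{
    private readonly DataContext _context;
    public Repository(DataContext context) { _context = context; }
    public void Save(object entity) { /* stage the entity in the context's session */ }
}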



The important part of this picture is the following dependency chain (from dependent to dependency): Repository->DataContext->SessionProvider->UnitOfWork. What this indicates is that before you can get a Repository object, you must have a Unit of Work first. But it's common in our web apps to basically just make the UoW a per-request singleton, created at the beginning of a request and disposed at the end. Whereas in a service or desktop app the most obvious/direct implementation path is to have different transactions occurring all over the place, with no such implicit static state to indicate where one ends and the next begins. You get a common workflow that looks like this: create UoW, save data to Repo, commit UoW. This doesn't work nicely with a naive IoC setup, because the UoW is now a per-dependency object, created after the Repo, which already has its own UoW baked into it. The changes land in the Repo's UoW, but it's the explicitly created one that gets committed. So either the UoW must be created at a higher scope, or the Repo must be created at a lower scope, such as via a separate factory taking a UoW as an argument. I'm not super fond of this because it causes the UoW to sort of vanish from the code where it is most relevant.

One final option is that the dependency chain must change. In my case, I realized that what I really had was an implicit Domain Transaction or Domain Command that wasn't following the pattern of having an object to manage lifetime scope. The fact that this transaction scope was entirely implicit, and its state split across multiple levels of the existing scope hierarchy, was one of the things that was causing me so much confusion. So I solved the issue by setting up my IoC to allow one UoW and one Repo per "lifetime scope" (Autofac's term for child container). Then I created a new domain command object for the operation which would take those two as dependencies. Finally the key: I configured Autofac to spawn a new "lifetime scope" each time one of these domain command objects is created.
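
In Autofac terms, the wiring looks roughly like this. It's a sketch with hypothetical type names (StoreReadingsCommand stands in for my real domain command), not the production configuration:

using Autofac;
using Autofac.Features.OwnedInstances;

var builder = new ContainerBuilder();
builder.RegisterType<UnitOfWork>().InstancePerLifetimeScope();
builder.RegisterType<SessionProvider>().InstancePerLifetimeScope();
builder.RegisterType<DataContext>().InstancePerLifetimeScope();
builder.RegisterType<Repository>().InstancePerLifetimeScope();
builder.RegisterType<StoreReadingsCommand>();  // the domain command
var container = builder.Build();

// Resolving Owned<T> gives the command its own child lifetime scope, so the
// command, its repository, and its unit of work are created and disposed together.
using (var command = container.Resolve<Owned<StoreReadingsCommand>>())
{
    command.Value.Execute();
}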

Again this illustration is a bit simplified, because I was also working with context objects for threads and device sessions, all nested in a very particular way to ensure that data flows properly at precise timings. Building disposable scopes upon disposable scopes upon disposable scopes is what got me confused.

I have to say that at this point, I feel like the composability of the model is good, now that I've got layer responsibilities separated. But it feels like a lot is still too implicit. There is behavior that has moved from being explicit, if complex, code within one object, to being emergent from the composition of multiple objects. I'm not convinced yet that this is an inherently bad thing, but one thing is for certain: it's not going to be covered properly with just unit tests. If this is a pattern I want to continue to follow, I need to get serious about behavioral/integration testing.

Going off the Reservation with Constructor Logic

I'm noticing a strange thing happening as I make classes more composable and immutable. Especially as I allow object lifetime to carry more meaning, in the form of context or transaction objects. This shift has made object initialization involve a lot of "real" computation, querying, etc. Combined with a desire to avoid explicit factories where possible, to capitalize on the power of my IoC container, this has led to an accumulation of logic in and around constructors.

I'm not sure how I feel about so much logic in constructors. I remember being told, I can't remember when or where, that constructors should not contain complex logic. And while I wouldn't call this complex, it certainly is crucial, core logic, such as querying the database or spinning up threads. It feels like maybe the wrong place for the work to happen, but I can't put my finger on why.
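
To illustrate with a hypothetical class, this is the kind of constructor I mean. Nothing exotic, but definitely more than wiring up fields:

using System.Threading;

public class DeviceSession
{
    private readonly Thread _pollingThread;

    public DeviceSession(IDeviceSettingsRepository settings)  // hypothetical dependency
    {
        // "Real" work happening at construction time: a query and a thread spin-up.
        var config = settings.GetActiveConfiguration();

        _pollingThread = new Thread(() => Poll(config)) { IsBackground = true };
        _pollingThread.Start();
    }

    private void Poll(object config) { /* read from the device until told to stop */ }
}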

It's important to me to be deliberate, and I do have a definite reason for heading down this path. So I don't want to turn away from it without a definite reason. Plus, pulling the logic out into a factory almost certainly means writing a lot more code, as it steps in between my constructors and the IoC container, requiring manual pass-through of dependencies.

I'm not certain what the alternative is. I like the idea of object lifetimes being meaningful and bestowing context to the things below them in the object graph. Especially if it can be done via child containers and lifetime scoping as supported by the IoC container. But it is really causing me some headaches right now.

My guess at this point is that I am doing too much "asking" in my constructors, and not enough telling. I've done this because to do the telling in a factory would mean I need to explicitly provide some of the constructor parameters. But I don't want to lose the IoC's help for providing the remainder. So I think I need to start figuring out what my IoC can really do as far as currying the constructor parameters not involved in the logic, so that I can have the best of both worlds.
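
Autofac can actually help with that currying through its delegate factory support: if I resolve a Func<> whose arguments cover the call-site data, the container fills in the remaining constructor parameters itself. A rough sketch, again with hypothetical types:

public class ExportJob
{
    public ExportJob(string outputPath, Repository repository) { /* ... */ }
    public void Run() { /* ... */ }
}

// The string argument is supplied at the call site; the Repository is resolved by the container.
var createJob = container.Resolve<Func<string, ExportJob>>();
var job = createJob(@"C:\exports\readings.csv");
job.Run();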

Maybe it will bite me in the end and I'll pull back to a more traditional, less composable strategy. Maybe the headaches are a necessary consequence of my strategy. But I think it's worth trying. At least if I fail I will know exactly why not to do this, and I can move on to wholeheartedly looking for alternatives.

Thoughts, suggestions, criticisms? Condemnations?

Mindfulness

You are faced with an unfamiliar coding problem. You know what you're "supposed to do"--use this pattern, follow that convention--but you don't know why. And the guidance seems ever so slightly... off. The context is just a bit different. The constraints are a bit tighter here. A bit looser there. The peg doesn't quite fit in the hole, but everyone is telling you it should.

Everyone is telling you "this is how you do it" or "this is how we've always done it". You think you can follow the rule, but you don't think it will be pretty. What do you do?

I have witnessed many people in this situation make their choice. Most commonly they do whatever it takes to follow the rule. If the peg doesn't fit, they pound it till it's the right shape. If the cloth won't cover, they'll stretch, fold, and tear creatively until they can make a seam. Somewhat less often, they just toss out the advice and do it the way they are comfortable with. Often that just means "hacking out" an ad hoc solution.

In either case, the person learns nothing. They will repeat their struggle the next time they face a situation that is ever so slightly different. They will sweat, stress, and hack again. Over and over, until someone shows them a new rule to follow.

As students we are taught to learn first by rote, then by rule. If we are lucky, we are eventually tested on whether we have learned to think. And most commonly we manage to slog it out and do well enough to pass without actually being required to think. It is so very easy to become comfortable with this model of learning. A particular someone, vested with the responsibility of fertilizing our minds and nurturing the growth of understanding, knows the answers. We don't understand, but someone can at least tell us when we are right or wrong, and often that's enough.

We settle into dependence. And by settling, we establish roadblocks in our path before we even set foot on the road. By settling, we refuse to take ownership of our own knowledge. We put ourselves at the mercy of those who know more and are willing to share of their time and understanding to help us overcome obstacles of our own creation.

This situation is not inevitable. You can avoid dooming yourself to toil under it with a very simple determination. But make no mistake, this simple determination will require determination. When you face the prospect of doing something that you do not understand, stop, take note, and ask "Why?" Refuse to continue on with any course of action until you know why you are doing it.

I'm not talking about questioning authority here (though that's good too). I am advocating understanding. If you think there's any chance you may have to face a similar situation again, then as a professional developer it behooves you to understand what you're doing this time, and why. This prepares you firstly to defend your actions, and secondly to tackle similar but different problems in the future.

By recognizing the reasoning behind a particular prescribed course of action, when you encounter a similar situation in the future you will be able to identify the subset of the problem that is subject to the prescription. Seeing this allows you to conceptually distinguish that part of the problem from the rest. From this vantage point you can decide whether the remainder is just noise that has to be accommodated, or something more significant. You will be able to start to consider whether there is another layer or dimension to the problem which might be better served by a different or additional pattern. You will be able to think, intelligently, and intentionally, about the problem both as a whole, and in part.

Lack of mindfulness is the scourge of intellectual pursuits (and a great many other things in life). Whether in programming, in health, in investment, etc., it binds you to the service of rules and systems. It puts you at the mercy of those who have understanding, and under the thumb of those who own the systems. Benevolent or otherwise, do you really want your own success and satisfaction to come at the whim of someone else, for no other reason than that you couldn't be bothered to put in the effort to understand? Do you want to spend your career tromping the same grounds over and over again, never coming to any familiarity or understanding of the landscape?

Always ask, "Why?" Then take the time to understand. Always be deliberate and intentional about applying any solution. Don't just follow the directions of the crowd, or some authority. You're not a patient taking orders from your doctor. This is your domain. Own your knowledge. Your future self will thank you.

YAGNI Abuse

Have you ever proposed a code change or a course of action in a project for the purpose of improving the stability and maintainability of the code base, only to have someone dispute the need on the basis of YAGNI? I was flummoxed the first time this happened to me. Since then I've learned that it's not at all rare, and in fact may even be common.

The YAGNI principle is a wonderful thing. Used properly, it can have a huge beneficial impact on your productivity, your schedule, and on the maintainability of your product. But like so many other important ideas in the history of software development, YAGNI has become a poorly understood and misused victim of its fame. Through constant abuse it has become difficult to communicate the sentiment that it was intended for without a thorough explanation. And I can't count the number of times I've heard YAGNI cited in a completely incorrect or even dangerous way.

The term "YAGNI" has fallen prey to a similar disease as "agile". People often invoke it as an excuse not to do something that they don't want to do. Unfortunately, this quite often includes things that they

should

do. Things that have long constituted good design, and good software engineering practice. A few examples of things that I have personally been horrified to hear disputed on YAGNI grounds include:

These are all activities that are strongly valued and diligently practiced in the most productive, successful, and small-A-agile software development organizations and communities. For myself and many of you out there, it's patently obvious that this is a subversion and abuse of the YAGNI strategy. Your first instinct in response to this kind of misuse is to say, with no little conviction, "that's not what YAGNI means."

This, of course, will not convince anyone who has actually attempted to use the YAGNI defense to avoid good engineering practices. But to refute them, you needn't rely solely on the forceful recitation of the principle as they do. Fortunately for us, YAGNI is not an elemental, indivisible tenet of software engineering. It did not spring fully-formed from Ron Jeffries' head. Rather it is based in experience, observation, and analysis.

It is clear from reading Jeffries' post on the XP page that the key to sensible use of the YAGNI principle is remembering that it tells you not to add something that you think you might or probably will need, or even certainly will need, in the future. YAGNI is a response to the urge to add complexity that is not bringing you closer to the immediate goal. Particularly common instances of true YAGNI center on features that haven't been identified as either crucial or wanted, such as configurability, alternative logging mechanisms, or remote notifications.

Looking at my original list, it is clear that none of these things truly add complexity in this way. A very naive metric of complexity such as "number of code entities" may seem to indicate the opposite. But these are actually all established and reliable methods for controlling complexity. What these techniques all have in common is that they restrict the ways in which parts of your program are allowed to interact with other parts of your program. The interaction graph is the most dangerous place for complexity to manifest in a program, because it compounds the difficulty of changing any one part of the application without affecting the rest of it like a row of dominoes. The practices I identified above, which are so often refuted as "adding complexity", are some of the many ways to guide your application toward a sparse, well-ordered interaction graph, and away from a dense, tangled one.

There is a multitude of practices, patterns, and design principles that help keep your modules small, their scopes limited, and their boundaries well-defined. YAGNI is one of them, but not the only one. Claiming YAGNI to avoid this kind of work is "not even wrong". Not only are you gonna need it, but you do need it, right from day one. Working without these tools is seeding the ground of your project with the thorns and weeds of complexity. They provide you with a way to keep your code garden weed-free. In this way they are kin to YAGNI, not its enemy. Claiming otherwise reveals either a disrespect for, or a lack of understanding of, the benefits of good design and engineering practices in a general sense. So next time someone sets up this contradiction in front of you, don't let them get away with it. Show your knowledge, and stand up for quality and craft.

Data Plus Behavior: A Brief Taxonomy of Classes

When I was first learning about classes and object-oriented design in college, classes were explained to us cutely as "data plus behavior". This is oversimplified and inadequate in the same way as would be an explanation of the Cartesian Plane as "a horizontal axis plus a vertical axis". This neat package doesn't begin to encompass all that the Cartesian Plane is, especially compared with its constituent parts.

Through my own work and experience, I've found it useful to break classes down into four different categorizations that better express the continuum of data/behavior relationships. Each category serves a different class of needs, and each interacts with other classes and categories in consistent ways. The four categories, as I have identified them, are data, algorithms, stateful algorithms, and models. Let's take a look at each of these in turn. We'll go over the traits that qualify each, the roles they serve, and their typical relationships with other classes.

Data

Data classes are quite straightforward. They are "pure" data, with little to no behavior attached. Your typical "data transfer objects", primitives, structs, event argument classes, etc. fall into this category. There are few, if any, methods defined on this type of class. When a data class does have methods, they will typically be related to type conversion or projection, simple derived value calculation, or state change validation. However, methods offering complex mutability or interactions with other objects would push a class out of this category and toward the "model" category.
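
A small, hypothetical example of the kind of class I mean:

public class Measurement
{
    public DateTime TakenAt { get; set; }
    public double Celsius { get; set; }

    // A simple derived value is fine here; complex mutation or interaction
    // with services would push this toward the "model" category.
    public double Fahrenheit
    {
        get { return Celsius * 9.0 / 5.0 + 32.0; }
    }
}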

Classes in this category are most often composed completely of other data classes. When I was initially considering other qualifications, I thought immutability would be a big one, but I've found that's not necessarily true. Rather, it's dependency on services (especially stateful algorithms) that almost certainly indicates that a class is a model rather than a data class. Data classes very rarely have more than a passing dependency on external behavior, though they may have multiple dependencies on external state, made primarily of other data classes. This makes sense, since the most common thing to do with algorithms, stateful or otherwise, is to compose them into more complex behavior.

Data classes are the quanta of information in your application. They will be passed around and between other classes, created and processed by them in various ways. They exist first and foremost to be handled. They are rarely seen as dependencies except indirectly via configuration layers, but rather show up most commonly as state, or as operational inputs.

Algorithms

An algorithm is a class that is essentially functional, at least in spirit. A strong indicator that you have an algorithm on your hands is when a development tool such as ReSharper suggests that methods, or the class itself, be made static. This generally indicates a near-complete lack of state.

One common application of algorithm classes is as a replacement for what would be a method library in a language which allows loose methods. For example a string library, or a numeric processing library. In fact, data processing in general is one big problem space that lends itself to algorithm classes. Also, many of the methods seen on primitives in languages such as C# could very easily be extracted into algorithm classes.
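
For instance, something like this hypothetical formatter is pure algorithm; it carries no state worth speaking of:

public class NameFormatter
{
    // The output is purely a function of the inputs; a tool like ReSharper
    // would happily suggest making this static.
    public string ToDisplayName(string firstName, string lastName)
    {
        if (string.IsNullOrEmpty(lastName))
            return firstName ?? string.Empty;

        return string.Format("{0}, {1}", lastName, firstName);
    }
}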

Algorithms rarely have public properties and almost never have public setters. In general algorithm classes have the potential to be expressed even without private properties, i.e. as truly functional code. Usually this means handling data that would otherwise be stored in properties as arguments instead. But often the realities of memory and processing power limitations mean that this isn't actually possible. So complex algorithm classes do often have some private state.

Depending on the amount of internal state required for performant execution of the algorithm, such classes may be expressed as either static/singleton, or by instance. For algorithms requiring no internal state, I prefer a container-enforced singleton pattern (as described in my post on lifecycle control via IoC containers). Static classes encourage tight coupling, and are very difficult to sub out for testing, even with a powerful isolation tool such as Microsoft's Moles, Telerik's JustMock, or TypeMock Isolator. As such, I avoid statics like the plague.

Instanced algorithm classes are usually lightweight and treated as throw-away objects, repeatedly created and disposed. Some may also choose to implement them with a reset/clear mechanism to allow reuse of the same object for different inputs. However, given the memory and garbage collection power currently available, I view this as an optimization, and avoid it unless it provides clear, needed performance benefits. Otherwise, it simply adds complexity to the implementation and muddies the interface with methods that don't directly relate to its purpose.

Algorithm classes often have dependencies on other algorithms. Data dependencies, however, are usually just another form of argument for the algorithm. Something that affects the output of the work from one instance to another, but is likely constant within one instance. Usually this is data that simply isn't going to be available at the call site where the algorithm will be triggered. Environmental state is one example. Mode switches are another. Anything more complicated or mutable than this is an indicator that you might in reality have a stateful algorithm on your hands.

Stateful Algorithms

The simple explanation of a stateful algorithm is that it is a bundle of behavior with a limited reliance on internal or external state. It differs from a pure algorithm in that the internal state isn't necessarily directly related to the processing itself. The most common application of a stateful algorithm is a class whose primary purpose is to expose the behavior of a model, while internally managing and obscuring the state complexities of that model. The goal of this being to encapsulate in a service layer all the concerns that the other layers need not worry about explicitly.

IO services are common instances of these types of classes. Some examples include file streams, scanner services, or printer services. Any time a complex API is wrapped so as to expose a simple interface for bringing in or flushing out data in the natural format of the application, a stateful class is the most likely mechanism. Looking at the file stream example, the state involved might consist of a buffer, a handle to the endpoint of the output, and parameters regarding how the data should be accessed (such as the file access sharing mode).
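
As a rough sketch of the shape such a wrapper takes, here is a hypothetical buffered writer (not a real framework class):

using System;
using System.Collections.Generic;
using System.IO;

public class BufferedLineWriter : IDisposable
{
    private readonly StreamWriter _writer;                        // handle to the endpoint
    private readonly List<string> _buffer = new List<string>();   // internal state

    public BufferedLineWriter(string path)
    {
        _writer = new StreamWriter(path, true);
    }

    // Callers hand over data in the application's natural format...
    public void Write(string line)
    {
        _buffer.Add(line);
        if (_buffer.Count >= 100)
            Flush();
    }

    // ...while buffering and handle management stay hidden in here.
    public void Flush()
    {
        foreach (var line in _buffer)
            _writer.WriteLine(line);
        _writer.Flush();
        _buffer.Clear();
    }

    public void Dispose()
    {
        Flush();
        _writer.Dispose();
    }
}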

Stateful algorithms may take dependencies on any other type of class, but most commonly on pure or other stateful algorithms. Data classes and models, by contrast, are more likely to show up as arguments, or to be handled internally in the course of API wrapping.

Models

Models are a diverse, messy bunch. They're probably the closest to fitting the classical "data plus behavior" description. A class in this category will have intertwined state and behavior. The state will typically be of value without the behavior, but the behavior exists only in the context of the state. Most often, a model class comes about not because extracting the behavior and state into separate classes is impossible, but rather because it muddies the expressiveness of the design. Domain models, UI classes, and device APIs are the places where the model category of classes tend to serve best. Note, however, that these spaces also tend to attract convoluted coupling, inscrutable interfaces, unintuitive layering, and a host of other design pathologies.
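
A tiny, hypothetical example of that intertwining:

using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    private readonly List<decimal> _lineAmounts = new List<decimal>();

    public bool IsSubmitted { get; private set; }
    public decimal Total { get { return _lineAmounts.Sum(); } }

    // The behavior only makes sense in terms of the private state, and the
    // state's invariants only hold because the behavior guards them.
    public void AddLine(decimal amount)
    {
        if (IsSubmitted)
            throw new InvalidOperationException("The order has already been submitted.");
        _lineAmounts.Add(amount);
    }

    public void Submit()
    {
        if (_lineAmounts.Count == 0)
            throw new InvalidOperationException("There is nothing to submit.");
        IsSubmitted = true;
    }
}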

These warts proliferate because it is difficult to make generalizations or rules about models. The foremost rule I keep in dealing with them is to try to avoid them. Not in the "model classes are considered harmful" sort of way. If you look at a program I've written, you won't find it devoid of model classes. In fact, they aren't even particularly rare. When I say I try to avoid them, all I really mean is that before I write a model class I make sure that the role I'm trying to fill isn't better addressed by some at least relatively simple combination of the other categories.

It's common for the behavior of a model class to depend on some private aspect of the state. Separating the behavior from the state would thus also mean dividing the state into public and private portions, to be referenced by the consumers of the model, and the model's services, respectively. This can be a nightmare to maintain, and the service implementor must often make a decision between downcasting a public data interface to a compound public/private one, or internally mapping public state objects or interfaces to their associated private state objects or interfaces. You see this type of thing crop up repeatedly in ORM solutions, where the otherwise internal state of the change and identity tracking is pulled out of the model and maintained by the ORM.

This separation is difficult and messy to make. If you find yourself facing this choice, you can be fairly certain that a model solution is at least an acceptable fit for the role. There are benefits to the separation, but often the cost of implementation is high, and should be deferred until necessary. But beware as you forge ahead with the consolidated model, because every model is different. I find that the cleanest way to incorporate a model into a project is to establish a layer in which that model and its service classes will live, and make heavy use of data transfer objects and projections to inter-operate with other layers.

A model can, and often does take dependencies on any category of class. Because of the varied roles models can serve and the many ways that state and behavior may be mixed in them, it's very difficult to identify any patterns or rules as far as how dependencies typically arise and how they should be handled. Any such effort would likely hinge on further subdividing the categories into common roles and basing the rules and patterns at that level instead. I'll leave that as an exercise for the reader, for now. =)

A Guideline

As I've said, I find these categories to be a useful guideline in my own programming efforts. It's quite easy and natural to mix state and behavior together in most every class you write, simply because it's convenient on the spot to have behavior neighboring the state it's relevant to. But, it tends to be far easier to reason about the first three categories in the abstract than it is to do so about models. This is likely because the former tend to enable and encourage immutability, which minimizes and simplifies a whole host of problems from concurrency, to persistence, to testing. For this reason I try to identify places that lend themselves well to the extraction of a data class or an algorithm class.

This strategy seems to make my code more composable, and my applications more discrete in terms of purpose and dependency. Which in turn makes the divisions between the different layers and functionality regions more clear and firm. This not only discourages leakage of concerns, but also makes the design more digestible to the reader. And that is always a good thing, even if the reader is just yourself 6 months later.

Community Service

As I see it, programming questions tend to take one of two forms:
  1. How do I do X?
  2. How do I do X in a clean, maintainable, and elegant way?
Matt Gemmell has a great response to questions of the first form. I have nothing to add to his excellent post on that topic.

But I think it's crucially important to make a distinction between the two forms. The second form is a far more difficult question, asked far less commonly, and to which good answers are even rarer. When questions of this form are asked in forums, people are usually given a link to a page describing a pattern, or if they are really "lucky" a page with a sample implementation of that pattern. To be fair, patterns usually are the answer to these types of questions. But what we find written up on the web, or even in books, is most commonly pathetically oversimplified, without context, and often even without guidance on what support patterns are necessary to obtain the benefit or when alternatives may be preferable.

Essentially, most developers are left with no choice but to apply Matt's answer to form 1, to questions of form 2, in a much less information-rich environment. I contend that, while it may be one of those proverbial activities that "build character", it is ultimately more likely to be harmful to their immediate productivity--possibly even to their continued professional growth.

What we end up with is a pandemic of developers trying to hack out pattern implementations, being discouraged by the fact that the pattern seems to have no accounting for any possible deviations or complications in the form of the problem. Worse, developers are often dismayed to find that the one pattern they were told to use is merely the tip of a huge iceberg of support patterns without which the first may actually be more problematic than an ad hoc solution. Most often the developer in this position will end up determining that their deadlines will never allow them to go through the painful trial and error process on every one of these patterns, and accordingly drop back to the ad hoc solution.

It's time that we acknowledge that software development, whether you consider it an engineering discipline, an art, or a craft, has a history--albeit a short one. Things have been tried. Some work, some don't. There do exist "solved problems" in our problem space. To say that every developer should try and fail at all these efforts on their own ignores and devalues the collective experience of our community. Worse, it stunts the growth of the software development industry as a whole.

Yes, one can learn these things by trial and error. Yes, the understanding gained in this way is deeper and stronger than that gained initially by being tutored on how to apply the solution. And yes, there's a certain pride that comes with getting things done in this way. But this is not scalable. Putting each person forcibly through this crucible is not a sustainable strategy for creating experienced, productive, wise programmers. Every hour spent grappling in isolation with a question of "why won't this pattern do what I've been told it will" is an hour that could be spent creating functionality, or heaven forbid solving a new problem.

That is why those of us who have managed to obtain understanding of these "solved problems" must be willing to shoulder the responsibility of mentoring the less experienced. Being willing to explain, willing to discuss specifics, mutations, deviations, exceptions. Willing to discuss process, mindset, methodology. These are the things that make the distinction between a programmer and a software developer, between a software developer and a software engineer.

The internet may very well not be the appropriate place to seek or provide this type of mentoring. I suspect it's not. And unfortunately, there are too many development teams out there, buried in companies whose core competencies are not software, consisting solely of these discouraged developers and lacking an experienced anchor, or even a compass. There are online communities that attempt to address the problem at least partially. ALT.NET is one such, for .NET technologies. But there really is no substitute for direct, personal mentorship.

So I would encourage young developers out there to seek out more experienced developers, and ask them the tough, form 2 questions. And I would even more strongly encourage experienced developers to keep a watchful eye for those in need of such guidance. Maybe even consider proactively forming a local group and seeking out recruits. Be willing, able, and happy to provide this guidance, because it benefits all of us. Every developer you aid is one less developer creating an ad hoc solution which you or I will be condemned to maintain, overhaul, or triage somewhere down the line.

Incidental Redundancy

A note: I originally composed this blog post in June 2008, but lost it in my backlog and it never got posted. In the interim, Robert C. "Uncle Bob" Martin has addressed the issue in his Clean Code Tip of the Week #1. He describes the issue and the dilemma far more concisely than I have here, and even provides a strategy for dealing with it. By all means feel free to skip my post here and consider him the "official source" on this topic. =)

I am a big fan of "lazy programming". By that of course I mean avoiding repetition and tedium wherever possible. I mean, that's what most programming is really about, right? You have a problem: some process that is annoying, unwieldy, or even impossible to perform with existing tools. And the solution is to write a program that takes care of all the undesirable parts of that process automatically, based on a minimized set of user input or configuration data.

The realities of programming on an actual computer rather than in an idealized theoretical environment rarely allow this ascending staircase of productivity to be climbed to the top step. But it is nevertheless a goal we should always strive for.

Hence Jeff Atwood's recent post about eliminating redundancies.

Anything that removes redundancy from our code should be aggressively pursued
An honorable goal, to be sure. But no rule is without exception, right?

I humbly submit to you the concept of "incidental redundancy". Incidental redundancy is a repetition of code syntax or semantics that tempts the programmer to refactor, but if carried out the refactoring could damage the elegance and discoverability of the program.

The difference between incidental redundancy and regular redundancy in code is that the redundancy does not arise because of any substantive, or at least relevant, similarity between the two problems in question. Here are two ways I can think of off the top of my head for this to happen:

  1. The solutions you have come up with to this point for each situation just happen to have taken a similar form. Given a different creative whim, or even just an alternative refactoring, the commonality may never have arisen.
  2. The problems are, in fact, similar at some level. But the level at which they are similar is far above or below the level where you are working, and so not truly relevant or helpful to the immediate problems you are trying to solve.
The first situation should be acceptable enough to anyone who has spent a decent amount of time programming. Sooner or later, you will spend some precious time carefully crafting a solution to a problem only to later discover that a year and a half previous, you had solved the same exact problem, in a completely different and maybe even better way.

The second situation may sound a little incredible. But allow me to point out an example from the world of physics. Please bear with me as this is an area of personal interest, and it really is the best example that comes to mind.

There are four forces in the known universe which govern all interactions of matter and energy, at least one of which I'm sure you've heard of: the electromagnetic, weak nuclear, strong nuclear, and gravitational forces. It is known now that the first two of those apparently very different forces are in fact two different aspects of the same phenomenon (the electroweak force), which only show up as different when ambient temperature is comparatively low. Most physicists are pretty sure that the third force is yet another aspect of that same phenomenon that splits off only at even higher temperatures. And it is suspected that gravity can be unified with the rest at temperatures higher still.

The point of all this is that there are four different phenomena which in fact bear undeniable similarities in certain respects, and these similarities continue to drive scientists to create a generalized theory that can explain it all. But no one in his right mind would try to design, for example, a complex electrical circuit based entirely on the generalized theory of the electroweak force.

The analogy to programming is that, were we to try to shift up to that higher level and formulate an abstraction to remove the redundancy, the effect on the problem at hand would be to make the solution unwieldy, opaque, verbose, or afflicted with any number of other undesirable code smells. All at the cost of removing a little redundancy.

What we have in both physics and in our programming dilemma, is noise. We are straining our eyes in the dark to find patterns in the problem space, and our mind tells us we see shapes. For the moment they appear clear and inevitable, but in the fullness of time they will prove to have been illusions. But making things worse, the shadow you think you see may really exist, it's just not what you thought it was. That's the insidious nature of this noise: what you see may in fact be truth in a grander context. But in our immediate situation, it is irrelevant and problematic.

This concept is admittedly inspired heavily, though indirectly, by Raganwald's concept of "incidental complexity", where a solution takes a cumbersome form because of the act of projecting it upon the surface of a particular programming language, not unlike the way the picture from a digital projector becomes deformed if you point it at the corner of a room.

The real and serious implication of this is that, to put it in Raganwald's terms, if you refactor an incidental redundancy, the message your solution ends up sending to other programmers, and to yourself in the future, ceases to be useful in understanding the problem that is being solved. It starts sending a signal that there is a real and important correlation between the two things that you've just bound up in one generalization. When in fact, it's just chance. And so when people start to build on top of that generalization with those not quite correct assumptions, unnecessary complexities can quite easily creep in. And of course that inevitably impacts maintenance and further development.

Noise is ever-present in the world of programming, as in other creative and engineering disciplines. But it doesn't just come from the intrusive environment of our programming language or our tools as Raganwald pointed out. It can come from our own past experience, and even come from the problem itself.

So be wary. Don't submit unwittingly to the siren song of one more redundancy elimination. Think critically before you click that "refactor" button. Because eliminating an incidental redundancy is like buying those "as seen on tv" doodads to make your life "easier" in some way that was somehow never an issue before. You think it's going to streamline some portion of your life, like roasting garlic or cooking pastries. But in your quest to squeeze a few more drops of efficiency out of the tiny percentage of your time that you spend in these activities, you end up out some cash, and the proud owner of a toaster oven that won't cook anything flat, or yet another muffin pan, and a bunch of common cooking equipment that you probably already own.

So remember, redundancy should always be eliminated, except when it shouldn't. And when it shouldn't is when it is just noise bubbling up from your own mind, or from the problem space itself.

Decoupling Domain Model from Persistence

For the last several months, I've been working on a Windows desktop application. This application has a number of pretty common aspects: file manipulation, local data repository, GUI, user settings, collection/dataset manipulation (think drag-and-drop listview type stuff), communication with peripheral devices, and communication with remote services (such as a web service). I think just about every desktop application has a good cross-section of these aspects, and maybe a few others that are slipping my mind at the moment. As a result, there are patterns of application architecture that will arise, and which the development efforts of well-designed, robust applications will have in common. I'm not talking about the typical "design patterns" here à la Gang of Four (GoF), but rather "application architecture patterns". Big patterns. A set of 4 or 5 patterns that, when meshed together, encompass the big, meaty chunks accounting for 95% or better of your application.

I know this is true. I know these patterns exist. But as this is my first effort in developing a desktop app of this complexity, I don't really know what the patterns are. One that GoF did account for was MVC/MVP, and sure I know those... But honestly, up until recently (and maybe even still) my knowledge of that pattern was very hazy and academic. Actually implementing it was something I'd never had to do before. It took a lot of blood, sweat, and tears--lots of trial and error--to get to a point where I really feel like I'm starting to grasp the "right" way to put together an MVP structure.

So now I've moved on to persistence. I have my domain model. But I need to be able to persist the objects in my domain model to and from disk (not a database!). I've been struggling to find ways to add in this functionality without tainting the domain model with persistence logic. Granted, this is a valid way of doing things, as is evidenced by Martin Fowler's ActiveRecord pattern. But it really doesn't make for ease of unit testing. Your business logic and persistence logic get all mixed up in the same classes and you can't easily isolate one from the other for testing purposes.

Then I saw Fowler's Repository pattern. I thought, okay, maybe that will help. I can create an interface that I can implement which will take care of all the logic for disk access to all the right places, based on what objects or collections are being retrieved. And I can just mock the interface for purposes of testing the domain model and business logic. I have a simple three-level hierarchy of domain interfaces, so it would be fairly simple to implement all the querying and CRUD I need for each domain class on the repository interface. But then I thought about how we plan to have multiple implementations of these interfaces in the future, with different permutations of under-the-covers data to be persisted, and thought maybe it would be best to have the repository be metadata-driven, rather than hard-coding the query logic for the different domain classes. This of course naturally leads to a record-based design, and that would mean I'd want data mappers to translate from my domain classes to repository records. And of course, the repository class should be strictly independent of the persistence mechanism itself (in this case the disk) so that I can unit test the repository logic without worrying about maintaining a test repository just for that purpose.
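
For reference, the repository interface I had in mind at that first step was along these lines. This is a hypothetical sketch, before the metadata-driven detour, with made-up domain names:

using System;
using System.Collections.Generic;

public interface IRepository<T> where T : class
{
    T GetById(Guid id);
    IEnumerable<T> GetAll();
    void Add(T item);
    void Remove(T item);
    void SaveChanges();   // flushes to whatever backing store the implementation uses
}

public class Project   // a stand-in domain class
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The domain and business logic only ever see the interface, so tests can mock it.
public class ProjectService
{
    private readonly IRepository<Project> _projects;

    public ProjectService(IRepository<Project> projects) { _projects = projects; }

    public void Rename(Guid id, string newName)
    {
        var project = _projects.GetById(id);
        project.Name = newName;
        _projects.SaveChanges();
    }
}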

And suddenly I realized that I had mentally re-created ADO .NET. DataSets and DataTables are the repository mechanism, (with typed datasets and the ADO designer even supporting generating data mappers for you). DataAdapters are the persistence service.

So where this leaves me is with the question of whether I should just design my on-disk storage schema, build myself an app-specific data adapter mechanism, and let ADO .NET do all the work. If I really wanted to minimize the amount of mapping and repository-access code I have to write, I might even be able to use the Entity Framework to do my dirty work for me. Admittedly I have no idea how much work it would take to implement my own data adapter. The interfaces seem straightforward enough, but I have a nagging feeling that's really just the tip of the iceberg.

After all my radio silence here, I don't know if anyone is still listening... But if anyone has any thoughts on the wisdom of this approach, I'm listening intently.

Don't Give Up Assembly Privacy For Sake of Unit Testing

Just a little PSA to other .NET unit testing newbs out there like me. This info is available various places on the internet, but you'll be lucky to find it without just the right search terms. So hopefully adding another blog post to the mix will make it easier to stumble on.


Unit testing frameworks need to instantiate your types, in order to run unit tests. What this means is that they need to be able to see them. The easiest way to do this is to make your classes public and/or place the tests right inside your project.


Placing the tests right in your project means that you'll very likely have to distribute the unit test framework assemblies along with your product. This might give more information to potential hackers than you would like. And making your classes public brings with it the often undesirable side-effect of opening up essentially all the types in your assembly to be used by anyone who knows where the assembly file is, in essentially any way they like.


But you don't have to make these concessions. You can move the tests out of your assembly, to keep them out of the deployment package, and still keep your classes internal (though not private). The .NET framework allows an assembly to declare "friend" assemblies that are allowed to see its internal classes. (Yes, very similar to the old C++ friend keyword). This is accomplished by adding an assembly attribute called InternalsVisibleTo to your AssemblyInfo.cs file.


If your unit test project does not have a strong name, it's as simple as referencing its assembly name:


[assembly: InternalsVisibleTo("MyCoolApp.UnitTests")]

However, I strongly recommend giving your unit test assembly a strong name. A strong name is a name involving a public-private key pair, used by the .NET framework along with hashing and digital signatures to prevent other people from creating assemblies that can masquerade as your own. Furthermore, if you give your app itself a strong name (which you should if you plan to distribute it), any libraries it references will need strong names, including the ones it just allows to see its internals.


So, if you decide to give your unit test project a strong name, you'll need the public key (not just the token) as well:


[assembly: InternalsVisibleTo("MyCoolApp.UnitTests, PublicKey={Replace this, including curly braces, with the public key}")]



(If you need to learn about strong names, and/or how to extract the public key from your assembly, this is a good place to start: http://msdn.microsoft.com/en-us/library/wd40t7ad.aspx.)


Once you've done this, you should be able to compile and run your unit tests from a separate project or even solution, and still keep the classes you're testing from being "public".


This is all well and good, but if you're working with mocks at all, you probably have another problem on your hands. The most popular .NET mock frameworks (e.g. RhinoMocks, Moq, and NMock) use the Castle Project's DynamicProxy library to create proxies for your types on the fly at runtime. Unfortunately, this means that the Castle DynamicProxy library ALSO needs to be able to reference your internal types. So you might end up with an error message like this:


'DynamicProxyGenAssembly2, Version=0.0.0.0, Culture=neutral, PublicKeyToken=a621a9e7e5c32e69' is attempting to implement an inaccessible interface.

Complicating this fact is that the Castle DynamicProxy library places the proxies it generates into a temporary assembly, which you can't just run the strong name tool against, because the temporary assembly doesn't exist as a stand-alone file. Fortunately, there are programmatic ways of extracting this information, and the work has been done for us. The public key for this assembly, at the time of this writing, has been made available, here and here. You might find some code at those links that could help you extract the public key from any future releases of Castle as well.


The important information is basically that, as of today, you need to add this to your AssemblyInfo.cs file, without line breaks:


[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2, PublicKey=002400000480000094000000060200000024000052534131
0004000001000100c547cac37abd99c8db225ef2f6c8a360
2f3b3606cc9891605d02baa56104f4cfc0734aa39b93bf78
52f7d9266654753cc297e7d2edfe0bac1cdcf9f717241550
e0a7b191195b7667bb4f64bcb8e2121380fd1d9d46ad2d92
d2d15605093924cceaf74c4861eff62abf69b9291ed0a340
e113be11e6a7d3113e92484cf7045cc7"
)]

Caution: The name and public key of this temporary assembly was different in earlier versions, and could change again in later versions, but at least now you know a little more about what to look for, should it change.


So remember: You don't have to open wide your assembly to just anyone who wants to reference your types, just for sake of unit testing. It takes a bit of work, but you don't need to compromise.

Surviving WinForms Databinding

I've rarely had the freedom in my career to implement a Windows application using MVC-style separation of concerns. Generally I am told "just get it working". Now, if I already knew how to tier out an application, this wouldn't be a problem. But since I don't, and it would take me a good deal of time to figure out a satisfactory way of doing it, I haven't been able to justify spending the time, on the company dime.


But recently, I've been fortunate enough to work on a new software product without a hard ship-date, and having other obligations on my plate. This has given me the freedom to not spend every minute producing functionality. So I've been experimenting with implementing MVC with Windows Forms.


There are essentially two ways to get this done. You can:

  1. Write a bunch of manual event code to trigger view changes from model changes, and vice versa.
  2. Use databinding and save yourself the explicit event code.

Databinding (option 2) works great if all your datasources are DataTables, or DataViews, or other such framework classes that are designed from the ground up to work with WinForms databinding. But should you have the misfortune of not dealing with any record-based data, you'll find you have a much tougher road to walk.


If you, like me, choose option 2, and you happen to be working on anything more complicated than a hello world application, and you are truly committed to doing MVC both correctly and with as little unnecessary code as possible, then you will undoubtedly spend, like I have, a lot of time banging your head against a brick wall, trying to figure out why your databinding isn't doing what it is supposed to. This is a terrible shame, because if databinding in WinForms worked properly and was easy to use, it would be a spectacular tool for saving time and shrinking your codebase.


Truth is, you can still save time and code. But not as much as you might think when first introduced to the promise of databinding, and only if you can find decent information on the obstacles you'll encounter. Compounding those hurdles is the fact that what information can be found online about them is scattered to the four winds, and no one bit references the rest. So, I've decided to blog the headaches I encounter, and my resolutions, as I find them. This should increase the searchability of the issues at least a bit, by tying together the separate references with my blog as a link between them, and also by containing the different bits of information within one website. Or it would, if I had readers...


My results so far have produced 5 rules, to help you preserve your sanity while using WinForms databinding.


Rule 1: Use the Binding.Format and Binding.Parse events.


This first rule isn't actually too hard to find information on. Format and Parse essentially let you bind to a datasource property that doesn't have the same data type as the property on the control. So you can bind a Currency object to a TextBox.Text property, for example.


The Format event will let you convert from your datasource property type to your control property type, and Parse will let you convert from your control property type to your datasource property type. The MSDN examples at the links above are pretty good. If you use them as a template, you won't go wrong. But if you start to switch things up, beware Rule 2...
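
Here is a bare-bones example of the mechanics, binding a hypothetical decimal Amount property on an "invoice" datasource to a TextBox named "amountTextBox". Note that the handlers are attached before the binding is added to the control; Rule 2 explains why:

using System.Globalization;
using System.Windows.Forms;

var binding = new Binding("Text", invoice, "Amount");   // invoice is a hypothetical datasource

// Datasource -> control: decimal to a formatted currency string.
binding.Format += (sender, e) => { e.Value = ((decimal)e.Value).ToString("C"); };

// Control -> datasource: parse the string back into a decimal.
binding.Parse += (sender, e) => { e.Value = decimal.Parse((string)e.Value, NumberStyles.Currency); };

// Only now is the binding handed to the control (see Rule 2).
amountTextBox.DataBindings.Add(binding);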


Rule 2: If you use the Format or Parse events, DO NOT add the Binding to the control until after you have registered the event handlers.


I honestly don't know what the deal is with this one. I just know that if you attach your event handlers to your Binding object after you've already passed it to the Control.DataBindings.Add function, they won't get called. I don't know why this should be, unless the control only gets a copy of your actual Binding object, not a reference to it.


Unfortunately, I have lost the references I had to the forum posts that talked about this. There were several, and now I can find none of them. I know I saw them, though, and I saw the symptoms of the other ordering, so as for me, I'm going to make sure to follow this rule.


Rule 3: Use INotifyPropertyChanged and/or INotifyPropertyChanging.


I ran across this information in a post on Rick Strahl's blog, during a desperate scramble to find out why my datasource ceased to be updated by user actions on the controls after the initial binding occurred. The INotifyPropertyChanged interface is intended to be implemented by a datasource class that has properties that will be bound to. It provides a public event called PropertyChanged, which is registered with the databinding mechanism when you add the Binding to your Control. Your class then raises this event in the desired property setters, after the new property value has been set. Make sure to pass "this" as the sender, and the name of the property in the event arguments object. Notice that the property name is provided as a String, which means that there is reflection involved. This will become relevant in Rule 4. Also note that there is an INotifyPropertyChanging interface, which exposes an event that is meant to be raised immediately before you apply the datasource change. This is generally less useful for databinding, but I include it here to save some poor soul the type of frustration I have recently endured.
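
For reference, a minimal implementation of the interface on a hypothetical datasource class looks like this:

using System.ComponentModel;

public class Patient : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            // Raise the event after the value has changed, passing "this" and the property name.
            OnPropertyChanged("Name");
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}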


Rule 4: If you implement INotifyPropertyChanged, don't include any explicit property change events ending with "Changed".


As I mentioned, the databinding mechanism uses reflection. And in so doing, it manages to outsmart itself. There is a very good chance you're going to run into a situation for which these databinding mechanisms aren't useful, and you'll have to implement your own explicit property change events on your datasource class. And of course, you're going to name these events in the style of "NameChanged", "AddressChanged", "HairColorChanged", etc. However, the binding mechanism thinks it's smart, and rather than just registering with the INotifyPropertyChanged.PropertyChanged event, it will also register with any public event whose name ends with "Changed". And if you didn't happen to make your event follow the standard framework event signature pattern--that is, void delegate(Object sender, EventArgs e)--then you will get errors when the initial binding is attempted, as the mechanism attempts to register its own standard-style delegates with your custom events, and you get a casting error.


I solved this one by following a crazy whim, but I also tried to verify the information online. All I could find was one old post buried in an obscure forum somewhere.


Rule 5: Don't bind to clickable Radio Buttons


I know how great it would be if you could just bind your bunch of radio buttons to an enum property. I really do. You think you're just going to hook up some Format and Parse events to translate back to your enum, and all will be well. It would be so darn convenient, if it actually worked. But WinForms just isn't cut out for this. For 3 full releases now (or is it 3.5 releases?), this has been the case. It's because of the event order, which is not something that MS can go switching up without causing thousands of developers to get really cheesed off.


The problem really comes down to the fact that unlike other controls' data properties, the Checked property of a radio button doesn't actually change until focus leaves the radio button. And as with all WinForms controls the focus doesn't actually leave the radio button until after focus is given to another control, and in fact not until after the Click event of the newly focused control has fired. The result of this, as it pertains to radio buttons, is that if you try to bind to them, the bound properties in your datasource will actually lag your radio buttons' visual state by one click. If you have just two radio buttons, the datasource will be exactly opposite the visible state, until you click somewhere else that doesn't trigger an action that references those datasource properties. Which can make this a really infuriating bug to track down. I almost thought I was hallucinating.


Now, in all honesty, it's possible to make it work. But it is the kludgiest kludge that ever kludged. Okay maybe it's not that bad... but it's a messy hack for sure. It takes a lot of work for something that really should already be available. As near as I can tell, the only way to solve this problem without giving up the databinding mechanism is to essentially make your own RadioButton control, with a property change and event order that is actually useful. You can either write one from scratch, or sub-class RadioButton and override all the event logic with custom message handling.
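
For what it's worth, here is a rough, untested sketch of the sub-classing direction: a RadioButton that pushes its bound value to the datasource as soon as it's clicked, rather than waiting for focus to move on. Treat it as a starting point, not a finished control:

    using System;
    using System.Windows.Forms;

    public class ImmediateRadioButton : RadioButton
    {
        protected override void OnClick(EventArgs e)
        {
            base.OnClick(e);

            // Write the new Checked value through to the datasource right away,
            // instead of waiting for validation when focus leaves the control.
            foreach (Binding binding in this.DataBindings)
                binding.WriteValue();
        }
    }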


So....


There's the result of 3 weeks of frustration. I hope it helps someone else out there someday. I'll post addenda to this list if/when I come across any other mind-boggling flaws in the WinForms databinding model. And in the meantime, I welcome any additions or corrections that anyone is willing to contribute in the comments.

What is a Senior Programmer?

My friend and co-worker Nate recently wrote about some hurdles he has encountered in pursuing his professional ambitions as a software developer. I know what he's going through because I entered both my internship and my first post-college job with tragic misconceptions (non-conceptions really, in the case of my internship) as to how my career would develop.

My first mistake was that, as I began my internship, I had no idea how my career would progress, how it should progress, or how active a participant I would be in whatever progression did occur. I knew that I enjoyed twiddling around with code, and that I seemed to have more of a natural talent for it than most of my classmates. And I figured that if I was going to make a career out of anything, it should be something that I did at least passingly well, and that I enjoyed.

As I entered my first post-college job, I had decided I most definitely did not want to become a manager. I enjoyed programming too much to give it up, for one thing. Further, managers had so far stood mostly as obstacles to my involvement in interesting work, and as incriminating figures more prepared to reprimand me for my professional flaws than to empower me to better myself as a programmer. So I wanted nothing to do with that. Instead I set a near-term goal of becoming a "senior developer", at which point I would re-evaluate my career trajectory and adjust course if necessary.

An important question in gauging the advancement of your career as a programmer is to ask what exactly it means to be a senior programmer. I have come to see this title as being tied to the professional respect that one has accumulated in a programming career. A senior programmer is not someone who has served a certain amount of time "in the trenches". Nor is it even someone with a broad experience base.

No, I think there is something a bit more intangible, that identifies someone deserving of the "senior developer" title. Something less measurable. Something that is probably sensed, but not necessarily explicable by those with less experience. But something that would be conspicuous by its absence.

As I see it, what distinguishes someone deserving of the "senior developer" title is a sense of stability. A senior developer is someone who can stand in the middle of the chaos that arises in a project, and exert a calming influence on the people and efforts swirling around him. A senior developer is an anchor to which a project can be tied to keep it from drifting into dangerous waters. He is a sounding board against which claims will ring true or false and goals will ring possible or impossible. He is the steady hand, not necessarily on the rudder of your project's ship, but wherever that hand is needed most. And he is a strong voice that reliably calls out your bearing relative to your destination.

Of course, these metaphors sound a bit grandiose. But the general picture I think is accurate. A senior developer is someone that you put on a project to ensure there's some measure of certainty in the mix. It doesn't mean your project is guaranteed to succeed. But it should mean that you can sleep a little easier knowing that where you think the project is, is where it really is. And that if it's not where it needs to be, that there is someone involved who has a decent idea of how to get it there.

Naturally, these things do come with time and experience. So what I said earlier isn't completely true: a senior developer does tend to have tenure and a broad experience base. But those are necessary conditions, not sufficient ones. Not everyone with 20 years of experience on 5 platforms and 15 languages qualifies. And not everyone with 5 years of experience on 1 platform and 2 languages fails to qualify. Rather, if you show that you can learn from experiences both positive and negative, port technical and non-technical knowledge from one domain to another, and educate, inspire, or empower colleagues and junior developers... then you are showing yourself to have what it takes.

If you're like me, and Nate, you don't feel that you're there yet, but you hope one day to proudly contribute this kind of value. Don't lose hope. Every new experience, technical and non-technical, is a growth opportunity. But it is important to broaden your horizons in both of those respects. If you hope to educate, inspire, and empower others, you must first learn to do so for yourself. And if there's one thing I've learned, it's that you can't do that if you feel stagnant. In that case, your first responsibility to yourself is to educate your boss about the professional value you could offer with a little broader exposure. And if that doesn't work, address the issue yourself by dedicating a bit of time outside work. If you can find them, working together with some like-minded friends or co-workers can be very encouraging, like what we have done with our "book club". Remember, nothing changes if things just stay the same.

LINQ: Not just for queries anymore

Take a look at this LINQ raytracer, discovered via Scott Hanselman's latest source code exploration post.

For those of you who don't know what the heck LINQ is, it was created as a language extension for .NET to allow querying of datasets using an in-place SQL-inspired notation. This was a huge step up from maintaining separate bodies of stored queries, or hard-coding and dynamically assembling queries as strings in your source code.
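
If you've never seen it, a toy query looks something like this. It's just an in-memory example to show the syntax, and has nothing to do with the raytracer:

    using System;
    using System.Linq;

    class LinqDemo
    {
        static void Main()
        {
            int[] numbers = { 5, 1, 9, 4, 7 };

            // SQL-inspired query syntax, compiled and type-checked right in your C#.
            var smallOnes = from n in numbers
                            where n < 5
                            orderby n
                            select n;

            foreach (int n in smallOnes)
                Console.WriteLine(n);   // prints 1, then 4
        }
    }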

Lately LINQ has been enhanced and expanded upon even further to become a more broadly usable miniature functional/declarative language within .NET. This is impressively illustrated by the raytracer code. The author admits up front that it's probably not the best way to write a raytracer, which is probably true. But that in no way detracts from its illustration of the power, expressiveness, and flexibility of LINQ.

I love it! It's a great example of taking a deceptively simple language and showing its power by doing things with it that aren't strictly within the purview of its design. It reminds me of some SQL code that I've written for my current employer, to get functionality out of inefficient PL/SQL and into blazing-fast pure SQL. Stuff that it was said couldn't be done in pure SQL. Of course, this is usually said by people who think that SQL is a second-class citizen among languages, not realizing the power of its declarative foundations. This is something that I hope to write about more extensively in the future, either here or in a new SQL blog I'm considering starting with a friend and coworker.

Anyway, I'm tempted to say that there's a bit of a down side in that this LINQ raytracer is composed mostly of let clauses, which feel more imperative than declarative. However, I've long claimed that just such functionality in actual SQL would make it a tremendously more compact and elegant language, so I won't complain too much about that. =)
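
For anyone who hasn't run into it, a let clause just binds an intermediate value partway through a query, which is a big part of what gives the raytracer its imperative feel. A trivial, made-up example:

    using System;
    using System.Linq;

    class LetDemo
    {
        static void Main()
        {
            var results = from word in new[] { "linq", "sql", "raytracer" }
                          let upper = word.ToUpper()   // intermediate value, bound once per element
                          where upper.Length > 3
                          select new { Word = upper, upper.Length };

            foreach (var r in results)
                Console.WriteLine(r.Word + " " + r.Length);   // LINQ 4, RAYTRACER 9
        }
    }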

My gut reaction upon seeing this code is that it feels like a crazy hybrid of LISP and SQL, but it's written in C#. Which all makes my head spin, but in a good way. I love that C# is becoming such a great hybrid of procedural and functional programming paradigms.

Where will your programming job be in 7 years?

There's been a lot of noise recently about the future of programming, as a profession. Okay, let's be honest, people have been talking about the imminent programming crash for a long time. But they didn't know what they were talking about. When I graduated college a few years back, we were warned that many of us would not find jobs, but this fear turned out to be overblown.

But now the noise is coming from some different directions. It's not analysts, or people who've been laid off, or the people who got screwed in the "dot-com bust". Though to be fair, the analysts are still saying it, louder than ever. But it's also now coming from the people who hire programmers. And unfortunately, if you are a programmer, those are the people you care about. So, what's different this time? Why don't companies need programmers anymore?

Well, they do need programmers. But they need programmers who can do more than just program. So odds are good that if your job title is "programmer", or "programming" constitutes 90% or better of your responsibilities on the job, they're looking at you like day-old tuna salad.

From the ComputerWorld article:
"It's not that you don't need technical skills, but there's much more of a need for the business skills, the more rounded skills"
"Crap," you say to yourself. "I hate business stuff." Or maybe you're saying "Crap. Why didn't I go for that business minor or second major back in school?" Easy, cougar. Don't get too worked up yet. Let's look at the context of this statement. We already know who's saying it: "business people". People working in IT in companies throughout the nation. Not software companies. Not consulting companies. Just regular companies. This is crucial information. It means the positions we are talking about are mostly going to be be "business programming" jobs.

Now we have a few more questions. Where are the jobs going? Why did I put quotes around "business programming"? And why are these jobs going away for real this time?

First answer: they're going to go to people with Associate degrees, people who are self-taught, and consultants. Some will go to high-quality on-shore resources, but an awful lot will go to people who speak Hindi or Urdu (or Chinese, or Korean). Wages and education requirements will be cut for in-house employees, and other jobs are going to consultants, offshore in most cases.

Next... The reason I put quotes around "business programming" is to distinguish these jobs from other types of programming positions. If you are a "business programmer", you're part of what is currently a pretty huge job market. You're part of a population that does a lot of work for a lot of companies, but is often looked at as an unfortunate necessity by other people at those companies. "Business programming" is the kind of programming that, in the past, a company that sells shoes would hire people full-time and in-house to do. It's order processing, financial reporting, CRM, network management, database scripting, and so on and so forth. And for a long time, management types have had an uneasy feeling that they were getting kind of a raw deal on these guys. And I hate to break it to any business programmers out there.... but they were right.

According to the Occupational Outlook Handbook entry for computer programmers, there are very specific and very real reasons that programming jobs are being phased out.
"Sophisticated computer software now has the capability to write basic code, eliminating the need for many programmers to do this routine work. The consolidation and centralization of systems and applications, developments in packaged software, advances in programming languages and tools, and the growing ability of users to design, write, and implement more of their own programs mean that more of the programming functions can be transferred from programmers to other types of information workers, such as computer software engineers.

Another factor limiting growth in employment is the outsourcing of these jobs to other countries. Computer programmers can perform their job function from anywhere in the world and can digitally transmit their programs to any location via e-mail. Programmers are at a much higher risk of having their jobs outsourced abroad than are workers involved in more complex and sophisticated information technology functions, such as software engineering, because computer programming has become an international language, requiring little localized or specialized knowledge. Additionally, the work of computer programmers can be routinized, once knowledge of a particular programming language is mastered."
I would add to this list that "business programming" is almost inherently redundant. Every company out there that employs in-house developers is reinventing the wheel. 99% of the problems their programmers are solving have been solved by thousands of other programmers at other companies around the country. When I look at it that way, it feels like such a tremendous waste of money and time. These programmers could be working on real problems like true AI, building Skynet, or bringing about the rise of the machines and their subsequent domination over the human race.

So essentially, the big reason is that there is finally a way for a company to separate its technical needs from its business needs. Packaged software has finally come to a point where it solves the general problems, while still providing the minimum amount of flexibility necessary to handle a company's critical peculiarities. When a company has a need that isn't fulfilled by some packaged solution out there, contracting resources have become plentiful and cheap enough to fill that need. The company can move the business knowledge into more generic manager roles that are more useful to the company. (Roles with better pay, and job titles such as "software engineer", "system analyst", and "project manager".) And the technical knowledge can be moved mostly out of the company into an "unpluggable" resource that they only need to pay as long as the work is actually being performed.

So, what's a programmer to do?

Well, first of all, stop being just a programmer. Don't be a code monkey. Yes, the under-appreciated coder-geeks of the world have "owned" that pejorative term. But in the eyes of corporate America, the words "code monkey" are always preceded by the words "just a". Code monkeys abound. They are replaceable. If you don't evolve (pun intended) you're going to end up obsolete when your company finds an offshore contractor that doesn't suck. (Yes, they do exist!)

One way you can evolve is by taking the advice of all the articles and reports I've linked to in this post. You can take courses, get certifications, etc., and become a "software engineer" or a "system analyst" or a "project manager". The numbers show there are plenty of companies out there that will be willing to pay you to do this as long as you don't suck at it. (And some that will even if you do suck. But I strongly advise against sucking.)

Many programmers go this route at some point. And there's no shame in it, if you can let go of coding every day (or at all) as part of your job and not hate yourself for it. I think I might go that route one day, but I'm not ready for it yet. For one thing I don't feel my experience base is either broad or deep enough to be as effective as I would want to be. But also, there are just too many cool languages and libraries out there. Too many programs that haven't been written yet. Too many problems that haven't been solved.

So what is there for me, and those like me? Those who don't want to give up programming as the core of our job, but don't want to end up teaching programming to high school kids by day while we sate our code-lust at night in thankless open source efforts? (No not all open source is thankless. But there are a lot of projects that go nowhere and just end up abandoned.)

Two answers:
  1. Consulting. Not everyone is willing to send the work around the world, dealing with language barriers, time-zone difficulties, and security uncertainties, to get it done. There's a definite place in the market for on-shore, face-to-face consulting. This spot is getting tighter though, so you had better make sure you're a top-shelf product.
  2. Software companies. The companies that sell the software that's putting code monkeys out of jobs are still raking it in. And they're actually solving new problems, building new technologies, etc. All the exciting stuff I dream of working on as I churn out yet another database script for a client.
These are your new targets. The sooner you make the jump the better. Not only will you be trading up, as far as job satisfaction (and probably pay), but you'll also be contributing your own small part to the further expansion of software technology. Both by actually working on it, and by condemning the tremendously redundant field of "business programming" to the death it has deserved for so long. You'll probably still have to learn that business and management stuff, but you can also probably avoid it taking over your job.