Don't Stop at DRY

I've been thinking a lot about the DRY principle lately. You know the one. DRY: Don't Repeat Yourself. It's a principle first popularized in the book "The Pragmatic Programmer". Unlike much of the rest of the book's content, DRY appears to have penetrated fairly deeply into the collective consciousness of the greater programming community. In a certain respect, this is a huge success on the part of Andy Hunt and David Thomas. To have disseminated so completely such a fundamental rule of programming is surely something to be proud of.

Note what I just said. DRY is "a fundamental rule of programming." I suspect very few people with appreciable experience would reasonably and honestly dispute this claim. But a rule is a funny thing, and a fundamental one doubly so. Rules are a first pass at quality. They establish a minimum level of ensured quality by being a reasonable course of action in the vast majority of cases, at the expense of being a sub-optimal course of action in many cases, and even a plainly bad option in some.

I am by no means saying that DRY is a bad thing. I'm not even saying not to do it. But I am saying that applying DRY is a beginner's first step toward good design. It's only the beginning of the road, not the end.

Eliminating repetition and redundancy is a very natural and obvious application of programming. It's especially hard to deny for the many people who slide into the profession by way of automating tedious, repetitious manual labor. It just makes sense. So when you tell a new programmer "Don't Repeat Yourself", they embrace it whole-heartedly. Here is something that a) they believe in, b) they are good at, and c) automatically achieves a better design. You'd better believe they are going to pick up that banner and wave it high and proud.

This is a good thing. If you can get your team to buy into DRY, you know you'll never have to worry about updating the button ordering on that dialog box only to find that it hasn't changed in 3 scenarios because they're using a near-identical copy in another namespace. You know that you'll never have to deal with the fallout of updating the logic that wraps up entered data for storage to the database and finding inconsistent records a week later because an alternate form had repeated the exact same logic.

What you might run into, however, is:
  1. A method for rendering currencies as user-friendly text which is also used to format values for XML serialization, the goal being to keep all text serialization centralized, regardless of whether it's meant for display or storage.
  2. Views, controllers, configuration, and services all using the same data objects, regardless of the dramatically different projections necessary for every operation, all in the interest of avoiding redundant structures.
  3. Views, controllers, and persistence layers all depending directly on a single tax calculation class. The goal here is simply to centralize the business logic, but the dependency actively works against establishing a proper layering in the application.
These are all very sub-optimal, or even bad, design decisions. They are all examples that I have seen with my own eyes of decisions made in the name of DRY. But DRY itself is not the cause. The problem is that the people responsible for these decisions have unknowingly pitted DRY against other quality design rules and principles.

DRY focuses on behavior. That is to say, algorithms. "Algorithms", along with "data", are the fundamental building blocks of applications. They are the substance our work is made of. But trying to build complex software while thinking only or primarily at this elementary level is like trying to map a forest one tree at a time.

At some point, the programmer must graduate from the rule of DRY to the nuance of responsibility. The Single Responsibility Principle (SRP) is the cardinal member of the SOLID family. Its premier position is not simply a coincidence of the acronym. It's also the next most important rule to layer onto your understanding of good design.

Responsibility is about more than function. It's also about context. It's about form and purpose. Identifying and isolating responsibilities allows you to take your common DRY'ed functionality, and cast it through various filters and projections to make it naturally and frictionlessly useful in the different places where it is needed. In a very strict sense, you've repeated some functionality, but you've also specialized it and eliminated an impedance mismatch.

Properly establishing and delimiting responsibilities will allow you to:
  1. Take that currency rendering logic and recognize that readability and storage are dramatically different contexts with different needs (see the sketch after this list).
  2. See that the problem solved by a data structure is rarely as simple as just bundling data together. It also includes exposing that data in a form convenient to usage, which may vary dramatically from site to site.
  3. Recognize that calculation is worth keeping DRY, but so is the responsibility to trigger such calculation. It can be both triggered and performed in the business layer, and the result projected along with contextual related business data to wherever it needs to be.
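To make the first item concrete, here is a minimal C# sketch of that separation. All names here are hypothetical, invented purely for illustration: the DRY'ed rounding rule stays in one shared place, while display and storage each get their own formatting responsibility.

    using System;
    using System.Globalization;

    // Shared, DRY'ed core: the one place where currency rounding rules live.
    public static class CurrencyMath
    {
        public static decimal Round(decimal amount) =>
            Math.Round(amount, 2, MidpointRounding.AwayFromZero);
    }

    // Display responsibility: culture-aware, user-friendly text.
    public class CurrencyDisplayFormatter
    {
        public string Format(decimal amount, CultureInfo culture) =>
            CurrencyMath.Round(amount).ToString("C", culture);   // e.g. "$1,234.50"
    }

    // Storage responsibility: culture-invariant text for XML serialization.
    public class CurrencyXmlSerializer
    {
        public string Serialize(decimal amount) =>
            CurrencyMath.Round(amount).ToString("F2", CultureInfo.InvariantCulture);  // always "1234.50"
    }

Strictly speaking, the formatting is now "repeated", but each copy answers to exactly one context, and changing the display format can no longer corrupt stored data.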
By layering responsibility resolution on top of the utilitarianism of DRY, you take a step back from the trees and can begin to manage a slightly wider view of the problem and solution spaces. This is the crucial lesson that all beginning programmers must learn, once they've mastered the art of DRY. Once again, DRY is fundamental and indispensable. It's one step along the path to wisdom and, in general, should not be skipped. But it's not an end in itself. So don't stop when you get DRY. Start looking for the next step. And consider responsibility resolution as a strong candidate.

Retrospective on a Week of Test-First Development

Any programmer who is patient enough to listen has heard me evangelizing the virtues of Test-Driven Design. That is, designing your application, your classes, your interfaces, for testability. Designing for testability unsurprisingly yields code which can very easily have tests hung onto it. But going beyond that, it drives your code to a better overall design. Put simply, this is because testing places the very same demands on your code as does incremental change.

You likely already have an opinion on whether that is correct or not. In which case, I'm either preaching to the choir, or to a brick wall. I'll let you decide which echo chamber you'd rather be in, but if you don't mind hanging out in the pro-testability room for a while, then read on.

Last week I began a new job. I joined a software development lab that follows an agile process, and places an emphasis on testability and continuous improvement. The lead architect on our development team has encouraged everyone to develop ideally in a test-first manner, but I'm not sure how many have taken him up on that challenge. I've always wondered how well it actually works in practice, and honestly, I've always been a bit skeptical of the benefits. So I decided this big change of environment was the perfect opportunity to give it a shot.

After a week of test-first development, here are the most significant observations:
  1. Progress feels slower.
  2. My classes have turned out smaller, and there are more of them.
  3. My interfaces and public class surfaces are much simpler and more straightforward.
  4. My tests have turned out shorter and simpler, and there are more of them.
  5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.
  6. I've never been so confident before that everything works as it is supposed to.

Let's break these out and look at them in detail.

1. Progress feels slower.

This is the thing I worried most about. In the past, writing tests has always been an exercise in patience. Writing a test after writing the subject means sitting there trying to think of all the ways that what you just wrote could break, and then writing tests for all of them. Each test includes varying amounts of setup and dependency mocking. And mocking can be tough, even when your classes are designed with isolation in mind.

The reality this week is that yes, from day to day, hour to hour, I am writing less application code. But I am re-writing code less. I am fixing code less. I am redesigning code less. While I'm writing less code, it feels like each line that I do write is more impactful and more resilient. This leads very well into...

2. & 3. My classes have turned out smaller, and there are more of them.
My interfaces and public class surfaces are much simpler and more straightforward.

The next-biggest worry I had was that in service of testability, my classes would become anemic or insipid. I thought there was a chance that my classes would end up so puny and of so little presence and substance that it would actually become an impediment to understandability and evolution.

This seems reasonable, right? Spread your functionality too thin and it might just evaporate like a puddle in dry heat. Sprinkle your functionality across too many classes and it will become impossible to find the functionality you want.

In fact the classes didn't lose their presence. Rather, I would say that their identities came into sharp and unmistakable focus. The clarity and simplicity of their public members and interfaces made it virtually impossible to misuse them, or to mistake whether their innards do what they claim to. This enhances the value and impact of the code that consumes them. Furthermore, it makes test coverage remarkably achievable, which is something I always struggled with when working test-after. On that note...

4. My tests have turned out shorter and simpler, and there are more of them.

The simple surface areas and limited responsibilities of each class significantly impacted the nature of the tests that I am writing, compared to my test-after work. Whereas I used to spend many-fold more time "arranging" than "acting" and "asserting", the proportion of effort this step takes has dropped dramatically. Setting up and injecting mocks is still a non-trivial part of the job. But now this tends to require a lot less fiddling with arguments and callbacks. Of course an extra benefit of this is that the tests are more readable, which means their intent is more readily apparent. And that is a crucial aspect of effective testing.
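For a sense of the shape these tests take, here is a minimal arrange-act-assert sketch using NUnit and Moq. The service and repository types are hypothetical, invented for illustration:

    using System.Collections.Generic;
    using System.Linq;
    using Moq;
    using NUnit.Framework;

    // Hypothetical subject: a small service with one injected dependency.
    public interface IInvoiceRepository
    {
        IEnumerable<decimal> GetLineItems(int invoiceId);
    }

    public class InvoiceService
    {
        private readonly IInvoiceRepository _repository;
        public InvoiceService(IInvoiceRepository repository) => _repository = repository;
        public decimal Total(int invoiceId) => _repository.GetLineItems(invoiceId).Sum();
    }

    [TestFixture]
    public class InvoiceServiceTests
    {
        [Test]
        public void Total_SumsTheLineItemsForTheInvoice()
        {
            // Arrange: one small mock, no elaborate callback wiring.
            var repository = new Mock<IInvoiceRepository>();
            repository.Setup(r => r.GetLineItems(42)).Returns(new[] { 10m, 15m });
            var service = new InvoiceService(repository.Object);

            // Act
            var total = service.Total(42);

            // Assert
            Assert.AreEqual(25m, total);
        }
    }

When the class under test has one narrow responsibility, the "arrange" step stays this small, and the test's intent is readable at a glance.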

5. I spent a measurable amount of time debugging my tests, but a negligible amount of time debugging the subject classes.

There's not too much to say here. It's pretty straightforward. The total amount of time I spent in the debugger and doing manual testing was greatly reduced. Most of my debugging was of the arrangement portions of tests. And most of that ended up being due to my own confusion about bits of the mocking API.

6. I've never been so confident before that everything works as it is supposed to.

This cannot be overstated. I've always been fairly confident in my ability to solve problems. But I've always had terrible anxiety when it came to backing up correctness in the face of bugs. I tend to be a big-picture thinker when it comes to development. I outline general structure, but before ironing out all the details of a given portion of the code, I'll move on to the interesting work of outlining other general structure.

Test-first development doesn't let me get away with putting off the details until there's nothing "fun" left. If I'm allowed to do that then by the time I come back to them I've usually forgotten what the details need to be. This has historically been a pretty big source of bugs for me. Far from the only source, but a significant one. Test-driven design keeps my whims in check, by ensuring that the details are right before moving on.

An Unexpected Development

The upshot of all this is that despite the fact that some of the things I feared ended up being partially true, the net impact was actually the opposite of what I was afraid it would be. In my first week of test-first development, my code has made a shift toward simpler, more modular, more replaceable, and more provably correct code. And I see no reason why these results shouldn't be repeatable, with some diligence and a bit of forethought in applying the philosophy to what might seem an incompatible problem space.

The most significant observation I made is that working like this feels different from the other work processes I've followed. It feels more deliberate, more pragmatic. It feels more like craft and less like hacking. It feels more like engineering. Software development will always have a strong art component. Most applied sciences do, whether people like to admit it or not. But this is the first time I've really felt like what I was doing went beyond just art plus experience plus discipline. This week, I feel like I moved closer toward that golden ideal of Software Engineering.

Strategy vs. Tactics in Coding Standards

I'm starting a new job on March 7. As I wrapped up my time with the company where I've spent the past 3 years, I spent some time thinking about the decisions and responsibilities that I was given, and that I took on, while there. One of the reasons I was hired was to bring to the development team my more formal education and professional experience as a software developer. And one of the responsibilities I was allowed was to lead regular discussions with the other developers. The goal of these discussions was to help our team to become more professional in the way we developed software. Of course it was only a matter of time before someone on the team brought up coding standards.

I've had enough experience with coding standards to have developed some skepticism about the way they are typically applied. It sounds good, in theory: bring some consistency, order, and predictability to the source code produced by the group. But in my experience, the practice rarely delivers on the promise. It's easy to start a fight on this topic, of course. It tends to be a "religious" topic, in that people have strong and entrenched opinions and are rarely shaken from them regardless of argument. This is unfortunate, because I think there is a way to make standards work. But it requires thinking about them in a different kind of way.

The challenge of establishing standards, and the reason standards tend to be so fractious, is that "what works" must necessarily vary from case to case. But very rarely does an organization that establishes coding standards allow for much variance in response to new problems or process friction. To figure out why this is, and how we can deliver on some of the promise of standards without resorting to brainwashing, let's take a closer look at the types of coding standards we see in the wild.

In my experience, coding standards tend to fall into one of two broad categories. There are strategic coding standards, which are general and outcome-oriented. And there are tactical coding standards, which are specific and mechanics-oriented.

I'll address the latter first, as these are the kinds of standards that tend to start religious wars. Here are some examples of tactical coding standards:
  • All method headers must include a description, pre-conditions, side-effects, argument descriptions, and return value description.
  • Use C#'s "var" keyword only for variables of anonymous data types.
  • Place each method argument on its own line.
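Applied together, the last two rules above dictate code with a very particular mechanical texture. Here's a hypothetical C# fragment, purely to illustrate that flavor (all the names are invented):

    public static class PricingExample
    {
        // Tactical rule: each method argument on its own line.
        public static decimal ComputeTotal(
            int count,
            decimal unitPrice,
            decimal taxRate)
        {
            return count * unitPrice * (1 + taxRate);
        }

        public static void Demo()
        {
            // Tactical rule: "var" only for anonymous types.
            var lineItem = new { Name = "Widget", Count = 3 };  // allowed: anonymous type
            decimal unitPrice = 4.99m;                          // named type: spelled out

            decimal total = ComputeTotal(
                lineItem.Count,
                unitPrice,
                0.07m);
        }
    }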

At this point you're probably at least nodding your head in recognition. Maybe some of you are nodding in satisfied approval, while others are holding back an outburst of disgust. When most people hear the words "coding standards", the tactical ones are the ones they think of. And these often evoke strong emotional responses. Their most common adherents tend to be managers or project leads, and often regular developers who've been around the block a few times.

The reason people want tactical coding standards is simple. They bring a modicum of consistency to a wild and woolly world. You often can't trust that the guy writing code in the next cube can code his way out of a paper bag. So when you have to take over his code after he's gone, it's nice to know that, at the very least, he followed the standards. If all else fails, there is one thing that will provide sure footing as you plumb the dangerous depths of his code.

I understand this, and I sympathize. But at the same time, I chafe under these standards. They feel like micromanagement to me. It makes me feel like my coworkers don't trust me to do the thing that I was hired to do. There are organizations where this can't be taken for granted. But my own personal response to those environments is to leave them. I want to work with people I can trust at this level, and that can trust me the same.

It's not all bad for tactical standards, and we'll circle back in a bit. But for now let's look at some examples of strategic coding standards.
  • Write informative class headers.
  • Endeavor to keep methods to less than 25 lines in length.
  • Use whitespace and indenting to identify related code.

Strategic coding standards are broad. They do not tell the developer exactly what to write. Rather they set an expectation of the kind of code that should be written, and leave it to the developer as to how exactly that kind of code should be created. In my experience, these are the kinds of standards that are okay to mandate and enforce from a management perspective. Evoke the general feel of the code you want your team to produce, and then let them use their skill, experience, and professionalism to decide how to make it happen.

You can probably tell that in general I feel a lot better about this type of standard than the other. For one, they stay out of your way on a minute-to-minute basis. They're also far easier to get people to agree upon. Rather than being a cynical defensive measure against bad code, they empower the developer while fostering good coding habits.

Now having said all that, I do think there is a place for tactical standards. One big reason I named these categories as I did is that the role of establishing each type of standard can map to the organization similarly to how the roles map in the military, from which these terms are taken. Strategic standards can be mandated by leadership without stifling the developers, while the particular tactics can be determined by the people "on the ground".

There are many cases where tactical coding standards are beneficial. But it almost always serves the organization better to let the developers establish and enforce them. Take a team with multiple people all sharing equally in ownership of a codebase, each working from day to day on code written by the others. Imagine further that the code is sensitive in some way that can't abide regression bugs or the like. Maybe there is also a huge amount of code, or a large number of coders.

In these types of situations, it's crucial that the developers be able to pick up a strange piece of code and get a quick picture of what's going on. And to be able to modify the code without introducing a foreign-looking island of code to trip up the next guy. To do this, the developers have to be comfortable with the low-level style aspects of the code, both to read and to write. The best hope a team has of developing this comfort is to, as a self-directed group, come to agreement on common-denominator style standards over time and through shared experience.

There is a balance to be struck here. We want to ensure quality output from our teams, but we don't want to turn our developers into code-generators that don't take responsibility for their own work product. I think this balance is best struck by the proper application of each type of coding standard, in the proper context. If leaders can establish strategic standards that express high-level goals for clean code and understandable designs, then they can empower their teams to find solutions to the lower-granularity challenges such as understanding each other, and reducing the friction involved in cooperation on a shared codebase. This is strategy and tactics in practice, and at its best.

Glints of Profession

Recently I caught a few tweets from Patrick Welsh where he briefly mentioned pair-programming with his young daughter while she was home sick from school. In particular the second one referenced being "green", as in all tests passing. It struck me on a couple of different levels, even beyond the "awww" factor.

(Let me not understate the "awww" factor. Stories like this set anticipation brewing in me. I can't wait to be able to share my passion for programming with my own son or daughter. Hopefully she listens better than I did when my dad attempted to enact the rituals of fatherhood with me at his workbench, under the hood of the car, and in the kitchen. Sorry Dad! If I could do it again, I'd pay attention this time around.)

This brief snapshot of their interaction conjured visions of the apprentice at the elbow of the master craftsman, absorbing subtle waves of well-worn trade wisdom. We're talking about more than just "measure twice, cut once" type of stuff here. The equivalent of that might be "fail fast". Very good, very fundamental advice, to be sure. But testing and TDD are at least one level above that.

At the level of TDD, we're dealing with explicable, transferable techniques. Not just muscle memory, but practices. Practices that, when followed, guide the hands of the initiate on the path toward skill, consistency, and quality. To me, this is a huge indicator of mature and professional behavior on the teacher's part. And such things are themselves markers of the continuing evolution of programming from simply a job to a profession.

Compare these types of hallmarks to the ad hoc, piecemeal, on-demand way that most programmers still learn the tricks of the trade. Most of us are tossed into the job without mentorship to speak of. We're lucky if we've been taught theory in school, but this is incomplete knowledge at best. (This is like the difference between knowing how a car works and being able to build one.) And a relevant degree is not the most common path into programming anyway. Most programmers learn through the work alone, maybe supplemented by information gleaned from Stack Overflow questions or blog posts where people have documented their own unique travails. Lessons come through individual trial and error, with the occasional vicarious horror story as a warning.

This story is still common, though happier alternatives are gaining ground. Many of us make quite a show of seeking to make programming a true profession. There's a Craftsmanship Manifesto. There are apprenticeships. There are actual, literal journeymen, as well as prolific, sought-after masters. But these are all bright trappings, and it is quite easy to lose track of substance.

Profession is not found in titles and roles, or documents of good intent. Profession is not even found in the practices themselves, for TDD does not a craftsman make. Profession is found in the fact that there ARE practices which are known to improve the quality and consistency of work. Profession is found in the wisdom that is passed on by mentors, in concert with the mechanical skills of the job. Profession is found in the common platform of discipline, integrity, and mutual education upon which quality work, worthy of pride, is built.

I haven't seen what happens in Obtiva's apprenticeships, or in Uncle Bob's classes, or when Corey Haines pairs with his hosts. I trust that they do bear forward, at least in part, the evolution of our chosen trade. But in the words and excitement of Patrick Welsh's little apprentice, I can see for myself the rough diamond of profession glinting in the light.

Community Service

As I see it, programming questions tend to take one of two forms:
  1. How do I do X?
  2. How do I do X in a clean, maintainable, and elegant way?
Matt Gemmell has a great response to questions of the first form. I have nothing to add to his excellent post on that topic.

But I think it's crucially important to make a distinction between the two forms. The second form is a far more difficult question, asked far less commonly, and to which good answers are even rarer. When questions of this form are asked in forums, people are usually given a link to a page describing a pattern, or if they are really "lucky" a page with a sample implementation of that pattern. To be fair, patterns usually are the answer to these types of questions. But what we find written up on the web, or even in books, is most commonly pathetically oversimplified, without context, and often even without guidance on what support patterns are necessary to obtain the benefit or when alternatives may be preferable.

Essentially, most developers are left with no choice but to apply Matt's answer to form 1, to questions of form 2, in a much less information-rich environment. I contend that, while it may be one of those proverbial activities that "build character", it is ultimately more likely to be harmful to their immediate productivity--possibly even to their continued professional growth.

What we end up with is a pandemic of developers trying to hack out pattern implementations, being discouraged by the fact that the pattern seems to have no accounting for any possible deviations or complications in the form of the problem. Worse, developers are often dismayed to find that the one pattern they were told to use is merely the tip of a huge iceberg of support patterns without which the first may actually be more problematic than an ad hoc solution. Most often the developer in this position will end up determining that their deadlines will never allow them to go through the painful trial and error process on every one of these patterns, and accordingly drop back to the ad hoc solution.

It's time that we acknowledge that software development, whether you consider it an engineering discipline, an art, or a craft, has a history--albeit a short one. Things have been tried. Some work, some don't. There do exist "solved problems" in our problem space. To say that every developer should try and fail at all these efforts on their own ignores and devalues the collective experience of our community. Worse, it stunts the growth of the software development industry as a whole.

Yes, one can learn these things by trial and error. Yes, the understanding gained in this way is deeper and stronger than that gained initially by being tutored on how to apply the solution. And yes, there's a certain pride that comes with getting things done in this way. But this is not scalable. Putting each person forcibly through this crucible is not a sustainable strategy for creating experienced, productive, wise programmers. Every hour spent grappling in isolation with a question of "why won't this pattern do what I've been told it will" is an hour that could be spent creating functionality, or heaven forbid solving a new problem.

That is why those of us who have managed to obtain understanding of these "solved problems" must be willing to shoulder the responsibility of mentoring the less experienced. Being willing to explain, willing to discuss specifics, mutations, deviations, exceptions. Willing to discuss process, mindset, methodology. These are the things that make the distinction between a programmer and a software developer, between a software developer and a software engineer.

The internet may very well not be the appropriate place to seek or provide this type of mentoring. I suspect it's not. And unfortunately, there are too many development teams out there buried in companies whose core competencies are not software, consisting solely of these discouraged developers, lacking an experienced anchor, or even a compass. There are online communities that attempt to address the problem at least partially. ALT.NET is one such, for .NET technologies. But there really is no substitute for direct, personal mentorship.

So I would encourage young developers out there to seek out more experienced developers, and ask them the tough, form 2 questions. And I would even more strongly encourage experienced developers to keep a watchful eye for those in need of such guidance. Maybe even consider proactively forming a local group and seeking out recruits. Be willing, able, and happy to provide this guidance, because it benefits all of us. Every developer you aid is one less developer creating an ad hoc solution which you or I will be condemned to maintain, overhaul, or triage somewhere down the line.

Identity Crisis in Computer Science Education

A while back, the seeds of a post started rolling around in my head, inspired by my lack of satisfaction with the preparation my education provided me for a programming career. But I didn't quite know what I thought. Then not long ago, Joel Spolsky presented a radical repackaging of Computer Science degrees, supposedly geared toward producing better programmers, and my thoughts started to gel.

I knew what my personal complaint was, and I knew that it was connected to a larger problem with the state of computer science education in general. But I didn't know exactly what the larger problem was, let alone have any ideas worth sharing on what might be done about it.

Now that seemingly the entire remainder of the blogosphere has weighed in on this and related topics, I think I am finally ready to throw my two cents in. I'm going to barrage you with links in the following paragraphs, to ensure that I credit everyone I read who assisted me in coming to my final conclusions. Feel free not to click through, but be aware that they represent a rich cross-section of an important discussion.

It took me several weeks to realize that what we have going on is essentially a three-way tug of war from people in different regions of the vast sphere of software development, who need very different things out of their workers, and hence out of their education. Below I will give a run-down of some of the claims made, expressing the different forces pulling on CS graduates these days. You'll quickly see it's no wonder that the schools are so confused....

The Artisan Programmer

Joel laments the uselessness of theory courses in computer science curricula, saying "I remember the exact moment I vowed never to go to graduate school" and then proceeding to recall a terrible experience he had with a Dynamic Logic class. Jeff Atwood insists that real-world development environments need to be in place and mandatory, including, but not limited to, source control, bug tracking, deployment, and user feedback. Then, as mentioned above, Joel proposes offering BFAs in software development, to make darn well sure that none of the academic (in the pejorative sense) theory stuff gets mixed in unnecessarily. The upshot of most of these points is that computer science / programming degrees should spend as much time as possible teaching people what they need to know to go into a career in software development, writing business software or software products.

The Computer Scientist

Brian Hurt comes in from another direction entirely, and in the process makes some very good points about the true purpose of higher education. He lays the blame for the flood of single-language programmers entering the workforce at the feet of schools that do exactly what Joel and Jeff are asking for. He makes some great points. And while he sounds more than a little reminiscent of the classic Joel post about the perils of Java schools, his argument is much more thorough than just blaming the tools. Chris Cummer joins this party, wishing that his theory foundations had been firmer, and making an excellent analogy to the difference between someone who studies a language and someone who studies language. We also have the respectable Raganwald who, although he has admirably pointed out good points from all sides, doesn't shy from offering his opinion that programmers ignore CS fundamentals at the risk of their own career advancement.

The Software Engineer

But that's not all. Several people have weighed in from yet another direction. Robert Dewar and Edmond Schonberg wrote one of the posts that started off this blog firestorm. Along with alluding to a sentiment similar to Hurt's and Cummer's, they heavily criticize the state of software engineering education for focusing too much on how to use specific, limited tools when there are more sophisticated ones available. They claim that understanding these will allow software engineers to easily pick up whatever other tools come along and direct them to appropriate purposes. Ravi Mohan stops short of calling the education satisfactory, instead sensibly pointing out that an engineer who doesn't use standard engineering tools, such as system modeling, isn't really an engineer. Mohan comes on a little too strong for me in the comments, but the posts themselves (of which there are also a precursor and a successor, and soon should be one more) are worth reading. Like the others, he makes valid points.

Resolving the Crisis

Mark Guzdial is one of the few people that really puts his finger near the pressure point. Though maybe not near enough to feel the pulse beneath it when he did so. At risk of quoting a little too heavily....
    Rarely, and certainly not until the upper division courses, do we emphasize creativity and novel problem-solving techniques. That meshes with good engineering practice. That does not necessarily mesh with good science practice.

    Computer scientists do not need to write good, clean code. Science is about critical and creative thinking. Have you ever read the actual source code for great programs like Sketchpad, or Eliza, or Smalltalk, or APL 360? The code that I have seen produced by computational scientists and engineers tends to be short, without comments, and is hard to read. In general, code that is about great ideas is not typically neat and clean. Instead, the code for the great programs and for solving scientific problems is brilliant. Coders for software engineers need to write factory-quality software. Brilliant code can be factory-quality. It does not have to be though. Those are independent factors.
And there it is.... Different environments require different mindsets/approaches/philosophies. Research requires one mindset/philosophy of work, engineering requires another, and in-the-trenches programming requires yet a third.

When a person suffers from a personality fracture, the resolution is often to merge the personalities by validating each as part of a whole. Fortunately, since we are not dealing with a person, we have the freedom to go another direction: make the split real and permanent.

Associate of Science in Computer Programming

To fill Joel and Jeff's need, the student who wants to work in the craft of software development / computer programming, who wants to be an artisan, needs an appropriate degree. It needs to provide exposure to the general shape of the programming platforms and tools that they will deal with for the rest of their career. Lose the lambda calculus, the compiler-writing projects, etc. These things are not necessary for getting stuff done in the trenches. But students do need to be exposed to the fundamental generalities that pervade programming. And they need to be prepared to learn at an accelerated rate while in the field. That just comes with the territory. Focus on core programming skills like program analysis, debugging, and test practices. Introduce industry-standard tools (emphasizing generality and platform-independence) such as source control, bug tracking, etc.

I think a two-year associate degree is perfect for the code-monkeys and business programmers that just love to dig in and mess around with code, and don't want to concern themselves with the overarching concerns. Especially with these jobs increasingly being pushed offshore, computer science grads are rapidly being priced out of the market. An associate degree is cheap enough to be worth the investment for a lower-paying programming job. And it doesn't carry the overhead of any unnecessary theoretical content that they may not be interested in learning. It should be noted though that this type of programming job enters the realm of the trades, with all the associated benefits and drawbacks.

If you're looking for a more well-rounded individual capable of moving up out of this position into a lead position, or even management, then a 4-year bachelor of science (or Joel's BFA, though I tend to think not) may be a viable option as well.

Bachelor of Science in Computer Science

There's not much to say about this degree, because if you look at all the schools that are famous for their CS degrees, this is pretty much what you'll find. Lighter on general studies, heavy on theory, heavy on math. Light on tools because the students will be expected to find (or make) tools that work for them. Light on specific language education because students will be expected to adapt to whatever language is necessary for their problem domain.

This is a degree that will produce people primed for going on to masters and doctorates. They will end up in research, "disruptive" startups, or working on new languages, OSes, etc. This degree is designed for people who want to work at the edge of things. Who want to solve new problems and push the boundaries. They are people upon whom will be placed the burden of pushing the state of knowledge in CS into the next era.

Bachelor of Science in Software Engineering

I am hesitant to propose this degree, because I am not certain that the practice of Software Engineering has evolved to the point where we have 4 years' worth of general knowledge that's worth teaching, and that won't be out of style by the time the students graduate.

It seems that some people, when they talk about Software Engineering, are talking about architecture and design, and others are talking about process, resource allocation, estimation, etc. To be frank, I don't think the former qualifies as a true engineering discipline. At least not yet. I don't know how much the modeling of programs that Ravi Mohan talks about is going on out there in the industry. I suspect that it happens more in the process of really big projects, and maybe in digital security. The second type of engineering people think of, however, I think is very similar to what we see in manufacturing, with industrial and process engineers. These are people who get an intimate knowledge of the domain, and then figure out ways to get everything to run smoother, more efficiently, and producing higher quality.

I can definitely see some education possibilities here, though I am not sure myself how to fill out the whole degree. It should at least encompass a good portion of the Associate of Science in Computer Programming, because they need to understand the intricacies involved. I can also see this degree teaching some of the more established measurement and estimation techniques found among the industry's established and experienced software project managers. Generally more management-related topics such as resource allocation, planning, product design, feature negotiation, etc. might fit in well here. Different project processes, testing/QA models, and of course an ability to keep up to date with technologies and platforms, are all par for the course as it's all critical for making decisions in the industry.

Conclusion

I really, honestly believe that Computer Science education as a whole needs a makeover. It needs more structure, more integrity in the vision of what each degree means, across schools. When someone has one of these degrees, you need to be able to reliably assume they have learned certain things, regardless of what school they went to. Many of the degrees currently on offer don't satisfactorily prepare their students for any one of the possible careers discussed above. I'm not saying their hand needs to be held from enrollment right on through to their first job. That's not the purpose of college. The purpose of college is to provide a cohesive education, directed to some relatively well-defined goal of capability and knowledge. Today this is tragically non-uniform at best, and absent altogether at worst.

So I see plenty of room in software development education for a clarification of purpose, and a readjustment of goals and curricula. A few different tracks, each geared toward a distinct section of the sphere with different goals and different responsibilities. And if we resolve to use existing terminology with some respect for the historical meaning of the words, we can re-use our existing nomenclature. But there can be no more of this muddy slurry of computer science, craft of programming, and software engineering all overlapping in claims of purpose, treading on each others' territory without care. Everyone can have what they are asking for. They just need to accept that no one can claim the "one true way".

I am not a Computer Scientist

Prepare yourselves. I have an embarrassing and melodramatic admission to make.

My career is a sham.

Although my degree and education are in a field that is typically referred to as "computer science", I am not actually a "scientist". Nor do I "practice science". But I won't be satisfied to go down alone for this charade. I'll go on record saying that I am convinced that, for the vast majority of people who were educated in or work in the field of "computer science", the ubiquitous presence of the word "science" in proximity to our work or education is a tragic misnomer.

I don't know how long this has been on my mind, but I know almost precisely when I became conscious of it. It was a couple months ago. I was newly exposed to devlicio.us, and perusing the blogs hosted there, when I came across a post by Bill McCafferty about a lack of respect and discipline in our field.

Early in the post, Bill reveals an injustice he encountered during his education.

...When I started my undergrad in this subject, I recall reading articles debating whether it should be called a science at all. Gladly, I do not see this argument thrown around much anymore.

I think I am probably not going to make the exact argument here that he disagreed with back then. The things we all studied in school are definitely part of a nebulous field of study that may rightfully be called "computer science". As Bill points out,

"From Knuth's classic work in The Art of Computer Programming to the wide-spread use of pure mathematics in describing algorithmic approaches, computer science has the proper foundations to join other respected sciences such as physics, concrete mathematics, and engineering. Like other sciences, computer science demands of its participants a high level of respect and pursuit of knowledge."

I have no argument with any of this. He's right on. Donald Knuth (who is indeed my homeboy, in the sense that we share a hometown) studied and practiced computer science (which, if you know anything about Knuth, you'll know is an almost tragic understatement). And thousands of people who have followed in Knuth's footsteps can lay the same claim. However, that's not me. Nor is it more than 99% of the programmers in the field today.

Computer science suffers the same type of misnomer as many other disciplines that have adopted the word "science" into their names: political science, social science, animal science, food science, etc. And it seems that most such fields, if not all, have done so because the very validity of the field of study itself was subject to severe criticism at some point in the past. So we take on the term "science" to get it through people's heads that there is a root in formal practices and honest intellectual exploration. But to then blanket every profession that derives from this root with the term "science" is a misappropriation of the term.

I can think of a number of examples.... The programmer working for the bank to develop their website, or for the manufacturing company to manage their transaction processing system is no more necessarily a "computer scientist" than the election commentator is necessarily a "political scientist". When someone gets an electrical engineering degree and goes to design circuits for a living we do not say he "works in electrical science". We say he is an electrical engineer. When someone gets a technical degree in mechanics and then goes to support or produce custom machinery, we do not say he "works in mechanical science". We say he is a mechanic, or a technician. Why, then, when someone gets an education that amounts to a "programming degree", and then goes to work doing programming, do we say that he "works in computer science"? It's a uselessly vague and largely inappropriate label.

By contrast, if you have a doctorate in computer science, I'm prepared to say you deserve the label. If you write essays, papers, articles, books, etc. for use by the general practitioner, you probably deserve the label. If you do research, or work on the unexplored fringes of the field--if you are exploring the substance and nature of the information or practices that the rest of us simply consume and implement, then in all likelihood you deserve the label.

Please, please understand that I am by no means belittling the value of our work, or the nobility of our profession. Often we simply consume the information produced by true "computer scientists". But we transform it from theory into practice. We resolve the concrete instances of the abstract problems that the true scientists formally define. We take the pure thought-stuff produced by scientists and turn it into tangible benefit.

This is not trivial. It is not easy. It deserves respect, discipline, study, and care. But it is not "practicing science".

I should say in closing that I am not as upset about all this as the tone of this post might imply. I don't even really have a big problem with the use of the word "science" to refer to a field of study or work that largely does not include research-type activities. I don't like it, but I accept that it happens. But "computer science" has a problem that other similar "sciences" don't. When someone says they work in "political science" or "food science", you can make a guess as to the type of work they do, and it's hard to be significantly incorrect. Though maybe it's my outsider's naïveté that allows me to make this claim. At any rate, "computer science" as a field is so broad and vague that I don't think the term communicates a useful amount of information. But you wouldn't know that by talking to programmers, who seem only too ready to attempt to take hold of the term and own it for themselves.

I think this is one small facet of a larger and far more critical issue in our field in general, which I fully intend to write more about very soon. But until then, let's take the small step of starting to consider what we really mean when we use the popular but often ambiguous terminology of our profession.

I work in the field of computer science. This tells you nothing except that I am unlikely to be a prime specimen of the wondrous human physiology. But.... I am a programmer. I have a degree in Computer Engineering. I am interested in programming theory. I work as a software development consultant. And now, you know something about what I know and what I do.

Now what about you?

Update: I forgot to note that in McCafferty's blog entry, he himself makes use of "trade" terminology to categorize different levels of reading materials. Which belies the uncertain nature of programming as a profession. We certainly wouldn't say that a carpenter works in the "wood sciences", would we?