Wait For It... (Or Don't)

If you wait long enough, someone will build that app or service you so desperately wish existed. They will gladly sell you a license or membership, and you can stop wishing.

Why wait? Solve your own problem. Sell the license or membership to those other folks, instead. At minimum you get to stop wishing sooner, since you get to beta test your solution. Probably you also learn a little bit. And at best, you make some money. You can use it to fund your next solution. Or maybe just have a bit more personal security. It's all upside.

The Role of a Database

I have worked on a number of what I would call database-centric applications. A kind way to put this is that these are applications that take full advantage of the features of their chosen DBMS to implement as much as possible of the system. The goal of this is to maintain a monolithic system as long as possible because one system means fewer boundaries. This means fewer degrees of freedom and fewer opportunities for breakage. It also means less integration and change process overhead.

The premise can be attractive, when you have a homogeneous and overstretched workforce. We need something to happen every 5 minutes, but we don't have anyone who knows how to write a console app or a Windows service or a Linux daemon. Fortunately SQL Server has a job system for running periodic T-SQL tasks, so no need to learn anything or hire anyone.

As someone who is comfortable writing console apps and Windows services, I am decidedly uncomfortable writing periodic tasks as SQL jobs unless the goal is solely to do database-y things like project normalized data into a reporting schema or stage data for an ETL pickup. I said as much to a coworker not too long ago and he challenged me on why I wanted nothing but data in the database, when the tools were capable of so much more. Schema can model relationships, constraints, and invariants. Tasks can be scheduled. Stored procedures can encapsulate implementation details. Triggers can be used for eventing.

I had to think a bit in order to put it in terms that weren't dogmatic. I had to consider why I dislike programming inside the database. Here are some of my reasons:

  • The languages and libraries are spartan and hard to work with.
  • While not inevitable, databases are commonly viewed as monolithic from the inside. Subsystem design and responsibility segregation have huge upsides, but are exceedingly rare inside the bubble.
  • Testing and deployment tooling is clunky and slow where it exists at all.
  • External controls are generally unavailable. Everything is an admin interaction, or occasionally an exceedingly clunky script in a proprietary language.

I think you'd be hard-pressed to find someone comfortable both in and out of the database who disagrees with most of these, or who couldn't probably add a bullet of their own to the list.

What this all adds up to is that database-centric processes and applications are harder to build, harder to change, and harder to migrate than the equivalents built outside the database. This is especially true if the change being considered is to migrate logic out of the database engine. Whereas if you started off building outside the database, the biggest hurdle to moving something into the database, say for performance reasons, is whether the crucial features exist at all inside the database bubble.

The universal constant of programming is that change happens. If the tools and the skills are available, the tools that better support change should be preferred. And that is why I prefer my databases not to multi-task. I want them to store data. I want them to store it in ways that are efficient to update and query as needed. And then I want them to get out of the way.

Blog Stubbing and Refactoring

Sometimes I have an idea for a blog post, but it's not a good time to stub it out. Maybe I don't have my notebook, or I just don't have time to do more than jot a placeholder. Sometimes in those cases I will open a new blog entry and put in a title that summarizes the idea. Later I will come back and start writing under that title, trying to retrace my thoughts and produce the post that was waiting to be born.

More often than not in these cases I ramble for several paragraphs and never quite manage to recapture the sentiment that inspired the title. Eventually I close the editor in frustration, feeling like I failed my muse. I feel like the title was the true name of my post, and I have failed to honor it.

This is probably not a healthy or effective way to write.

I don't follow this cycle with code. Often I will run up against a blob of functionality that I know I don't want inline. I have a rough idea of what the responsibility of that blob is and what parameters are required, so I stub things out. I make an interface, give it a function with the expected signature, then continue on using it, knowing I will go back to implement later.

But often when I come back around to flesh out the stub it turns out I was wrong about what exactly it would do, or--more importantly--what context would be required to accomplish it. But in these cases, I don't just keep laboring under the ill-informed constraints I set before. Instead, I refactor. I change the name. I add the parameters I need. I update the tests for the call site, and the code there as well.

In programming we name the thing to represent what it does. We don't build the thing to fit the name. In programming it is not possible to accomplish something without the necessary preconditions, so if they aren't present we adjust course.

In programming, we must rework.

So, too, in writing.

Culture Is a Garden

Culture is a garden, and your people are gardeners. Every decision, every meeting, every email, every priority, every exception--each is an act of gardening. Every line of code written, or removed. Every test. Every tool. Every deployment. Water this, yank that, prune here, fertilize there. Some idea gets planted or trod down. Some relationship is nurtured or ignored.

You can't create culture, or design it, or decree it. Even as a leader or manager, you can only tend your plot. Occasionally, you can add or remove a gardener, and this can have a big impact. But in the end the culture you grow depends fully and only on what gets planted and nurtured on a daily basis, and what gets stepped on or neglected.


A thing I have learned over the past couple of years is that it doesn't matter how cool the technical problems are, or how awesome your solutions and processes are, if the business problems and the people problems aren't being addressed. The best tech, under genius orchestration, produces a mere whimper in the dark if the people can't rally behind it or the business can't focus and direct it.

I used to look a bit askance at the axiom that "ideas are cheap, execution is everything" until I realized just how much is encompassed in the term "execution". I always just considered technical execution. As if adding a qualifier to a word ever did anything more than narrow and reduce it. But no, "execution" includes product strategy, marketing, organization, interpersonal dynamics, and much more. There are so many problems in those areas that have both deep and broad impact on the success of the enterprise that the core idea, no matter how rare and genius, pales in comparison.

I used to take the axiom personally, as a repudiation. I can execute. I have the chops. But I'm not an idea man. So where are all these ideas just waiting for me to run off with them? For the axiom to be true I must either be dumb or incompetent. But that completely misses the point. Good ideas aren't necessarily plentiful or easy to produce. But they cost nothing. Whereas execution....

Execution is everything. Not least because execution is everything.

A New Objective

There is a new permanent page here on my website for my current career objective. This started out as a long blurb that used to be crammed into my resume page along with what is now my value proposition, but it deserves its own place. All the parts are much cleaner for it, and it allows the objective to breathe as much as it needs to. The other bits are mostly the same, but the objective has also changed, and I intend to keep it current as time goes on.

I was inspired by a number of things to make this change to a clear, strong, forward-looking statement of intent and aspiration. Recent events, the wisdom my friend @veryfancy links to on Twitter and blogs about, the evolution of my wife's career, a long following of @rands (In Repose), and a few very good books I've read lately. Change can come at any time, and when it does I want to be ready to capitalize on it and ride the wave forward rather than be swept up in the current and dropped off wherever it peters out.

In the long term, technical and business strategy is the problem space I want to get to. And I know that to have the kind of impact I want to have at that level, I need facility in the problem space of people, and teams, and projects. So I want to make sure my next step adds a component in that dimension to my career vector... Hrm. As you can see, I'll always be an engineer at heart. Only, I have realized that I also want to be more than an engineer.

The Right Toolbox

A significant part of maturing as a professional is learning to recognize the right tool for a particular type of job. As a programmer this skill may manifest when choosing between alternative solutions at many different levels of abstraction and granularity.

Some examples include decisions between:

  • a loop versus an iterator block
  • a Windows service versus a scheduled task
  • a message queue versus a web service

The next step of professional maturity requires recognizing that all of the decisions like those above have something in common. They are all custom software solutions. This means that there is another decision which has already been made. This is the decision to use a custom software solution.

Crucial to the process of gaining more trust, responsibility, and impact in your job is developing the skill of recognizing whether the solutions being considered are of the right kind. This usually involves asking a lot of "whys." In the scene below, Alice is a development lead and Bob is the IT director she reports to.

Alice: Why are we splitting this service into two separate services?
Bob: Because it's memory intensive and has been bogging down the rest of the process.
Alice: Can we add RAM to the VM?
Bob: Not without taking it from other VMs that can't spare it.
Alice: What about the physical hardware?
Bob: We have room to expand.
Alice: Can we add RAM to the server, and give it to the VM?
Bob: We can do that once more with this hardware, but if we run out of headroom again, we'll need to procure hardware, and that won't be cheap.... A RAM upgrade will definitely be cheaper than the engineering effort to split the process up. This time. But if we run out of headroom again, the next upgrade will be much more expensive.
Alice: Ok. Then rather than split the service now, we'll do an effort estimate so we can do a real cost comparison next time around.

It can be tough to cultivate the discipline Alice exhibits in this scenario. Alice wants to be a problem solver, and is eager to take on more responsibility rather than waving it away. There are a lot of incentives to assume that important analysis has already been done and it's above your pay grade to worry about it. But for that exact reason, taking care in these kinds of situations can have a big impact and visibility to the broader business.

What this all boils down to is that to take the next step in professional maturity, you need to consider a broader context. Before you sit down to decide which of your tools is the right one for the job, make sure you're even digging in the right toolbox.


Sometimes while coding I have an idea of some small construction I want to try that I'm uncertain will work. I could fairly easily determine whether it will work by writing a test and running it, but it's just so small and simple that even that seems like overkill. Often, I just want to know if something will compile, and what the type system will do with it.

I just want to write a little bit of code, run it immediately, see the result, and move on, possibly forgetting about it. I need a REPL. Depending on your primary programming language, you may be familiar with this already. But if you are a C# developer, especially one without a long history in development, this might be new to you.

The term REPL is an acronym that is pronounced like the word "ripple," but with an "E" instead of an "I". The acronym stands for Read, Evaluate, Print Loop. A REPL is a small app that reads a statement of code, evaluates (or executes) it, and prints the result. It then repeats that process with the next statement, or waits for the user to enter another one.
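The cycle itself is simple enough to sketch in a few lines of JavaScript. This is a toy, of course: a real REPL reads from standard input, but feeding the "reads" from an array makes the loop easy to see.

```javascript
// A toy read-evaluate-print loop. Real REPLs read from stdin;
// here the statements come from an array, purely for illustration.
function repl(statements) {
    var results = [];
    for (var i = 0; i < statements.length; i++) {
        var value = eval(statements[i]); // Evaluate the statement...
        results.push(String(value));     // ...and "print" the result.
    }
    return results;
}

console.log(repl(["1 + 1", "'rip' + 'ple'"])); // [ '2', 'ripple' ]
```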

If you work with JavaScript, you might recognize the developer console from your favorite browser as a JavaScript REPL. With node.js, the application that executes programs is itself a REPL, if you run it without passing a starting point JS file. The same goes for Ruby, Python, and a great many other modern languages.

With C# and VB.NET, Visual Studio offers a special window called the "Immediate Window" which operates as a REPL while debugging a program. It has access to things in the scope of the breakpoint, and you can also define variables and build up complicated expressions and statements on the fly. Unlike the other REPLs I've listed, Visual Studio's Immediate window has historically been constrained in what it will execute. For example, it won't evaluate anything that involves an anonymous delegate, or lambda. Unfortunately for me, it seems that nearly every time I want a REPL, I'm experimenting with lambdas in some way.

Fear not. A while back Glenn Block, while he was working at Microsoft, started an OSS project called ScriptCS to provide "the missing REPL" for C#. It, too, started off heavily constrained. But after many iterations and a great deal of work, today it's a darn useful tool in a couple of crucial ways. Not only is it a full-fledged REPL that has access to the entire C# language and all of the CLR, but it's also an honest-to-goodness scripting system. It will allow you to run C# files as scripts from the command line without pre-building them into assemblies. It has all the convenience of node, ruby, python, or whatever other scripting language you might be familiar with. They accomplished this by using Roslyn, the .NET "compiler as a service" from Microsoft which will also underlie new versions of Visual Studio.

If you're working with C# on a daily basis, I highly recommend you go grab and install ScriptCS. It is distributed via a package manager called Chocolatey right now, and will hopefully eventually be available via OneGet. (Think NuGet for apps.) I use it at least once per week in its REPL capacity, and I anticipate it coming in handy for build and deployment automation as a script runner. It's great tech. And if you use it and like it, make sure you thank Glenn and the other folks who worked hard to build it.


Balancing My Best

I've been a perfectionist for a long time. I've been a parent for far less time. In the overlap, what I have felt like is mostly a pile of inadequacy and failure. I know this isn't really the case. But I definitely feel like I'm not meeting my own expectations for my performance either as a parent or in my profession.

For most of my life, I would hedge my failures by dumping extreme amounts of time and effort into them. If I stayed up all night embellishing my project, hopefully the flaws would be overshadowed by how hard I had worked. Hopefully any criticism of what I got wrong would be balanced against praise for how far above and beyond I had gone. I did my best, which is surely the most anyone could ask. Hopefully the equation would come out with me feeling more satisfaction than failure.

As a normal human adult who has more obligations than time, this drive--this instinct--has not exactly served me well. It skews priorities, both within the effort, and with other obligations. And it's not sustainable. Whether it's my health, or my relationships, or my job performance, something gives, usually sooner than I expect. So that's just another way I'm inadequate. Which drives my perfectionism. The cycle continues.

I have very young daughters, and am starting to see the first inklings of how they handle failure. They are at the stage still where they can become frustrated to the point of tantrum when they aren't able to succeed at something. But this is strictly because at their age they also haven't learned how to give up. Their best is to literally try, and try, and try, and try, until I step in and drag them away screaming. At which point, I can certainly say, "it's okay, you did your best." But will it encourage them to continue to do their best, if it equates to ending heaving and sobbing with the knowledge that they were insufficient to their ambitions?

I find myself considering how I will approach with them the topics of failure and success, effort and perseverance, pride and satisfaction. Surely whatever I tell them, I'd better believe it. And even better, I should model it, because example is strong where advice is weak. So what do I really believe about these things?

There is no learning without failure. I must embrace failure if I want to improve at anything. Failing feels really bad, though, so I must learn how to salvage satisfaction from failure. Not accomplishing what I set out to do doesn't mean I can't be proud of what I did. "Doing my best" does not require destroying myself and my other obligations in sacrifice. Whatever I'm setting myself to do is not all I have to do. I have to be a parent. I have to be a husband. I have to be a son. I have to be a friend, a neighbor, an employee, a leader, a follower.

Doing my best means that I spent my time wisely, and appropriately to the need. I focused on my task and my goal, for the time that I worked at it. Where I lacked certainty, I tried new things. I was reflective of the outcomes. I learned. I used what I had and gave what I could, without stealing from my other obligations.

And maybe most importantly, doing my best doesn't mean that no one can fault me. It just means that I laid a foundation to do better next time.


Very nearly every company I've worked for seems to have a constitutional aversion to "slack" in the work pipeline for development. Sometimes this manifests as a pile of inward-facing operational development that passes hands every few weeks and never really gets done because the current developer is urgently re-allocated to a paying client. In consulting, sometimes it just means a stretch without pay when there's nothing to bill on. Sometimes it results in developers being assigned increasingly low-value, low-clarity, or low-interest busywork.

All these things are an extremely poor usage of available developer resources.

These situations seem to originate in a view of engineering manpower as either a cost center (e.g. in an IT department) or a kind of inventory (e.g. in a contracting/consulting firm). For accounting purposes, fine. But that doesn't mean you have to actually treat them that way. I'm not sure if my view is any more valid, but I tend to think of engineering slack as a surplus. We have the developers' time. It's probably paid for. And even if it's not, it's probably bad for morale not to pay for it. Sure you can burn off the excess and get not much more than waste heat out of it. Or, you can invest it, and make back the cost in dividends.

Nearly any company has systems that can benefit from a little more automation, a little more customization, a little more integration. This is what the vast majority of developers in business are doing. Most of them are just doing it because the work is going to actively mitigate an operational cost or support a revenue stream. Sometimes the direct value added isn't worth the cost of the developer's time. But good engineering often pays dividends in indirect value, via force multipliers or ongoing and compounding efficiency.

Not all work is created equal, but if you look carefully, there's probably some benefit to be had from a little extra development time. And the next bit can compound on that. And the next. And the next. Before you know it, your operations could be humming like finely tuned machinery. Or you have an experimental beta feature that could be your next surprise hit. But you'll never know if you keep burning off your excess instead of investing your surplus.

A Case Study in Toy Projects

I have historically had a lot of trouble finding motivation to work on toy projects solely for the benefit of learning things. I tend to have a much easier time if I can find something to build that will have value after I finish it. I view this as a deficiency in myself, though. So I continue to try to work on toy projects as time allows, because I think it is a good way to learn.

One of my more successful instances of self-teaching happened a few years ago. I was trying to learn about this new thing all the hip JavaScript kids were talking about called "promises". It seemed to have some conceptual clarity around it, but there was disagreement on what an API would look like. Multiple standards had been proposed, and one was slightly more popular than the others. So I thought it might be fun to learn about promises by implementing the standard.

And it was! Thankfully the Promises/A+ spec came with a test suite. It required node.js to run it, so I got to learn a little bit about that as well. I spent a few evenings total on the effort, and it was totally worth it. I came away with as deep an understanding of promises (and by extension most other types of "futures") as it is probably possible to have. This prepared me better to make use of promises in real code than any other method of trial-and-error on-the-fly learning could have. 
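For anyone who hasn't met them, the core idea is small: instead of passing a callback into the producing function, the producer returns an object representing the future value, and the consumer registers callbacks on that object with then(). A minimal sketch, using the standard Promise type that has since become built into JavaScript (not my after.js implementation):

```javascript
// A function that produces its result asynchronously, but returns a
// promise for it rather than accepting a callback parameter.
function delayedDouble(x) {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(x * 2); }, 10);
    });
}

// The consumer registers interest with .then(), and can chain
// further work; each .then() returns a new promise.
delayedDouble(21)
    .then(function (result) { return result + 1; })
    .then(function (result) { console.log(result); }); // 43
```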

Here's the end result on GitHub: after.js. The code is just 173 measly lines--far shorter than I expected it to be. It also hosts far more lines of documentation about promises in general and my API specifically. It has a convenient NPM command for running my API through the spec tests. And most satisfying of all it can now serve as a reference implementation for whoever might care to see one. I think it's a great example of the benefits of a toy project done well.

Interviewee Self-Assessment

Evaluating the technical chops of job candidates is difficult. Especially in a "screener" situation where the whole point is to decide if a deeper interview is a waste of time. I haven't done a lot of it, so I'm still developing my technique. Here are a few things that I think I like to do, so far.

As long as the candidate is in the ballpark of someone we could use, I don't like to cut the interview short. There's always the chance that nerves or social awkwardness are causing an awful lot of what might appear to be ignorance or confusion.

I like to ask the candidate what technologies (languages, platforms, frameworks) they most enjoy and most dislike, and why. This gives me a peek into how they think about their work and what their expectations are of their tools and platforms. I want to see at least one strong, reasoned opinion in either direction. Not having one is an indication that they either lack experience, or are not in the habit of thinking deeply about their work.

Here's the big one: In order to figure out what questions to ask and how to word and weight them, I also like to ask the candidate to evaluate their skills in a few of the technologies that are relevant to the job they are applying for. Even if they have identified their relative skill levels on their resume, I ask them to put themselves on a 5-point scale: 0 is no experience, 1 is beginner, 2 is still beginner, 3 is comfortable, 4 is formidable, and 5 is expert.

At a self-rating of 1, I mostly just want to find out what they've built and what tools they used. Anyone who rates themselves at 2 or 3 is a candidate for expert beginner syndrome. They'll probably grow out of it as they get more experience. I ask questions all over the spectrum to establish what they know and what they don't.

A self-rating of 4 is probably the easiest to interview. A legitimate 4 should have the self-awareness and perspective to see that they know a lot, but also a good conception of where their gaps are. 2s and 3s are more likely to self-label as 4, but they are easy to weed out with a couple of challenging questions. Beyond that, I mostly care about how they answer questions, because this candidate's value is as much in their ability to communicate about tough problems and solutions as it is in coding and design.

A self-rating of 5 is essentially a challenge. I'm not interested in playing a stumping game. But I do care whether the confidence is earned. Someone who is too willing to rate themselves an expert is dangerous both on their own and on a team. A 5 doesn't need to know everything I can think to ask. But I expect an honest "I don't know" or at least an attempt to verbally walk it through. And instead of confusion and misunderstanding, I expect clarifying questions. Communication and self-awareness are crucial here. Confident wrong answers or unqualified speculations are bad news for a self-proclaimed expert.

What Type of Type Is That?

The .NET runtime has two broad categories that types fall into. There are value types and there are reference types. There are a lot of minor differences and implementation details that distinguish these two categories. Only a couple of differences are relevant to the daily experience of most developers.

Reference Types

A reference type is a type whose instances are copied by reference. This means that when you have an instance of one in a variable, and then you assign that to another variable, both variables point to the same object. Apply changes via the first variable, and you'll see the effects in the second.

public class Point {
    public double X { get; set; }
    public double Y { get; set; }
}

// Elsewhere...
Point p1 = new Point { X = 5.5, Y = 4.5 };
Point p2 = p1;
p1.X = 6.5;
Console.WriteLine(p2.X); // Prints "6.5"

This reference copy happens any time you bind the value to a new variable, whether that's a private field on an object, a local variable, a function parameter, or a static field on a class. The runtime keeps track of these references as they float around and doesn't allow the memory holding the actual object to be freed until it is sure that no live reference to it remains.

Value Types

A value type is a type whose instances are copied by value. This means that when you have an instance of one in a variable, and then you assign that to another variable, the second variable gets a brand new object, with a copy of each property's value, which you can change independently of the original.

public struct Point {
    public double X { get; set; }
    public double Y { get; set; }
}

// Elsewhere...
Point p1 = new Point { X = 5.5, Y = 4.5 };
Point p2 = p1;
p1.X = 6.5;
Console.WriteLine(p2.X); // Prints "5.5"
Console.WriteLine(p1.X); // Prints "6.5"

Value types can get tricky. The thing to remember is that this policy goes only one level deep. Each property of a value type is copied according to its own type: value-typed properties are copied by value, and reference-typed properties are copied by reference.

public class Point {
    public double X { get; set; }
    public double Y { get; set; }
}

public struct Line {
    public Point Start { get; set; }
    public Point End { get; set; }
}

// Elsewhere...
Point p1 = new Point { X = 0, Y = 0 };
Point p2 = new Point { X = 3, Y = 3 };
Line l1 = new Line { Start = p1, End = p2 };
Line l2 = l1;
p1.X = 1;

Console.WriteLine(p1.X); // Prints "1"
Console.WriteLine(l1.Start.X); // Prints "1"
Console.WriteLine(l2.Start.X); // Prints "1"

Here we see that the changes we make to the reference type instances are retained across the value types, because it's only the bit of information that points at the reference type that is duplicated, not the object that's pointed to.


The last thing we should talk about is memory. Unfortunately, this bit is complicated despite most often being inconsequential. But it's a question that a prickly interviewer might decide to quiz you on if you make the mistake of claiming to be an expert.

You might guess, based on this difference in copying behavior, that passing around complex value types would be computationally expensive. It is. And potentially memory-consuming as well. Every new binding is a new variable with new copies of its value-typed properties. Instances also tend to be short-lived, though, so you have to work to actually keep the memory filled with value types. Unless you box them.

"Boxing" is what happens when you assign a value type to an object variable. The value type gets wrapped into an object, which does not get copied when you assign it to other variables. This means that you can end up with very long lived value types, with lots of references to them, if you keep them in an object variable. Fortunately, you're not allowed to modify these values without assigning them back to a value typed variable first.

public struct Id {
    public int Value { get; set; }
}

Id id1 = new Id { Value = 5 };
object id2 = id1; // Boxing: id1 is copied into a new object on the heap
object id3 = id2; // Reference copy: id3 points at the same box as id2

Console.WriteLine(object.ReferenceEquals(id1, id2)); // Prints "False" -- id1 is boxed into yet another object for this call
Console.WriteLine(object.ReferenceEquals(id2, id3)); // Prints "True"
((Id)id2).Value = 6; // Compiler error

Folks will often talk about stack and heap when asked about the differences between value types and reference types, because stack allocation is way faster. But value types are only guaranteed to be stored on the stack when they are unboxed local variables that aren't used in certain ways. The decision whether to do so in other cases is often not dictated by the CLR spec, so depending on the platform it might or might not do so in a given situation. In short, it's not worth thinking about unless you are bumping into out of memory errors. And even then, there are almost certainly more permanent wins to be had than by worrying about the whereabouts of your local variables.

Trust Those Who Come After

I have at times in my past been called a very "conservative" developer. I think that title fits in some ways. I don't like to do something unless I have some idea what the impact will be. I don't like to commit to a design until I have some idea whether it is "good" or not, or how it feels to consume the API or work within the framework.

And I used to believe very strongly in designing things such that they were hard to misuse. This was so important to me that I would even compromise ease of proper use if it meant that it would create a barrier of effort in the way of using something in a way that I considered "inappropriate".

I once built a fluent API for defining object graph projections in .NET. While designing the API, I spent a lot of time making sure there was only one way to use it, and you would know very quickly if you were doing something that I didn't plan for. Ideally, it wouldn't compile, but I would settle for it blowing up. I also took great care to ensure that you always had a graceful retrograde option when the framework couldn't do exactly what you needed. But that didn't matter.

Once the framework got into other peoples' hands I realized fairly quickly that all this care had been a tremendous waste of time. The framework was supposed to be a force multiplier for the build teams at my company, but what happened was very different. Because the API had to be used in a very particular way, developers were confused when they couldn't find the right sequence of commands. When what I considered to be the perfect structure didn't occur to them, they assumed their situation wasn't supported. 

I gave my fellow developers a finicky tool that the practice leads told them was fast and easy and that they needed to use. So when it wasn't clear how to do so, they just stopped and raised their hand, rather than doing what a developer is paid to do: solve problems. By trying to protect the other developers from themselves, I had actually taught them to be helpless. And the ones that didn't go that route just totally side-stepped or subverted the tools.

All this came about because I didn't trust the people who would use or maintain my software after I was gone. I thought I needed to make sure that it was hard or impossible to do what I considered to be unwise things. In reality all I did was remove degrees of freedom and discourage learning and problem solving.

We are developers. Our reason for being is to solve problems. Our mode of professional advancement is to solve harder, broader, more impactful problems. If I can't trust other developers at least to learn from painful design decisions, then why are they even in this business, and what business do I have trying to lead them?

How to Build Types From (Almost) Scratch

In JavaScript, we have objects and closures, but we don't have data types. At least, not custom ones. There are the built-in types, and there are objects. But just because the runtime doesn't distinguish one object from the next doesn't mean you can't build your own data types and benefit from them.

All you *really* need to build data types is closures and hashes. And those, JavaScript has. In fact, a JavaScript object *is* just a hash with some extra conveniences added on. This makes the exercise really straightforward in JavaScript.

function createUrlBuilder(domain, queryString) {
    var protocol = "http";
    var self = {
        protocol: protocol,
        domain: domain,
        queryString: queryString,
        render: function () {
            return protocol + '://' + self.domain + '?' + self.queryString;
        },
        setDomain: function (domain) {
            self.domain = domain;
        }
    };

    return self;
}
Every time you call this function, you will get back an object that obeys a certain contract. You can always be sure that what you get has the same properties and the functions all have the same signatures. It even has the ability to encapsulate data in variables that you define locally to the function. It turns out you can even interact with this the way you would any other object, because of the way JavaScript works.
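The encapsulation point is worth seeing on its own. Here's a minimal sketch of the same pattern (the createCounter name is invented for illustration): each call to the factory returns a fresh object whose private state lives in variables local to the factory.

```javascript
function createCounter() {
    // 'count' is private: nothing outside this function can touch it
    // except through the functions on the returned object
    var count = 0;
    return {
        increment: function () { count += 1; return count; },
        current: function () { return count; }
    };
}

var a = createCounter();
var b = createCounter();
a.increment();
a.increment();
b.increment();
console.log(a.current()); // 2
console.log(b.current()); // 1
```

Each returned object closes over its own copy of count, so the two counters can't interfere with each other, and there is no way to reach in and corrupt the state from outside.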

But that's just gravy. You could do this same exact thing in C# with anonymous delegates and a Dictionary<string, object>. It's more verbose, and the syntax for actually making use of it doesn't sync with what the type system and compiler provide. But the result is the same as in JavaScript. You get a constructor that produces a structure that has both state and behavior, both private and public, and whose contract is consistent every time you call the function.

An Object is a Closure

We know what a closure is. It's a function that gets a little bubble of data pinned to it at runtime, which it can then make use of wherever it goes.

If we really want to shorten this explanation down, we might say that a closure is a bit of behavior tied to a bit of data. As it happens, this is very similar to how objects were described to me when I was first learning object-oriented programming. In his book "Object-Oriented Analysis and Design with Applications", Grady Booch says that "an object is an entity that has state, behavior, and identity." Most of what else an object is derives from these attributes. Identity is just a bonus for our purposes, though in JavaScript at least, it also happens to be true of functions.

An object is an entity that has state, [and] behavior...
— Grady Booch

Now let's squint a little at that object. Focus on some things and let others fade into the background. For example, let's imagine an object with just one public member function. And then, imagine it also has no public properties or fields. It does have some private fields, though. And those private fields are initialized by the constructor of the object's type.

So now what do we have here? A member function, with a little bubble of data pinned to its object at runtime via the constructor function, which it can then make use of wherever it goes.

Sounds familiar.
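To make the resemblance concrete, here's a sketch of the same behavior written both ways (the Greeter and makeGreeter names are invented for the example): an object with one public method and a private field set by the constructor, next to a plain closure.

```javascript
// Object form: one public member function, one private field
// initialized by the constructor.
function Greeter(name) {
    var greeting = 'Hello, ' + name; // private, set at construction
    this.greet = function () { return greeting; };
}

// Closure form: the same data pinned to the same behavior,
// without the object ceremony.
function makeGreeter(name) {
    var greeting = 'Hello, ' + name;
    return function () { return greeting; };
}

var obj = new Greeter('Ada');
var fn = makeGreeter('Ada');
console.log(obj.greet()); // Hello, Ada
console.log(fn());        // Hello, Ada
```

From the caller's point of view the only difference is the calling syntax; in both cases a bit of data was pinned to a bit of behavior at construction time.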

Define: Closure

If you're working in a modern programming language, you likely use closures from time to time. It's a fancy-sounding word, but the meaning is simple. A closure is a dynamic function defined in some other function's scope and then either returned or passed off to some other function; more specifically, one that references variables from the scope of the outer function in which it was defined. One place they are very commonly used today is in event handlers.

Here's a trivial example in JavaScript.

// Somewhere in your code
function showModalPrompt(modalOptions) {
  // show a modal with a message, a dropdown, and ok and cancel buttons
}

// Somewhere else in your code
var options = [ { id: 1, text: 'Email' }, { id: 2, text: 'Phone' }];
$('button#contactMethod').click(function () {
  // 'options' comes along in the closure, not as a parameter
  showModalPrompt(options);
});
As of this writing, the three most common uses of closures are probably DOM event handlers and AJAX callbacks in browser JavaScript, and callbacks in node.js.

The mechanism defines a function that will be called by some other piece of code somewhere else. This function can make use of a piece of information that it doesn't create or fetch on its own, and which the caller has no knowledge of. And yet, this information is not passed in as a parameter. It's carried in by the function, in its pocket, ready to be pulled out only at the appropriate time.
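Here's a sketch of that pocket, using an invented logger factory: the prefix rides along inside each returned function, even though no caller of the returned function ever passes it.

```javascript
function makeLogger(prefix) {
    // 'prefix' is never passed to the returned function;
    // it travels with the function in its closure
    return function (message) {
        return '[' + prefix + '] ' + message;
    };
}

var warn = makeLogger('WARN');
var info = makeLogger('INFO');
console.log(warn('disk almost full'));  // [WARN] disk almost full
console.log(info('startup complete')); // [INFO] startup complete
```

Whoever eventually calls warn or info knows nothing about prefixes; each function pulls its own out of its pocket at the moment it's needed.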

So why the funny name? A function that doesn't return anything and doesn't call any other functions encloses its local variables, like a bubble of air. The variables live and die with the function call. But if you define a dynamic function and return that, or hand it off to some other function or object, it forms another bubble. This inner bubble carries with it the bits of information it uses, closing around them as it leaves the safety of the parent function's bubble. This is a closure.

The Excel UI

Anyone who has worked on "internal" software--the stuff you write that your customers never see, but your co-workers use every day--is probably familiar with the idea of "forms over data" or "CRUD apps". The idea is that the quickest path between a schema in an RDBMS and an app to maintain the data in it is to make two screens for each table. One screen has a grid of the data in the records in the table, and the other screen is a form with fields for each column used to edit or create rows in the table.

If the users express a need to do "mass edits", you might find yourself searching for an editable data grid control that will let you edit data right in the list/table display, rather than editing one row at a time in a specialized form. And if you're especially unlucky, your users will ask that fateful question: "Can you just make it work like Excel?"

The answer is that of course you can. You can buy a dev license for some fancy user control for your platform of choice that will attempt to imitate Excel's grid interaction paradigm as closely as possible, while giving you all sorts of knobs, dials, and hooks with which to customize, extend, or otherwise deviate from said paradigm. Or you can just commit to the devil's deal and use VBA to customize and extend actual Excel spreadsheets.

Actually, if you're not in a position to resist the demands, that's probably the least of all evils. I mean, VBA is pretty gross as a development platform. But it also has an extremely low barrier to entry. And you don't have to answer questions about why things don't quite work like Excel, and why you can't make them.

But regardless of which direction you go, if you find yourself giving your users an Excel-like interface, there are probably some decent reasons why. Either your team can't afford to take the time to build a task-oriented UI (because doing UI well is hard), or you just honestly don't know what your users need to do with their data. So you need to provide a way for users to work with their data when neither they nor you know what their task flows are yet, without a big engineering investment.

Don't know what your users need to do? Can't afford to find out? No shame in that, do what you gotta do. Don't care? Have no intention of ever finding out? Then... Well... Maybe a little shame is appropriate.


I know of one surefire way to destroy the morale of a group of engineers. That is to take away their ability to finish things. There are a number of root causes that can lead to this, but the proximate cause is usually a lack of vision, focus, or courage in the people responsible for setting priorities.

A lot of developers got into this gig because of an intrinsic motivation rooted in the feeling of accomplishment derived from completing the construction of a useful thing. When you tell these folks to solve a problem and let them pour their focus and effort into it, then stop them and say "actually, that's not so important, work on this other problem instead," you rob them of their intrinsic reward for good work.

Jerking a group of developers around like this is a good way to end up with sad and bitter employees. And that's the best case. More likely you'll end up with ones that feel betrayed and act belligerent, or even disloyal. Developers don't like thrash. They don't all react the same, but very few handle it well. One thing is for sure: it's no way to get people to do their best.