Followership

Last year I read a book that broke my brain, called "Leaders Eat Last". I had begun to think that a "management" role might be somewhere in my future, and I knew enough about the topic to realize that people tend to disrespect "managers" and respect "leaders". I thought I should try to learn the difference, and whether and why it matters.

What "Leaders Eat Last" is about is essentially the idea of "servant leadership." The book, through a number of stories, illustrations, and explanations, argues that what makes a leader out of a manager, or a person in any other role, is a dedication to putting the team ahead of individual ambitions and obligations. The book is full of examples of managers, executives, military officers, and other people in leadership roles, taking on what some might consider undignified tasks or unnecessary risks, to make sure that the team gets a win, that the more vulnerable members aren't left behind, etc.

As a person with no official authority, I have struggled to translate this into advice I can act on. But lately it occurred to me that if you take the consideration of official position or role out of the picture, what's left is very simply a good teammate, a good follower. This has proven a jagged pill to swallow. While I think I have good instincts, valuable perspective, and decent judgement... I don't think I have historically been a good follower.

Whether it was sowing dissent because I believed the manager was making bad choices, or keeping learning opportunities away from junior devs because I wanted to make sure things got done right, at the time I felt certain I was doing the right thing. In hindsight, and in the context of the idea of servant leadership, it's clear that at best I was being selfish, and at worst I was a seed of dysfunction hiding behind the guise of shipping product.

I am humbled. Clearly I have plenty of room to grow. But for once, I think I know what direction I need to grow in, and that is comforting. And it's strangely empowering to know that the path to greater impact and responsibility lies in taking on a bigger burden and building up others, rather than worrying about positioning and appearances and aggressively pursuing and defending correctness.

Constance

When I was first learning to program, one of the fundamental things that every introductory language course covered was the idea of a "constant". I never thought too much about them at first, because the idea seemed so simple and obvious. A thing that is constant does not change. So a constant variable is one that does not change. Easy. What's next?

In the courses, what came next was invariably something like functions or pointers or string interpolation. But I think new programmers might be better served by putting a follow-up question next instead.

What does it mean for a variable to "change"?

The C++ language, as an example, embraces the ambiguity of the question and tries to answer all interpretations. Consider a pointer to a value of some object type. You can change the address the pointer holds, or replace the pointed-to value wholesale, or mutate parts of the value in place. C++ allows you to lock down each of those kinds of change independently, even down to individual fields internal to the object.

C# walked away from this madness and made things very simple, but still not very intuitive for newbies. C# has two different kinds of invariable variables: "const" and "readonly". How can we understand them as easily as possible? We ask, "What does change mean?"

With regard to "readonly" and "const" in C#, change simply means being assigned a different value over time. And the distinction between readonly and const is the point in the variable's life at which this change becomes prohibited.

A readonly field is one whose value cannot be reassigned after it is initialized. Initialization can happen at the point of definition, or in a constructor. Local variables cannot be readonly. I haven't seen a reason for this except that the CLR doesn't have this feature, and the benefit to adding it at the language level was low.

public class A {
    readonly List<int> _one = new List<int> { 1, 2, 3 };
    readonly List<int> _two;
    
    public A() {
        _two = new List<int> { 4, 5, 6 };
    }
    
    public void DoSomething() {
        _one = new List<int>(); // won't compile
        _two = new List<int>(); // won't compile
    }
}

A const variable is one whose value cannot be reassigned after the program is compiled. This has a couple of natural consequences that are surprising unless you start from this definition. At compile time you cannot construct objects or assign arbitrary references; you can only assign compile-time constants: primitive literals, string literals, and null. And while local variables can be const, fields that are const are implicitly also made static. Why? Because it saves memory, and there's no reason for them not to be.

public class A { 
    const List<int> _one = new List<int> { 1, 2, 3 }; // won't compile
    const int _two = 1;
    
    public int DoSomething() {
        return A._two; // static!
    }
    
    public int DoSomethingElse() {
        const int three = 2;
        
        three = 3; // won't compile
        _two = 2; // won't compile
        
        return three;
    }
}

Why Should They Care?

There's a nice little post over at Very Fancy whose insight is worth your click: Users Care about Software Design

There are a lot of things that we do, as developers, that we tend to feel are very important. A lot of these activities are often viewed by "business people" as a necessary evil, a cost center, symptoms of the finicky nature of perfectionists, or even a distraction to be minimized.

I think in reality we developers, even experienced ones, tend to do a very poor job of defending our practices in terms that make sense to anyone else. I think it's an easy argument to make that, generally, carefully built software is more valuable than hastily built software. But software is also not valuable unless it is marketed, sold, delivered, explained, and supported. The tool for coordinating these things is called a business. And in business most questions are ones of degree. To misquote the apostle Paul, all things are possible, but not all things are profitable.

In a business, we are all supposed to be working toward a common goal. If you're lucky, that goal is not just to make money, but to actively improve the world for your customers. Even in that wonderful scenario, the software you deliver is meant to contribute to one of those goals, at least indirectly.

The unfortunate thing about indirect contributions is that they tend to be invisible to everyone who hasn't been involved in them. If you have a good manager, they might be able to make the connection for others on your behalf. But you won't always, or probably even usually, have a good manager. So in the end it's no one's responsibility but our own to defend the activities that we know are important, by explaining how they contribute to the common goals of the organization.

There's a line connecting what you do to why it's worth other people's money for you to do it. You can see it, implicitly. But sometimes you have to draw that line in nice fat marker for other people to see it.

 

Ch-ch-ch-changes

I took a new job at the beginning of August. I wasn't at my last gig very long, but I have only good things to say about them. There's a decent chance I'll end up back there at some point in my career. But I'm excited for the new job in all sorts of ways I'd have a hard time putting into words.

One thing I can talk about is how the daily worky-work is going to be different. My last job was spent almost entirely writing a node web server and a single-page web app, and all development was done in an Ubuntu VM. The new job will be like the one before the last one. I will be doing .NET web and service development, fully immersed in the Microsoft ecosystem.

Things I'm going to miss:

  • The power and concision of working with a *nix CLI. PowerShell just ain't the same.
  • OS-level package manager (apt-get)
  • Git. 'Nuff said.
  • The light footprint of Sublime Text
  • The compactness of server-side JavaScript. You can accomplish *so much* with so few lines, due to the absence of type system noise and the robust FOSS package ecosystem.
  • A culture of command-line build tools.

Things I'm not looking forward to:

  • Static types. For all their strengths, you can end up jumping through a lot of hoops and contorting your code into very unpleasant shapes in order to make the static type system happy. It only gets worse if a framework or library has decided to leverage it to solve a problem for which it's a poor fit.
  • The complexity of Visual Studio. So. Many. Features. It's really a mess in a lot of places.
  • TFS version control. After using Git exclusively for almost a year, going back to TFS feels like using a computer without a keyboard. Sad panda.

Things I'm not going to miss:

  • Ubuntu as a desktop environment. It looks and feels good, for a Linux. But that's an awfully low bar.
  • Linux video drivers. Just.... Ugh. What a mess.
  • The Unix Way: 20 ways to do anything. All of them involve text processing. 16 of them are kludgy, and the other 4 don't work consistently.

Things I'm looking forward to:

  • Static types. Apart from the typical discussion of the benefits of static typing, there are some things that are just quicker and simpler to express via the type system. In JavaScript, for those problems, you've got to write the engine yourself and define the declarative data structures it will process.
  • The power and convenience of Visual Studio. I've never encountered a dev tool quite like it. It can understand your code and give feedback like no other. And it has tons of really handy tools to do things for you that would be annoying to deal with by hand.
  • Applying the new perspective and wisdom I've gained from working in a dynamic language to write better static-typed code.
  • Trying to convince my boss to let me use F# for something, somehow.

How I Moved My Blog - Part 2

In the last post, I talked about why I decided to get a new domain and use a new service for my blog. More importantly, I also explained why I decided to move all the old content to the new domain and service. I left off with the unanswered question:

How do I move the content without breaking people's old links or destroying my "Google Juice"?

The short answer is 301 redirects. 301 is the HTTP status code for "Moved Permanently." It basically is a signal to anyone--whether human, browser, or web robot--who comes looking for a URL that it should go look in a specific other spot for it. And oh by the way that is the new place to always look for the content that used to be at the old URL, so just forget the old one and replace it with this new one in all of your records.

Put simply: A 301 is how you tell search engines to transfer the rep from one domain to another. It also conveniently pushes any browsers on to the correct URL if they happen to click an old link somewhere.

You can set up a domain-wide 301 in most any DNS service by way of a URL record, but it generally only allows you to take a particular domain or subdomain and send it straight to an unadorned other domain or subdomain. Doing the redirects this way would mean that any link to a specific page of my old blog would go to my *landing* page instead of the new home of that specific content. Which would effectively break old links, and would destroy the ranking position of any individual piece of content. So exactly what I wanted to avoid.

Next I went looking at whether the blogging services themselves could do the job for me. It turns out that Blogger does have a way to configure 301s, but only for individual URLs, and only within the exact same domain and sub-domain. Disappoint.

Squarespace, as it turns out, is much more amenable. They will let you configure a page to redirect to an external URL. They also feature customizable blog post URLs. This hatched a hare-brained scheme in my mind: Move Turbulent Intellect, whole and complete, over to Squarespace, and then redirect everything to the other blog.

  1. Pay for a second Squarespace blog.
  2. Import the old content to the second Squarespace blog.
  3. Ensure all new URLs are identical to the old ones.
  4. Reassign the old domain from my Blogger blog to my second Squarespace blog.
  5. Add redirects for each of the imported blog posts in the Squarespace TurbulentIntellect blog over to the new new blog.

Alas, it was not to be. Blogger post URLs get a .html added to the end of the slug, and Squarespace does not support dots in its custom URLs.

So now I was back to the option I had been hoping through all this to avoid: setting up a server somewhere just to do the redirecting.  I've never set up a public web server before. I've barely set up private web servers, to be honest. And those I did were IIS. So I took my first tremulous steps into these waters and signed up for a Digital Ocean account.

I went with Digital Ocean because friends recommended it, and because they have a $5/mo tier where I get a tiny little virtual server that would provide plenty of power and very little learning overhead. I went with an empty Ubuntu server, and installed only just what I needed. Fortunately, my most recent gig afforded me a chance to get comfortable with rudimentary Linux administration and SSH. This would have been painful without that as I wouldn't have known where to start, despite my needs and the tasks being very simple.

I grabbed the Ubuntu dev VM I had lying around from my node.js experiments, used the RSA key I set up for GitHub, SSH'ed into my shiny new virtual server, and started walking through Digital Ocean's great tutorials on configuring an Apache web host and installing the mod_rewrite module I knew I would need for regex-based dynamic redirects.

The file where you set up redirects is called .htaccess. It took some playing to figure out what I needed in order to redirect with and without the www subdomain, and how to make sure I correctly captured and converted the slugs I cared about. In the end I had a handful of lines, shown below, that explicitly redirect a few URLs that don't follow any predictable pattern, and then use a regex to handle the vast majority of posts.
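
Roughly speaking, the rules looked like the sketch below. (The one-off slug and the /blog/ path on the new domain are illustrative stand-ins here, not an exact copy of my file.)

RewriteEngine On

# A one-off page that doesn't follow the Blogger post pattern
# (this slug is a placeholder for illustration)
RewriteRule ^p/resume\.html$ http://www.whilenotdeadlearn.com/resume [R=301,L]

# The landing page itself
RewriteRule ^$ http://www.whilenotdeadlearn.com/ [R=301,L]

# The general case: Blogger posts live at /YYYY/MM/slug.html,
# so capture the slug, drop the .html, and send it to the new blog
RewriteRule ^[0-9]{4}/[0-9]{2}/(.+)\.html$ http://www.whilenotdeadlearn.com/blog/$1 [R=301,L]

Since both the bare and www forms of the old domain point at the same server, the same rules cover links with and without the www subdomain.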

With this in place, all that was left was to hook everything up in the right order. Order turns out to be sort of important if you want to make sure Google doesn't think you're content scraping. Step by step, I:

  1. Reconfigured my DNS to point turbulentintellect.com at my Digital Ocean server's IP.
  2. Took down my stub blog at Squarespace and put the imported blog in its place.
  3. Turned on the new blog.
  4. Tested out all the URL permutations I could think of, plus a few patterned ones just for good measure.

And here you are!


How I Moved My Blog - Part 1

I recently moved my blog. I moved it between blogging service providers, and between domains. This turned out to be a fair amount of work and decision-making. I thought that other people who are considering a similar transition might benefit from seeing how I dealt with it.

I considered leaving the old content at the old service and domain, especially because it was at Blogger, which was free. But I had grown dissatisfied over the years with Blogger's available templates, and had never been super happy with their composition tools. And I never really was satisfied with the old domain. It was good enough, but I was always on the lookout for something that really sang to me. What's more, I didn't want to split up my blogging history over such an arbitrary line in the sand.

Moving services can be a pain, but it seems like most modern services and tools can handle exporting and importing, especially from an 800lb gorilla like Blogger. The domain change is a big deal, though, for a couple of reasons. Firstly, there are a few people and sites that have linked to me over the years. While I'm no Jeff Atwood or Rands, I didn't want to "break the web" in even a small way, if I could avoid it. Secondly, you can really screw yourself by way of the search engines, if you're not careful in how you copy or move content.

I ruled out just duplicating the content fairly early on. For starters, I didn't want to split my traffic. (Also a reason not to just leave the old stuff where it was and post only new stuff to the new domain.) It would dilute search rankings, and it would give anyone who ends up at the old domain a potential dead end that might cause them never to find my new stuff. And most importantly, if you just duplicate the content, you're all but sure to get one or the other domain flagged as a content-scraper and de-listed from the search engines.

So that left moving. Fortunately Squarespace has a convenient tool that scrapes posts and comments from Blogger's RSS feeds, cleans them up, and plops them right into a fresh Squarespace blog. So that grunt work was dealt with.

The next step was figuring out how to move it without suffering undesired consequences. How do I move the content without breaking people's old links or destroying my "Google Juice"?

Moving

I'm happy to announce that I will be moving my blog to a new domain, and to Squarespace, sometime in the next week. The RSS may or may not continue to work, depending on how your reader handles the redirect. Blogger's post search and monthly list pages will stop working. But the main page, resume, and all blog posts should permanently redirect to the corresponding pages at the new domain.

The new blog will be at whilenotdeadlearn.com, and everything is already set up over there except for the content transfer, if you want to go check it out. And just to be safe, I'd recommend you subscribe to the new RSS feed at whilenotdeadlearn.com/blog?format=rss, even if the redirect does end up working for you.

Bugs Will Always Exist

Recently I had a Twitter exchange with a colleague I respect, which included the following statements. (My own interjection is elided for dramatic effect.)


Based on my shallow familiarity with the math of programming and the physics of computation, I expect that bugs are not only physically but mathematically inevitable in any real software system. I can only make my argument from a statistical standpoint, but I think this suffices.

Consider:

  1. Turing's halting problem (and, more generally, Rice's theorem) tells us there is no general, automated way to prove the correctness of an arbitrary program; in practice, that leaves exhaustively testing over its input space.
  2. The behavior of most production software is determined not only by direct, immediate inputs, but also by the history of those inputs via database storage, and by temporal, environmental factors via connected systems, communication channels, and local hardware. This makes the "true" input to a software system at a point in time immensely complex and nuanced, in the mathematical terms required by the Halting Problem. If you view a program as a single, albeit complex, function, it's not far off to say that its input and output can be expressed in terms of the current and next "state of the world". This terminology is actually used, in some specialties.
  3. This is especially relevant when you consider that this essentially makes most modern software into a multivariate, discrete feedback system. Such systems are founded on assumptions of noise, error, divergence, and instability. Controlling these fringes in complex systems becomes fiendishly difficult very quickly.
  4. In this age of the ubiquitous Internet and the blossoming Internet of Things the number of software systems connected to any given piece of software continues to compound, ballooning the complexity of the inputs and the outputs.

Taking all of this as premise, the sheer likelihood of finding, let alone resolving, every significant bug vanishes into the infinitesimal.

It is a statistical certainty: bugs will always exist.

But this is not a shield. This is a mantra. It exhorts me always to approach my work with humility and diligence. My software can only be stronger as a result.

Recurring Revenue

I've spent a number of years in the IT and software services market. Everywhere I've worked, there have been executives talking in frenzied tones about the holy grail of recurring revenue. In the minds of those companies that haven't yet sipped that sacred solution, it almost invariably takes the form of support contracts. Occasionally a brash individual even dares to hope for training revenue.

The thing about recurring revenue (all revenue, really) is that it represents a return on investment. It doesn't come from the ether. Some decision maker took some money out of the company's pocket and put it to work somewhere. The revenue is the dividend on that investment.

Revenue happens when other people give the company money. They, likewise, are hoping to make an investment that will pay them dividends. When they pay you for a product or a service, generally it's because you or your fans have convinced them that you're going to help them kick butt in some way.

This means that when you spend money to build a product or a service offering you are--in a very real way--investing in your customers. Selling a product or service makes a statement of faith and commitment to the customer. It says,
 

We have faith that you can do awesome things, and we are confident we can build something to help you do it. We only ask that you share with us a little bit of what it earns or will earn for you.


So with that in mind, consider the company that looks past the sales and subscriptions, with dollars in their eyes, lusting for the support contracts and the training fees. What *exactly* is this company investing in?

The answer is left as an exercise for the reader.

Make Things Small

If I were given the opportunity to speak a single sentence of advice to all the budding programmers in the world, I would need only three words: "Make things small."

Thankfully, here in this space I can also elaborate.

All other things being equal, code ought to have a fractal structure. That is, the general shape of the code should be scale independent. It should look similar regardless of the level at which you examine it.

Specifically, the code within any explicit logical boundary you might choose should consist of a bundle of one to several units at the next lower level.

  • An application should consist of one to several sub-systems.
  • A sub-system should consist of one to several namespaces.
  • A namespace should consist of one to several modules.*
  • A module* should consist of one to several functions.
  • A function should consist of one to several expressions.
  • An expression should consist of one to several sub-expressions.
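
To make that concrete at the smallest scales, here is a contrived C# sketch (the names are invented purely for illustration): a module consisting of a few small functions, each consisting of one to several expressions.

using System.Collections.Generic;
using System.Linq;

// A contrived module: one-to-several small functions,
// each made of one-to-several expressions.
public class InvoiceTotals {
    public decimal GrandTotal(IEnumerable<LineItem> items, decimal taxRate) {
        var subtotal = Subtotal(items);
        return subtotal + Tax(subtotal, taxRate);
    }
    
    static decimal Subtotal(IEnumerable<LineItem> items) {
        return items.Sum(i => i.UnitPrice * i.Quantity);
    }
    
    static decimal Tax(decimal subtotal, decimal taxRate) {
        return subtotal * taxRate;
    }
}

public class LineItem {
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

Zoom out and the same shape should repeat: a namespace made of a few such modules, a sub-system made of a few such namespaces.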

I would assert that code structured in this way is optimized for understanding, testing, and evolution. And here my qualifier becomes relevant. All other things are not always equal. Sometimes it is necessary to optimize, or at least account for, other aspects.

Here are some of the other things that may not be equal from one boundary to the next:

  • Time constraints
  • Resource (memory/storage/processing) constraints
  • Consumer expectations
  • Dependency APIs

When fractal structure conflicts with these constraints, then you have found the reason for compromise and pragmatism.

* Module roughly equates to a class, datatype, prototype, or object (or just a module, for languages that have them).

JavaScript Sucks As Much As You Let It

A poem in free verse.

JavaScript is a very flawed programming language.

So are the overwhelming majority of other programming languages.

A large portion of JavaScript developers don't understand or care what makes a language suck or not, nor what makes code well-crafted or not. They just code until they have something that's not unusable, then stop.

When they write it, JavaScript sucks.

A second large portion of JavaScript developers believe JavaScript sucks. They try to expend as little effort as possible, writing as little of it as possible, as quickly as possible.

When they write it, JavaScript sucks.

A third large portion of JavaScript developers believe JavaScript sucks. They plaster over it with foreign idioms from languages they prefer, then try to forget what's underneath.

When they write it, JavaScript sucks.

Then there are those of us who embrace JavaScript as a tool like other languages, whose results lie in respect and artful use. We find natural idioms that amplify the "good parts" and eschew the "bad parts" of the language.

When we write it, nobody cares whether JavaScript sucks.

You Are Not an Island

Yep, today I'm going to bang that drum again. It's been a while. And as I sit on Twitter and daily watch the struggles of those who want to learn more than their education and immediate professional environment can offer, I think we are all in need of a reminder.

Programming seems to me to have some strong similarities to medicine and law. In these professions, schooling is understood to be only one part of the necessary education. There is a formal science (or with law, a formal-ish system) underneath, but the work being done on top of that substrate is often creative, intuitive, or even artistic. The skills and experience required to work that magic aren't learned in the classroom or from a book, but rather from a combination of doing the work, and of guidance from those who've walked the path before.

But programming as a pursuit still seems to lack something that law and medicine have: a fairly consistent pattern of education and growth as part of a group. There is essentially no such thing as a physician autodidact. And though it seems possible to be so in the realm of law, it also seems essentially non-existent.

In the realm of programming, autodidacticism is not only common but admired and encouraged. Some glorify it as the only way to produce the best of the best. Depending on who you ask, formal education is viewed alternately as a way to shore up foundations and push beyond personal walls, or as a waste of time and money that closes more mental horizons than it opens.

I find this terribly unfortunate. While it's true that many people feel they "learn best this way", and some of them probably do, there are inarguable benefits to a more planned education, group study, and most of all to true mentorship. Planned education helps to set a smooth ramp of learning. Group study provides an exchange of ideas and a moral support network. And true mentorship provides guidance tuned to where you are, from someone who has already walked the path.

But this kind of learning culture is extremely rare to find in the programming world. There is a spirit of independence and individualism in the programming community at large that I think not only suppresses diversity, but also hinders our growth as individual practitioners and holds back the state of the art in the professional community.

Yes, this personal trial by fire highlights strong individuals and purges dross. But is it worth it to lose so many with potential, and so many that merely require a different sort of path? The fewer quality programmers there are, and the more homogenous their background, the more limited the potential of the craft as a whole.

No developer should be an island. The webs of knowledge we see sprouting up with Twitter and developer conferences like CodeMash or That Conference are a great start, but they're not enough. As a community we need to find a better way to bring up our amateurs into professionals. In the meantime, as individuals we can only do ourselves a favor by seeking out mentorship, finding trusted colleagues, and establishing shared growth with them.

Bespokists vs Gluers

Another in a long line of posts eschewing JavaScript frameworks has hit the web and the community response has been predictably dichotomous.

Being a programmer, I of course have an opinion on this. But I try to retain a bit of nuance when looking at questions like this.

I think it's only fair that we acknowledge that the Moot folks are being nuanced as well. I mean, they're not advocating total vanilla JS where everything is bespoke, artisanal, hand-crafted code. They do use libraries. They're just avoiding frameworks. But the argument can often take on a similar form. To the average developer someone eschewing frameworks appears very close to a Luddite.

The arguments against this position are fairly simple and obvious. Why take on extra work when someone else has done it for you? Why reinvent the wheel? Why waste the effort, experience, and wisdom that are baked into a framework? Every discipline has tools, it's foolish to insist on working with your hands when a power tool gets the job done faster, and often better. Et cetera.

But again, the position isn't to avoid all prepackaged code. It's to avoid a particular type of abstraction that takes away responsibility from the developer for a whole class of design decisions. Choosing a framework makes an opinionated statement, whether intentionally or not, about what the structure of the solution should look like. The problem, as identified by Moot and their compatriots, is that this delegates the details of that statement to a third party, relinquishing a large degree of design control that could have a dramatic impact on the evolution of your application.

Realistically speaking, eschewing all frameworks means you have a lot more to do to reach the same point as a team that starts off with a framework. The difference is that you'll be able to tailor your code to your needs in nearly every aspect (with the qualifier "given enough time" often going unstated). The framework users on the other hand will be taking on a lot of noise that can very easily obscure the core points of the solution they are building for the problem they are tackling. It can make it harder to look at the codebase and pick out what exactly is being accomplished by all the code, because most of the hand-written code is gluing different things together. The thing that is most prominent is not the solution, but rather the structures mandated by the frameworks.

These are all great points. The problem I see with it is that the trade-off isn't usually truly between slow, steady, and fitted versus rapid, noisy, and constrained, as it's represented to be. Huge factors in how these different strategies play out are the time available and the team's skill level. Frameworks may create noise, and impedance mismatches, and encourage you to fit your code into structures that weren't meant to fit your specific problems (e.g. not every screen benefits from MVVM, as opposed to MVC, or Passive View). But having no frameworks means you need to have the experience and wisdom available on your team to know how to put together the patterns you need, and when to use what. Not to mention that building infrastructure to streamline these things is not exactly a trivial task either.

There's a place in the world for folks who want to lovingly hand-craft every line of code that they can. But it's likely always going to be a niche. These are often the folks who explore and push the boundaries, finding new economies and elegant simplicity, or even new patterns. But for the rest of the developers, frameworks are tools. If they help deliver features to customers, then they aren't going anywhere, and there's nothing wrong with that. Realistically, no one thinks that framework X is the alpha and omega of solutions to problem Y. This too will pass and be improved upon.

The best and most experienced engineers will, absent other constraints, likely choose to start somewhere in the middle.

Opinions

Opinionated frameworks are pretty popular right now, with good reason. They take a set of good, proven decisions, and just make them for you. Take the dilemma of choice and the risk of building naive infrastructure or inconsistent conventions right out of your hands. An opinionated framework doesn't (necessarily) claim to be doing things "the right way", but any advocate will tell you that the reason they're so great is that they do things in a "good way", and make it hard to screw that up. It sounds like a great pragmatic compromise.

Unopinionated frameworks are also fairly popular right now, with different folks, and for different reasons. Sometimes, people get sick of the golden handcuffs of an opinionated framework that didn't imagine you'd need to do X and has no way to gracefully take advantage of any of the offered functionality to do so. Or you start to realize more and more that the "happy path" the framework sets you on isn't really so happy, and tweaking the assumptions would mean you'd write more testable code, and less of it. But you can't, because that violates the opinions of the framework, and so you're stuck. So next time, you vow to use an unopinionated framework that gives you a nice solid substrate to build on. Good abstractions over the layer below, lots of conveniences for otherwise finicky code, and the freedom to build whatever patterns and infrastructure you want.

I have to admit that for the past couple of years I've been in the "unopinionated" camp. I had some bad experiences with opinionated frameworks making it difficult to take advantage of a mature IoC framework like Autofac. Then I fell victim to one that was actively standing in the way of using a slightly different structural design pattern. When I'd had my fill of these frustrations, I ran straight into the waiting arms of unopinionated frameworks.

And I bought the sales pitch on those too. Isn't it great to be able to choose whatever pattern you want? Isn't it wonderful to be able to build all the infrastructure you need to streamline your dev process? Isn't it rapturous to be able to separate responsibilities into as many classes and layers as you need to make things testable? Well, yes it is.... and yet, it also kind of sucks.

Emergent architecture is great. But having to roll it all from scratch really, really sucks. It's a burden and a weight on the team. Especially when you have inexperienced devs. The last thing you want to happen is for features to be stuck waiting for someone to have time to build the right infrastructure. And only slightly more desirable is to pile up masses of code that could be cleaner and simpler if only the infrastructure or automation was in place in time.

So the shine has come off the penny a bit for me. When you use an unopinionated framework, even if you have your own opinions, there's a good chance you're going to get caught in a morass of infrastructure work that will bottleneck your teammates' efforts. Worse, it adds to the complexity of your application, piling up layer on layer of non-feature code that has to be tested, fixed, and evolved over time.

But I still love unopinionated frameworks. It's just not for the reasons they are usually trying to sell you. I don't really want to trade in one set of handy conveniences and annoying constraints for the freedom of infinite choice burdened by a whole lot of finicky infrastructure backlog. Depending on your situation, this might be an okay trade-off, but it might also be really unwise. No, instead, what I love about unopinionated frameworks is that they give you a sandbox in which to experiment with better opinions.

Whether that means building something from scratch, or using add-ons or plugins that the community has already put together, the point is to think, play, experiment, and build something better. Not necessarily to build something different every time, or go back to cozy NIH syndrome.

The truth is, opinionated frameworks are great. They really do offer great efficiencies when you buy in and follow the happy path as best you can. When the going gets tough and you realize the tool isn't cutting it anymore, it's time to look for different opinions. So see what else is out there. See what new opinions people are forming in modular fashion on top of their unopinionated framework of choice. Maybe even build your own opinionated framework around those better opinions.

Because in the end it's not the presence or absence of opinions that matters. Heck it's not even really the quality of the opinions. It's the trajectory of the quality over time. And whether you're clutching tightly to an opinionated framework, or emerging every aspect of every project's infrastructure, you're probably spending too much time writing code and not enough time thinking about how that code could be better, simpler, and easier for your team to write and maintain.

But what do I know? That's just, like, my opinion, man.

How do you know what the right tool is?

Bob: "Use the right tool for the job."
Alice: "How do you know what the right tool is?"

Most programmers aren't typically building things from the bare ground up. We're also not building castles in the sky. We use libraries, frameworks, and platforms to establish a foundation on which we build a thing. These are all tools, and unfortunately the programming profession's standard toolbox contains mostly conceptual tools. Beyond that, nearly everything is subject to replacement or customization. Whether it be a UI pattern framework, or an ORM. So how do you choose?

The advice to "use the right tool for the job" not constructive in terms of actually finding the right tool for the job. It's a warning to consider the possibility that you might not be using the right tool right now. And in my experience it's almost uniformly used to warn people away from over-engineering things or building custom infrastructure code.

So again, how do you choose? Alas, like so many things in this world, choosing well requires experience. Not just experience with the tools in play, but with the problem space you're attacking, and the constraints around the system. This disqualifies most developers from choosing the "right tool" up front, in all but a narrow band of problems to which they've had repeat exposure.

It's not always about "speed" or "simplicity". Even those metrics can be seen in different ways. Is it more important for structures to be consistent, or to eschew noise? What does it mean to be "fast" under a restrictive deadline that's a month away? How about a year? Are your developers more comfortable customizing infrastructure code to reduce overhead, or following a plan to reduce risky decision-making?

Every tool fits these situations differently, and it almost never has anything to do with whether it uses MVC, MVVM, or something home-rolled. Evaluating tools effectively requires you to at least think about these questions, as well as about what the framework does or doesn't do or what design decisions it leaves on your plate. Minimally you need to consider not just whether the tool can do the job you need, but how it does it, as well as how well it "fits your hand".

Most tools I've encountered fall into one of the buckets below, which I find very helpful in "choosing the right tool". The important part is to sit down and really think about both your needs, and the tool's trade-offs.

  • Good for modest jobs that have to be done quick, trading design and decision effort for speed out of the gate.
  • Good for long-lived software that will see many iterations of support, maintenance, and evolution, where incremental speed is traded away for flexibility and stability.
  • Good for unfamiliar territory, providing an easy on-ramp at the expense of graceful handling of edge-conditions.
  • Assume a particular structural pattern, taking a lot of boilerplate and/or automation work off your hands, but locking you into that structure even when it's an ill fit.
  • Good for experimenting with new patterns or varying existing ones, trading ready-made automation and assistance for a firm but unopinionated substrate.
  • Investments that pay dividends on team familiarity and expertise, trading a graceful learning curve for raw configurable power.

Every tool is unique, trading away these conveniences for those. Sometimes you'll know what this means, but often you won't. You'll never really know before you use a tool if those trade-offs are going to help you more than hurt you. Just like you don't really know what will be the hardest challenges of a project until you get to the end and look back. Get comfortable with this fact. Don't let it chain you to a single tool just because it's a known quantity or you're comfortable with it. Odds are good you're going to run into a job sooner or later where that's not the biggest concern.

So experiment, learn, and reflect on both the tools and your own knowledge. It'll help you make better choices, and not just for tools.


Year in Review

As a programmer who deals with the typical stress of imposter syndrome and the tech skill treadmill, I've long had an ever-present guilt about how I spend my free time. It used to be that at the end of the year, I would look back at what I spent my waking time on, and the next thing after work was usually video games, or a combination of games & TV, by a landslide. This was discouraging, because I couldn't avoid thinking about what I was missing out on, professionally, because that time wasn't spent on experiments or projects or freelancing or business ideas.

That changed this year. Don't get me wrong, I still struggle with choosing between "relax" or "be productive" when I find downtime. I still get discouraged when I think about just how long it will take me to get good at something new, or build the app I want to sell. But in reflecting on this year, I don't have regrets about how I spent the vast majority of my waking non-work time.

So what's different? What did I change?

Well, in 2012, my wife and I had a daughter. And in 2013, the activity I spent the most waking non-work time doing was, without a hint of a contest, being a dad.

That's a skill I'll never have to worry about letting get rusty, and it's a pastime that I'll never wish I'd done less of. It's something that I don't even need to think about to know that I'm making the right choice with my time, every time.

And as we sit here on December 31, 2013, on tenterhooks waiting for the arrival of our second child literally any hour now, I've never been more certain that my life is on the right track.

Happy New Year, everyone!