The Sintered Mind

Update: This post was originally titled "The Annealed Mind", but it turns out that "annealing" is not what I thought it was. What I was thinking of is actually something more like "sintering". I have replaced the words throughout the post and title.

If you want to learn something, take on a new challenge, or just refine your approach to a common challenge, it's best to wipe your slate first. Regardless of your level of knowledge and experience, intentionally bring nothing of your own to the table, so that what's already in your head doesn't drown out subtle information you're not expecting. This is a very simple description of the Zen Buddhist idea called "Beginner's Mind."

What is the opposite of Beginner's Mind? It might be tempting to reverse the analogy and think of a full slate. You bring with you a reference tome of such exhaustive completeness that you feel you barely need to look at the problem before thumbing through it to find the answer. That's certainly evocative of at least one problem with expertise.

I think another illustrative way to consider both of these states is to think of the mind as a landscape, and the wisdom of experience as being constructed on top of it. As with physical buildings, these fixtures need to be maintained: occasionally scoured clean, worn bits replaced, obsolete pieces upgraded, buildings removed, replaced, or added. If this doesn't happen the landscape will subsume the buildings. Vegetation encroaches, the soil erodes here and heaves there. Constructions slide, collapse, or are swallowed up. Over time, even if they are internally functional, on the outside one might mistake them for part of the land, rather than human construction.

This is dangerous. Imagine a mind where assumptions and knowledge and solutions are taken so much for granted that they have been sintered into the surface of the mind. The thinker no longer even recognizes that they created them. They are just there, and true. Part of the substrate.

All learning requires unlearning. But how can you dismantle an idea that you no longer recognize as an idea?

Wipe your slate. Be your own mental groundskeeper. However you want to think of it: remember that your experience is just ideas. Ideas are and should be soft and temporary things that grow, and change, and are replaced.

Team Wanted

I dislike working alone. This is despite the fact that I'm an introvert and really do appreciate my quiet and focus.

Part of this is that I really enjoy the camaraderie of having peers close at hand. From a personal standpoint, this is better for my mental well-being, and my morale regarding the job in general.

The professional downsides of working alone are also significant. A team is a support structure. When you can't bring your A-game, they can compensate. Ironically, a team can also *facilitate* focus. When there are other folks involved, you don't have to be the one responding to every concern from the other people and teams you coordinate with.

More important than that, though, is feedback. A team keeps you honest, and keeps you moving forward. It's easy to stagnate working alone, only pushing forward when something bugs *you*. But a team is your first customer. You have an obligation and a commitment to do good work every day, because the team needs it. With a team, what you do matters between releases and under the surface, where the customer will never see.

A team makes you better than you need to be by yourself.

Take One Step

Do a little bit.

Just a little. You don't have to stay up late. Just commit an empty file. Fix a typo. Write one test. Create a branch.

Do one, small thing to change your state. Lay the next stepping stone in the path. You don't need to finish, as long as what you leave there points the direction you want to go.

With side projects often comes guilt. Many days you're tempted to do nothing because what you have time for won't be "enough". But you don't need to tackle big things every day to make progress. "Enough" velocity is anything that you have time for.

So just do a little, tiny thing. One baby step. Then go to sleep. Take another tomorrow.

Metric Tools

Metrics are useful little tools.

Metrics might not seem useful when you are working alone. When you're working alone, you know how you're doing. You have your hands on the work, all the time. Looking back at the day, or the week, you can feel whether you made progress. Looking ahead at your goal, you can feel whether that progress was enough, or if something needs to change.

Now throw a team into the mix. Your teammates are doing work you never touch. Even if you are the lead, you don't touch all of it. Not if you're doing it right, anyway. And if you're the project manager, well, you might not even touch all of your own team's work every day. Your work also involves other teams, whose people and work you aren't touching either.

So you start measuring. Story points, features, bugs opened, bugs closed, test count, lines covered, velocity, backlog size, ideal days, person hours, burndown. The numbers start rolling in. You touch them, and you get a feel for things again. Numbers are moving up and down and you can feel whether they're moving in the right direction and fast enough.

But you're still not touching the work. You're touching the metrics: a set of narrow, fuzzy, possibly staged views of certain aspects of the work. You're touching a tool. Tools wear. Tools break. Tools become obsolete. Sometimes, tools just get in the way.

If the metrics are helping you ship, then the tool is doing its job. But the metrics can't tell you how good the metrics are. Never forget, your job is not to deliver the metrics. Your job is to deliver the deliverable. Your job is to ship the product.

Wash the Dishes

If you've ever worked in an office with a kitchenette, you've witnessed it. Most people are too busy to empty the dishwasher, or load it, so the dirty dishes sit in the sink. Or people are too busy to find and add the detergent, so the dirty dishes sit in the dishwasher. But the dishes do get washed--eventually. There's someone who comes by periodically and checks the situation, and then takes care of it.

Sometimes, this is just the most high-strung or passive-aggressive person in the office. And they'll make sure you hear about it. If they're not that, you probably won't hear about it. You'll just be happy to have clean dishes washed by the flatware elves overnight. With the dishwasher always ready to accept your dirty dishes, and the confidence they'll get washed, you might even be coaxed to take your dish and that glass sitting in the sink, and put them in the washer.

If you pay attention you might notice that the elves are a person. And they often happen to be a leader of some sort. Maybe it's the official or unofficial morale officer. The event planner. The recruiter of volunteers. The person who says "let's go out for lunch!" Or maybe it's the boss. Or the owner.

This is a thing that happens in teams. The dishwasher phenomenon is just the physical manifestation. The underlying pattern is that of "servant leadership." That's an interesting phrase. It can be read both ways. A servant leader is a leader who leads by serving the team's needs. Or a servant leader is a team member who, by serving the team's needs, becomes a de facto leader.

There are a host of activities that are necessary to keep a team happy, and their productivity flowing. Maintaining shared physical spaces is one. Maintaining shared digital spaces is another. Filling in and normalizing the project wiki, adding comments to obscure code incantations, removing old cruft, rounding out the test suite, or just your run-of-the-mill refactoring. These activities are what keep a project sane and functional.

As a lead, if no one is doing this stuff, it's your job to start. If you lead by example people will be a lot more willing to follow that example when asked. Maybe you'll even get lucky and some will pick up the slack on their own initiative. As a lowly team member, even if your skilled contributions are humble, you can find a place of value, impact, even respect, by getting the mess out of everyone else's way.

Do you want to make things better? Do you want to lead? Do you want to have an impact? Start with washing the dishes.

Wait For It... (Or Don't)

If you wait long enough, someone will build that app or service you so desperately wish existed. They will gladly sell you a license or membership, and you can stop wishing.

Why wait? Solve your own problem. Sell the license or membership to those other folks, instead. At minimum you get to stop wishing sooner, since you get to beta test your solution. Probably you also learn a little bit. And at best, you make some money. You can use it to fund your next solution. Or maybe just have a bit more personal security. It's all upside.

The Role of a Database

I have worked on a number of what I would call database-centric applications. A kind way to put this is that these are applications that take full advantage of the features of their chosen DBMS to implement as much of the system as possible. The goal of this is to maintain a monolithic system as long as possible, because one system means fewer boundaries. This means fewer degrees of freedom and fewer opportunities for breakage. It also means less integration and change-process overhead.

The premise can be attractive when you have a homogeneous and overstretched workforce. We need something to happen every 5 minutes, but we don't have anyone who knows how to write a console app or a Windows service or a Linux daemon. Fortunately SQL Server has a job system for running periodic T-SQL tasks, so no need to learn anything or hire anyone.
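
For contrast, the console-app version of that every-5-minutes task is not much code. Here's a minimal sketch (the DoWork body is a hypothetical stand-in for whatever the SQL job would do; a real version would add logging, error handling, and graceful shutdown):

using System;
using System.Threading;

class PeriodicTask {
    static void Main() {
        // Run the task, then sleep 5 minutes, forever.
        while (true) {
            DoWork();
            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }

    // Hypothetical stand-in for whatever the SQL job would have done.
    static void DoWork() {
        Console.WriteLine("Ran at " + DateTime.Now);
    }
}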

As someone who is comfortable writing console apps and Windows services, I am decidedly uncomfortable writing periodic tasks as SQL jobs unless the goal is solely to do database-y things like project normalized data into a reporting schema or stage data for an ETL pickup. I said as much to a coworker not too long ago and he challenged me on why I wanted nothing but data in the database, when the tools were capable of so much more. Schema can model relationships, constraints, and invariants. Tasks can be scheduled. Stored procedures can encapsulate implementation details. Triggers can be used for eventing.

I had to think a bit in order to put it in terms that weren't dogmatic. I had to consider why I dislike programming inside the database. Here are some of my reasons:

  • The languages and libraries are spartan and hard to work with.
  • While not inevitable, databases are commonly viewed as monolithic from the inside. Subsystem design and responsibility segregation have huge upsides, but are exceedingly rare inside the bubble.
  • Testing and deployment tooling is clunky and slow where it exists at all.
  • External controls are generally unavailable. Everything is an admin interaction, or occasionally an exceedingly clunky script in a proprietary language.

I think you'd be hard-pressed to find someone comfortable both in and out of the database who disagrees with most of these, or who couldn't add a bullet of their own to the list.

What this all adds up to is that database-centric processes and applications are harder to build, harder to change, and harder to migrate, than the equivalents built outside the database. This is especially true if the change being considered is to migrate logic out of the database engine. Whereas if you started off building outside the database, the biggest hurdle to moving something into the database, say for performance reasons, is whether the crucial features exist at all inside the database bubble.

The universal constant of programming is that change happens. If the tools and the skills are available, the tools that better support change should be preferred. And that is why I prefer my databases not to multi-task. I want them to store data. I want them to store it in ways that are efficient to update and query as needed. And then I want them to get out of the way.

Blog Stubbing and Refactoring

Sometimes I have an idea for a blog post, but it's not a good time to stub it out. Maybe I don't have my notebook, or I just don't have time to do more than jot a placeholder. Sometimes in those cases I will open a new blog entry and put in a title that summarizes the idea. Later I will come back and start writing under that title, trying to retrace my thoughts and produce the post that was waiting to be born.

More often than not in these cases I ramble for several paragraphs and never quite manage to recapture the sentiment that inspired the title. Eventually I close the editor in frustration, feeling like I failed my muse. I feel like the title was the true name of my post, and I have failed to honor it.

This is probably not a healthy or effective way to write.

I don't follow this cycle with code. Often I will run up against a blob of functionality that I know I don't want inline. I have a rough idea of what the responsibility of that blob is and what parameters are required, so I stub things out. I make an interface, give it a function with the expected signature, then continue on using it, knowing I will go back to implement later.
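
For illustration, here's a hedged sketch of the kind of stub I mean. The names and signature are hypothetical first guesses, which is exactly the point: they exist to be refactored.

using System;

// Hypothetical payload type, just for illustration.
public class Report { }

// First guess at the responsibility and parameters; I fully expect
// to rename it and reshape the signature when I come back.
public interface IReportFormatter {
    string Format(Report report);
}

public class StubReportFormatter : IReportFormatter {
    public string Format(Report report) {
        // Stub: implement once the surrounding code tells me what's needed.
        throw new NotImplementedException();
    }
}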

But often when I come back around to flesh out the stub it turns out I was wrong about what exactly it would do, or--more importantly--what context would be required to accomplish it. But in these cases, I don't just keep laboring under the ill-informed constraints I set before. Instead, I refactor. I change the name. I add the parameters I need. I update the tests for the call site, and the code there as well.

In programming we name the thing to represent what it does. We don't build the thing to fit the name. In programming it is not possible to accomplish something without the necessary preconditions, so if they aren't present we adjust course.

In programming, we must rework.

So, too, in writing.

Culture Is a Garden

Culture is a garden, and your people are gardeners. Every decision, every meeting, every email, every priority, every exception--each is an act of gardening. Every line of code written, or removed. Every test. Every tool. Every deployment. Water this, yank that, prune here, fertilize there. Some idea gets planted or trod down. Some relationship is nurtured or ignored.

You can't create culture, or design it, or decree it. Even as a leader or manager, you can only tend your plot. Occasionally, you can add or remove a gardener, and this can have a big impact. But in the end the culture you grow depends fully and only on what gets planted and nurtured on a daily basis, and what gets stepped on or neglected.

Execution

A thing I have learned over the past couple of years is that it doesn't matter how cool the technical problems are, or how awesome your solutions and processes are, if the business problems and the people problems aren't being addressed. The best tech, under genius orchestration, produces a mere whimper in the dark if the people can't rally behind it or the business can't focus and direct it.

I used to look a bit askance at the axiom that "ideas are cheap, execution is everything" until I realized just how much is encompassed in the term "execution". I always just considered technical execution. As if adding a qualifier to a word ever did anything more than narrow and reduce it. But no, "execution" includes product strategy, marketing, organization, interpersonal dynamics, and much more. There are so many problems in those areas that have both deep and broad impact on the success of the enterprise that the core idea, no matter how rare and genius, pales in comparison.

I used to take the axiom personally, as a repudiation. I can execute. I have the chops. But I'm not an idea man. So where are all these ideas just waiting for me to run off with them? For the axiom to be true I must either be dumb or incompetent. But that completely misses the point. Good ideas aren't necessarily plentiful or easy to produce. But they cost nothing. Whereas execution....

Execution is everything. Not least because execution is everything.

A New Objective

There is a new permanent page here on my website for my current career objective. This started out as a long blurb that used to be crammed into my resume page along with what is now my value proposition, but it deserves its own place. All the parts are much cleaner for it, and it allows the objective to breathe as much as it needs to. The other bits are mostly the same, but the objective has also changed, and I intend to keep it current as time goes on.

I was inspired by a number of things to make this change to a clear, strong, forward-looking statement of intent and aspiration. Recent events, the wisdom my friend @veryfancy links to on Twitter and blogs about, the evolution of my wife's career, a long following of @rands (Rands in Repose), and a few very good books I've read lately. Change can come at any time, and when it does I want to be ready to capitalize on it and ride the wave forward rather than be swept up in the current and dropped off wherever it peters out.

In the long term, technical and business strategy is the problem space I want to get to. And I know that to have the kind of impact I want to have at that level, I need facility in the problem space of people, and teams, and projects. So I want to make sure my next step adds a component in that dimension to my career vector... Hrm. As you can see, I'll always be an engineer at heart. Only, I have realized that I also want to be more than an engineer.

The Right Toolbox

A significant part of maturing as a professional is learning to recognize the right tool for a particular type of job. As a programmer this skill may manifest when choosing between alternative solutions at many different levels of abstraction and granularity.

Some examples include decisions between:

  • a loop versus an iterator block
  • a Windows service versus a scheduled task
  • a message queue versus a web service

The next step of professional maturity requires recognizing that all of the decisions like those above have something in common: they are all custom software solutions. That means another decision has already been made, namely the decision to use a custom software solution.

Crucial to the process of gaining more trust, responsibility, and impact in your job is developing the skill of recognizing whether the solutions being considered are of the right kind. This usually involves asking a lot of "whys." In the scene below, Alice is a development lead and Bob is the IT director she reports to.

Alice: Why are we splitting this service into two separate services?
Bob: Because it's memory intensive and has been bogging down the rest of the process.
Alice: Can we add RAM to the VM?
Bob: Not without taking it from other VMs that can't spare it.
Alice: What about the physical hardware?
Bob: We have room to expand.
Alice: Can we add RAM to the server, and give it to the VM?
Bob: We can do that once more with this hardware. Beyond that, we'll need to procure new hardware, and that won't be cheap.... A RAM upgrade will definitely be cheaper than the engineering effort to split the process up. This time. But if we run out of headroom again, the next upgrade will be much more expensive.
Alice: Ok. Then rather than split the service now, we'll do an effort estimate so we can do a real cost comparison next time around.

It can be tough to cultivate the discipline Alice exhibits in this scenario. Alice wants to be a problem solver, and is eager to take on more responsibility rather than waving it away. There are a lot of incentives to assume that important analysis has already been done and it's above your pay grade to worry about it. But for that exact reason, taking care in these kinds of situations can have a big impact and visibility to the broader business.

What this all boils down to is that to take the next step in professional maturity, you need to consider a broader context. Before you sit down to decide which of your tools is the right one for the job, make sure you're even digging in the right toolbox.

REPL

Sometimes while coding I have an idea of some small construction I want to try that I'm uncertain will work. I could fairly easily determine whether it will work by writing a test and running it, but it's just so small and simple that even that seems like overkill. Often, I just want to know if something will compile, and what the type system will do with it.

I just want to write a little bit of code, run it immediately, see the result, and move on, possibly forgetting about it. I need a REPL. Depending on your primary programming language, you may be familiar with this already. But if you are a C# developer, especially one without a long history in development, this might be new to you.

The term REPL is an acronym that is pronounced like the word "ripple," but with an "E" instead of an "I". The acronym stands for Read, Evaluate, and Print Loop. This is a small app that reads a statement of code, evaluates (or executes) it, and prints the result. It then repeats that process with the next statement or waits for the user to enter another one.

If you work with JavaScript, you might recognize the developer console from your favorite browser as a JavaScript REPL. With node.js, the application that executes programs is itself a REPL, if you run it without passing a starting-point JS file. The same goes for Ruby, Python, and a great many other modern languages.

With C# and VB.NET, Visual Studio offers a special window called the "Immediate Window" which operates as a REPL while debugging a program. It has access to things in the scope of the breakpoint, and you can also define variables and build up complicated expressions and statements on the fly. Unlike the other REPLs I've listed, Visual Studio's Immediate window has historically been constrained in what it will execute. For example, it won't evaluate anything that involves an anonymous delegate, or lambda. Unfortunately for me, it seems that nearly every time I want a REPL, I'm experimenting with lambdas in some way.

Fear not. A while back Glenn Block, while he was working at Microsoft, started an OSS project called ScriptCS to provide "the missing REPL" for C#. It, too, started off heavily constrained. But after many iterations and a great deal of work, today it's a darn useful tool in a couple of crucial ways. Not only is it a full-fledged REPL that has access to the entire C# language and all of the CLR, but it's also an honest-to-goodness scripting system. It will allow you to run C# files as scripts from the command line without pre-building them into assemblies. It has all the convenience of node, ruby, python, or whatever other scripting language you might be familiar with. They accomplished this by using Roslyn, the .NET "compiler as a service" from Microsoft, which will also underlie new versions of Visual Studio.
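
To give a flavor of the workflow, here's the sort of exchange I mean. (This is a hypothetical session; the exact prompt and output formatting will vary by tool and version.)

> Func<int, int> square = x => x * x;
> square(7)
49
> Enumerable.Range(1, 5).Select(square).Sum()
55

Type a statement, see the result immediately, iterate. That's the whole loop.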

If you're working with C# on a daily basis, I highly recommend you go grab and install ScriptCS. It is distributed via a package manager called Chocolatey (think NuGet for apps) right now, and will hopefully eventually be available via OneGet. I use it at least once per week in its REPL capacity, and I anticipate it coming in handy for build and deployment automation as a script runner. It's great tech. And if you use it and like it, make sure you thank Glenn and the other folks who worked hard to build it.


Balancing My Best

I've been a perfectionist for a long time. I've been a parent for far less time. In the overlap, what I have felt like is mostly a pile of inadequacy and failure. I know this isn't really the case. But I definitely feel like I'm not meeting my own expectations for my performance either as a parent or in my profession.

For most of my life, I would hedge against my failures by dumping extreme amounts of time and effort into my work. If I stayed up all night embellishing my project, hopefully the flaws would be overshadowed by how hard I had worked. Hopefully any criticism of what I got wrong would be balanced against praise for how far above and beyond I had gone. I did my best, which is surely the most anyone could ask. Hopefully the equation would come out with me feeling more satisfaction than failure.

As a normal human adult who has more obligations than time, this drive--this instinct--has not exactly served me well. It skews priorities, both within the effort, and with other obligations. And it's not sustainable. Whether it's my health, or my relationships, or my job performance, something gives, usually sooner than I expect. So that's just another way I'm inadequate. Which drives my perfectionism. The cycle continues.

I have very young daughters, and am starting to see the first inklings of how they handle failure. They are still at the stage where they can become frustrated to the point of tantrum when they aren't able to succeed at something. But this is strictly because at their age they also haven't learned how to give up. Their best is literally to try, and try, and try, and try, until I step in and drag them away screaming. At which point, I can certainly say, "it's okay, you did your best." But will it encourage them to continue to do their best, if it equates to ending heaving and sobbing with the knowledge that they were insufficient to their ambitions?

I find myself considering how I will approach with them the topics of failure and success, effort and perseverance, pride and satisfaction. Surely whatever I tell them, I'd better believe it. And even better, I should model it, because example is strong where advice is weak. So what do I really believe about these things?

There is no learning without failure. I must embrace failure if I want to improve at anything. Failing feels really bad, though, so I must learn how to salvage satisfaction from failure. Not accomplishing what I set out to do doesn't mean I can't be proud of what I did. "Doing my best" does not require destroying myself and my other obligations in sacrifice. Whatever I'm setting myself to do is not all I have to do. I have to be a parent. I have to be a husband. I have to be a son. I have to be a friend, a neighbor, an employee, a leader, a follower.

Doing my best means that I spent my time wisely, and appropriately to the need. I focused on my task and my goal, for the time that I worked at it. Where I lacked certainty, I tried new things. I was reflective of the outcomes. I learned. I used what I had and gave what I could, without stealing from my other obligations.

And maybe most importantly, doing my best doesn't mean that no one can fault me. It just means that I laid a foundation to do better next time.

Slack

Very nearly every company I've worked for seems to have a constitutional aversion to "slack" in the work pipeline for development. Sometimes this manifests as a pile of inward-facing operational development that passes hands every few weeks and never really gets done, because the current developer is urgently re-allocated to a paying client. In consulting, sometimes it just means a stretch without pay when there's nothing to bill on. Sometimes it results in developers being assigned increasingly low-value, low-clarity, or low-interest busywork.

All these things are an extremely poor usage of available developer resources.

These situations seem to originate in a view of engineering manpower as either a cost center (e.g. in an IT department) or a kind of inventory (e.g. in a contracting/consulting firm). For accounting purposes, fine. But that doesn't mean you have to actually treat them that way. I'm not sure if my view is any more valid, but I tend to think of engineering slack as a surplus. We have the developers' time. It's probably paid for. And even if it's not, it's probably bad for morale not to pay for it. Sure you can burn off the excess and get not much more than waste heat out of it. Or, you can invest it, and make back the cost in dividends.

Nearly any company has systems that can benefit from a little more automation, a little more customization, a little more integration. This is what the vast majority of developers in business are doing. Most of them are just doing it because the work is going to actively mitigate an operational cost or support a revenue stream. Sometimes the direct value added isn't worth the cost of the developer's time. But good engineering often pays dividends in indirect value, via force multipliers or ongoing and compounding efficiency.

Not all work is created equal, but if you look carefully, there's probably some benefit to be had from a little extra development time. And the next bit can compound on that. And the next. And the next. Before you know it, your operations could be humming like finely tuned machinery. Or you have an experimental beta feature that could be your next surprise hit. But you'll never know if you keep burning off your excess instead of investing your surplus.

A Case Study in Toy Projects

I have historically had a lot of trouble finding motivation to work on toy projects solely for the benefit of learning things. I tend to have a much easier time if I can find something to build that will have value after I finish it. I view this as a deficiency in myself, though. So I continue to try to work on toy projects as time allows, because I think it is a good way to learn.

One of my more successful instances of self-teaching happened a few years ago. I was trying to learn about this new thing all the hip JavaScript kids were talking about called "promises". It seemed to have some conceptual clarity around it, but there was disagreement on what an API would look like. Multiple standards had been proposed, and one was slightly more popular than the others. So I thought it might be fun to learn about promises by implementing the standard.

And it was! Thankfully the Promises/A+ spec came with a test suite. It required node.js to run it, so I got to learn a little bit about that as well. I spent a few evenings total on the effort, and it was totally worth it. I came away with as deep an understanding of promises (and by extension most other types of "futures") as it is probably possible to have. This prepared me better to make use of promises in real code than any other method of trial-and-error on-the-fly learning could have. 

Here's the end result on GitHub: after.js. The code is just 173 measly lines--far shorter than I expected it to be. It also hosts far more lines of documentation about promises in general and my API specifically. It has a convenient NPM command for running my API through the spec tests. And most satisfying of all it can now serve as a reference implementation for whoever might care to see one. I think it's a great example of the benefits of a toy project done well.

Interviewee Self-Assessment

Evaluating the technical chops of job candidates is difficult. Especially in a "screener" situation where the whole point is to decide if a deeper interview is a waste of time. I haven't done a lot of it, so I'm still developing my technique. Here are a few things I like to do, so far.

As long as the candidate is in the ballpark of someone we could use, I don't like to cut the interview short. There's always the chance that nerves or social awkwardness are causing an awful lot of what might appear to be ignorance or confusion.

I like to ask the candidate what technologies (languages, platforms, frameworks) they most enjoy and most dislike, and why. This gives me a peek into how they think about their work and what their expectations are of their tools and platforms. I want to see at least one strong, reasoned opinion in either direction. Not having one is an indication that they either lack experience, or are not in the habit of thinking deeply about their work.

Here's the big one: In order to figure out what questions to ask and how to word and weight them, I also like to ask the candidate to evaluate their skills in a few of the technologies that are relevant to the job they are applying for. Even if they have identified their relative skill levels on their resume, I ask them to put themselves on a 5-point scale: 0 is no experience, 1 is beginner, 2 is still beginner, 3 is comfortable, 4 is formidable, and 5 is expert.

At a self-rating of 1, I mostly just want to find out what they've built and what tools they used. Anyone who rates themselves at 2 or 3 is a candidate for expert beginner syndrome. They'll probably grow out of it as they get more experience. I ask questions all over the spectrum to establish what they know and what they don't.

A self-rating of 4 is probably the easiest to interview. A legitimate 4 should have the self-awareness and perspective to see that they know a lot, but also a good conception of where their gaps are. 2s and 3s are more likely to self-label as 4, but they are easy to weed out with a couple of challenging questions. Beyond that, I mostly care about how they answer questions, because this candidate's value is as much in their ability to communicate about tough problems and solutions as it is in coding and design.

A self-rating of 5 is essentially a challenge. I'm not interested in playing a stumping game. But I do care whether the confidence is earned. Someone who is too willing to rate themselves an expert is dangerous both on their own and on a team. A 5 doesn't need to know everything I can think to ask. But I expect an honest "I don't know" or at least an attempt to verbally walk it through. And instead of confusion and misunderstanding, I expect clarifying questions. Communication and self-awareness are crucial here. Confident wrong answers or unqualified speculations are bad news for a self-proclaimed expert.

What Type of Type Is That?

The .NET runtime has two broad categories that types fall into. There are value types and there are reference types. There are a lot of minor differences and implementation details that distinguish these two categories. Only a couple of differences are relevant to the daily experience of most developers.

Reference Types

A reference type is a type whose instances are copied by reference. This means that when you have an instance of one in a variable, and then you assign that to another variable, both variables point to the same object. Apply changes via the first variable, and you'll see the effects in the second.

public class Point {
    public double X { get; set; }
    public double Y { get; set; }
}

// Elsewhere...
Point p1 = new Point { X = 5.5, Y = 4.5 };
Point p2 = p1;
p1.X = 6.5;
Console.WriteLine(p2.X); // Prints "6.5"

This reference copy happens any time you bind the value to a new variable, whether that's a private field on an object, a local variable, a function parameter, or a static field on a class. The runtime keeps track of these references as they float around, and doesn't allow the memory holding the actual object to be freed until it is sure that none of the references are reachable from any active object.

Value Types

A value type is a type whose instances are copied by value. This means that when you have an instance of one in a variable, and then you assign that to another variable, the second variable gets a new object, with a copy of each property's value, which you can change independently of the original.

public struct Point {
    public double X { get; set; }
    public double Y { get; set; }
}

// Elsewhere...
Point p1 = new Point { X = 5.5, Y = 4.5 };
Point p2 = p1;
p1.X = 6.5;
Console.WriteLine(p2.X); // Prints "5.5"
Console.WriteLine(p1.X); // Prints "6.5"

Value types can get tricky. The thing to remember is that this policy goes only one level deep. Each property of a value type has its own copy semantics, and those semantics determine how it gets copied into the new containing object.

public class Point {
    public double X { get; set; }
    public double Y { get; set; }
}

public struct Line {
    public Point Start { get; set; }
    public Point End { get; set; }
}

// Elsewhere...
Point p1 = new Point { X = 0, Y = 0 };
Point p2 = new Point { X = 3, Y = 3 };
Line l1 = new Line { Start = p1, End = p2 };
Line l2 = l1;
p1.X = 1;

Console.WriteLine(p1.X); // Prints "1"
Console.WriteLine(l1.Start.X); // Prints "1"
Console.WriteLine(l2.Start.X); // Prints "1"

Here we see that the changes we make to the reference type instances are retained across the value types, because it's only the bit of information that points at the reference type that is duplicated, not the object that's pointed to.

Memory

The last thing we should talk about is memory. Unfortunately, this bit is complicated despite most often being inconsequential. But it's a question that a prickly interviewer might decide to quiz you on if you make the mistake of claiming to be an expert.

You might guess, based on this difference in copying behaviors, that passing around complex value types would be computationally expensive. It is. And potentially memory-consuming as well. Every new variable is a new copy of the whole object, including new copies of its value-typed properties. Instances also tend to be short-lived, though, so you have to work to actually keep the memory filled with value types. Unless you box them.

"Boxing" is what happens when you assign a value type to an object variable. The value type gets wrapped into an object, which does not get copied when you assign it to other variables. This means that you can end up with very long lived value types, with lots of references to them, if you keep them in an object variable. Fortunately, you're not allowed to modify these values without assigning them back to a value typed variable first.

public struct Id {
    public int Value { get; set; }
}

Id id1 = new Id { Value = 5 };
object id2 = id1; // boxing: the value of id1 is copied into a new object on the heap
object id3 = id2; // copies the reference to that same box

Console.WriteLine(ReferenceEquals(id1, id2)); // Prints "False": this call boxes id1 again, into a brand-new object
Console.WriteLine(ReferenceEquals(id2, id3)); // Prints "True": both variables point at the same box
((Id)id2).Value = 6; // Compiler error: cannot modify the result of an unboxing conversion

Folks will often talk about stack and heap when asked about the differences between value types and reference types, because stack allocation is way faster. But value types are only guaranteed to be stored on the stack when they are unboxed local variables that aren't used in certain ways. In other cases the CLR spec often doesn't dictate the decision, so depending on the platform it might or might not happen in a given situation. In short, it's not worth thinking about unless you are bumping into out-of-memory errors. And even then, there are almost certainly more permanent wins to be had than by worrying about the whereabouts of your local variables.

Trust Those Who Come After

I have at times in my past been called a very "conservative" developer. I think that title fits in some ways. I don't like to do something unless I have some idea what the impact will be. I don't like to commit to a design until I have some idea whether it is "good" or not, or how it feels to consume the API or work within the framework.

And I used to believe very strongly in designing things such that they were hard to misuse. This was so important to me that I would even compromise ease of proper use if it meant that it would create a barrier of effort in the way of using something in a way that I considered "inappropriate".

I once built a fluent API for defining object graph projections in .NET. While designing the API, I spent a lot of time making sure there was only one way to use it, and you would know very quickly if you were doing something that I didn't plan for. Ideally, it wouldn't compile, but I would settle for it blowing up. I also took great care to ensure that you always had a graceful retrograde option when the framework couldn't do exactly what you needed. But that didn't matter.

Once the framework got into other people's hands I realized fairly quickly that all this care had been a tremendous waste of time. The framework was supposed to be a force multiplier for the build teams at my company, but what happened was very different. Because the API had to be used in a very particular way, developers were confused when they couldn't find the right sequence of commands. When what I considered to be the perfect structure didn't occur to them, they assumed their situation wasn't supported.

I gave my fellow developers a finicky tool, and the practice leads told them it was fast and easy and that they needed to use it. So when it wasn't clear how to proceed, they just stopped and raised their hand, rather than doing what a developer is paid to do: solve problems. By trying to protect the other developers from themselves, I had actually taught them to be helpless. And the ones that didn't go that route just totally side-stepped or subverted the tools.

All this came about because I didn't trust the people who would use or maintain my software after I was gone. I thought I needed to make sure that it was hard or impossible to do what I considered to be unwise things. In reality all I did was remove degrees of freedom and discourage learning and problem solving.

We are developers. Our reason for being is to solve problems. Our mode of professional advancement is to solve harder, broader, more impactful problems. If I can't trust other developers at least to learn from painful design decisions, then why are they even in this business, and what business do I have trying to lead them?