DaedTech

Stories about Software


YAGNI: YAGNI

A while back, I wrote a post in which I talked about offering too much code and too many options to other potential users of your code. A discussion emerged in the comments roughly surrounding the merits of the design heuristic affectionately known as “YAGNI,” or “you ain’t gonna need it.”

In a vacuum, this aphorism seems to be the kind of advice you give a chronic hoarder: “come on, just throw out 12 of those combs — it’s not like you’re going to need them.” So what does it mean, exactly, in the context of programming, and is it good advice in general? After all, if you applied the advice that you’d give a hoarder to someone who wasn’t, you might be telling them to throw out their only comb, which wouldn’t make a lot of sense.

The Motivation for YAGNI

As best I understand it, the YAGNI principle is one of the tenets of Extreme Programming (XP), an agile development approach that emerged in the 1990s and stood in stark contrast to the so-called “waterfall” or big-design-up-front approach to software projects. One of the core principles of the (generally unsuccessful) waterfall approach is a quixotic attempt to figure out just about every detail of the code to be written before writing it, and then, afterward, to actually write the code.

I personally believe this silliness is the result of misguided attempts to mimic the behavior of an Industrial Revolution-style assembly line in which the engineers (generally software architects) do all the thinking and planning so that the mindless drones putting the pieces together (developers) don’t have to hurt their brains thinking. Obviously, in an industry of knowledge workers, this is silly … but I digress.

YAGNI as a concept seems well suited to address the problematic tendency of waterfall development to generate massive amounts of useless code and other artifacts. Instead, with YAGNI (and other agile principles) you deliver features early, using the simplest possible implementation, and then you add more and refactor as you go.

YAGNI on a Smaller Scale

But YAGNI has a smaller-scale design component as well, which is evident in my earlier post. Some developers have a tendency to code up classes and add extra methods that they think might be useful to themselves or others at some later point in time. This is often referred to as “gold plating,” and I think the behavior is based largely on the fact that this is often a good idea in day-to-day life.

“As long as I’m changing the lightbulb in this ceiling lamp, I may as well dust and do a little cleaning while I’m up here.”

Or perhaps:

“As long as I’m cooking tonight, I might as well make extra so that I have leftovers and can save time tomorrow.”

But the devil is in the details with speculative value propositions. In the first example, clear value is being provided with the extra task (cleaning). In the second, the value is speculative, but the odds of realization are high and the marginal cost is low. If you’re going to the effort of making tuna casserole, scaling up the ingredients is negligible in terms of effort and provides a very likely time savings tomorrow.

But doesn’t that apply to code? I mean, adding that GetFoo() method will take only a second and it might be useful later.

Well, consider that the planning lifespan for code is different than it is for casserole. With the casserole, you have a definite timeframe in mind — some time in the next few nights — whereas with code, you do not. The timeframe is “down the line.” In the meantime, that code sits in the fridge of your application, aging poorly as people forget its intended purpose.

You wind up with a code-fridge full of containers of goop that doesn’t exactly smell bad but isn’t labeled with a date, and whose origin you can’t quite place. And I don’t know about you, but if I were to encounter a situation like that, my reaction would be “I don’t feel like risking it — I’m ordering Chinese.” And “nope, I’m out of here” isn’t a good feeling to have when you open your application’s code.
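
To make the smaller-scale version concrete, here’s a minimal sketch of the kind of gold plating in question. The OrderRepository class and everything in it are invented for illustration — only the GetFoo() mentioned above comes from the post:

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
    public DateTime PlacedOn { get; set; }
}

// A hypothetical order-lookup class. The only thing anyone has actually
// asked for is finding an order by its id.
public class OrderRepository
{
    private readonly Dictionary<int, Order> _orders = new Dictionary<int, Order>();

    // The requirement that exists today.
    public Order GetById(int id)
    {
        return _orders[id];
    }

    // Speculative "might be useful someday" additions -- the goop in the
    // code fridge. YAGNI says to delete these until someone actually asks.
    public Order GetFoo()
    {
        throw new NotImplementedException();
    }

    public IEnumerable<Order> GetPlacedBetween(DateTime start, DateTime end)
    {
        throw new NotImplementedException();
    }

    public void ExportToXml(string path)
    {
        throw new NotImplementedException();
    }
}
```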

Does YAGNI Always Apply?

So there is an overriding design principle called YAGNI, and there is a more localized version — reactions to a tendency to comically over-plan and to a tendency to gold-plate locally, respectively. But does this advice always hold up, reactions notwithstanding?

I’m not so sure. I mean, let’s say that you’re starting on a new .NET web application and you need to test that it displays “hello world.” The simplest thing that could work is F5 and inspection. But now let’s say that you have to give it to a tester. The simplest thing is to take the compiled output and manually copy it to a server. That’s certainly simpler than setting up some kind of automated publish or continuous deployment scenario. And now you’re in sort of a loop, because at what point is this XCopy deploy ever not going to be the simplest thing that could work for deploying? What’s the tipping point?

Getting Away from Sloganeering

Now I’m sure that someone is primed to comment that it’s just a matter of how you define requirements and how stringent you are about quality. “Well, it’s the simplest thing that could possibly work but keeping the overall goals in mind of the project and assuming a baseline of quality and” — gotta cut you off there, because that’s not YAGNI. That’s WITSTTCPWBKTOGIMOTPAAABOQA, and that’s a mouthful.

It’s a mouthful because there’s suddenly nuance.

The software development world is full of metaphorical hoarders, to go back to the theme of the first paragraph. They’re serial planners and pleasers who want a fridge full of every imaginable code casserole they or their guests might ever need. YAGNI is a great mantra for these people and for snapping them out of it. When you sit down to pair or work with someone who asks you for advice on some method, it’s a good reality check to say, “dude, YAGNI, so stop writing it.”

But once that person gets YAGNI — really groks it by adopting a practice like TDD or by developing a knack for getting things working and refactoring to refinement — there are diminishing returns to this piece of advice. While it might still occasionally serve as a reminder or wake-up call, it starts to become a possibly counterproductive oversimplification. I’d say that this is a great, pithy phrase to tack up on the wall of your shop as a reminder, especially if there’s been an over-planning and gold-plating trend historically. But if you do that, beware local maxima.


Designs Don’t Emerge

I read a blog post recently from Gene Hughson that made me feel a little like ranting. It wasn’t anything he said — I really like his post. It reminded me of some discussion that had followed in my post about trying too hard to please with your code. Gene had taken a nuanced stand against the canned wisdom of “YAGNI.” I vowed in the comments to make a post about YAGNI as an aphorism, and that’s still in the works, but here is something tangentially related. Now, as then, I agree with Gene that you ignore situational nuance at your peril.

But let’s talk some seriously divisive politics and philosophy first. I’m talking about the idea of creationism/intelligent design versus evolutionary theory and natural selection. The former conceives of life in our world as the deliberate work of an intelligent being. The latter conceives of it as an ongoing process of change governed by chance and coincidence. In the context of this debate, there is either some intelligent force guiding things or there isn’t, and the debate is often framed as one of omnipotent, centralized planning versus incremental, steady improvement via dumb process and chance. The reason I bring this up isn’t to weigh in on it or to turn the blog into a political soapbox. Rather, I want to point out a dichotomy that’s ingrained in our collective conversation in the USA and perhaps beyond (though I think that the creationist side of the debate is almost exclusively an American one). There is either some kind of central master planner, or there are simply the vagaries of chance.

I think this idea works its way into a lot of discussions that talk about “emergent design” and “big up front design,” which in the same way puts forth a pretty serious false dichotomy. This is most likely due, in no small part, to the key words “design,” “emergent” and especially “evolution” — words that frame the coding discussion. It turns into a blueprint for silly strawman arguments: “Big design” proponents scoff and say things like, “oh yeah, your architecture will just figure itself out magically” while misguided practitioners of agile methodologies (perhaps “no design” proponents) accuse their opponents of living in a coding universe lacking free will — one in which every decision, however small, must be pre-made.

But I think the word “emergent,” rather than “evolution” or “design,” is the most insidious in terms of skewing the discussion. It’s insidious because detractors are going to think that agile shops view design as something that just kind of winks into existence like some kind of friendly guardian angel, and that’s the wrong idea about agile development. But it’s also insidious because of how its practitioners view it: “Don’t worry, a good design will emerge from this work-in-progress at some point because we’re SOLID and DRY and we practice YAGNI.”

Now, I’m not going for a “both extremes are wrong and the middle is the way to go” kind of argument (absent any other reasoning, that’s the middle ground fallacy). The word “emergent” itself is all wrong. Good design doesn’t “emerge” like a welcome ray of sunshine on a cloudy day. It comes coughing, sputtering, screaming and grunting from the mud, like a drowning man being pulled from quicksand, and the effort of dragging it laboriously out leaves you exhausted.


The big-design-up-front (BDUF) types are wrong because of the essential fallacy that all contingencies can be accounted for. It works out alright for God in the evolution-creation debate context because of the whole omniscient thing. But, unfortunately, it turns out that omniscience and divinity are not core competencies for most software architects. The no-design-up-front (NDUF) people get it wrong because they fail to understand how messy and laborious an activity design really is. In a way, they both get it wrong for the same basic reason. To continue with the Judeo-Christian theme of this post, both of these types fail to understand that software projects are born with original sin.

They don’t start out beautifully and fall from grace, as the BDUF folks would have you believe, and they don’t start out beautifully and just continue that way, emerging as needed, as the NDUF folks would have you believe. They start out badly (after all, “non-functional” and “non-existent” aren’t words that describe great software) and have to be wrangled to acceptability through careful, intelligent and practiced maintenance. Good design is hard. But continuously knowing the next, feasible, incremental step toward a better design at absolutely any point in a piece of software’s life — that’s really hard. That takes deliberate practice, debate, foresight, adaptability, diligence, and a lot of reading and research. It doesn’t just kinda “emerge.”

If you’re waiting on me to come to a conclusion where I give you a score from one through ten on the NDUF-to-BDUF scale (and it’s obviously five, right?), you’re going to be disappointed with this post. How much design should you do up front? Dude, I have no idea. Are you building a lunar rover? Probably a lot, then, because the Sea of Tranquility is a pretty unresponsive product owner. Are you cobbling together a minimum viable product whose hardware and business requirements may pivot at any moment? Well, probably not much. I can’t settle your design decisions and timing for you with acronyms or aphorisms. But what I can tell you is that to be a successful architect, you need a powerful grasp on how to take any design and make it slightly better this week, slightly better than that the next week, and so on, ad infinitum. You have to do all of that while not catastrophically breaking things, keeping developers productive, and keeping stakeholders happy. And you don’t do that “up-front” or “ex post facto” — you do it always.


Understanding Degrees of Code Flexibility

In some projects I’ve been managing of late, I’ve noticed the same question cropping up continually: how flexible should we make the different parts of the system? I’m currently working with a bright crew of people, so they’re picking up on this quickly, but I thought I’d do a bit of a write-up to help the process along. And, as long as I’m doing write-ups like this, I might as well post them.

In discussions of software, there is a large issue that gets lost in the shuffle. You frequently hear people argue the merits of different styles of or approaches to programming. Unit testing or not? TDD? IoC or inline new? What’s the appropriate size of a method? ORM or inline SQL? SQL at all or NoSQL? You get the idea. But one thing that I find often gets glossed over is the idea of assessing prospective system changes by how easy they are to make. In other words, if a user comes a-hollering and says, “I want, nay, demand the ability to do X,” how hard is it to make that happen and to verify the results?

And by “hard,” I don’t mean, “do you write code for a day or for three weeks?” I mean, “what do the changes look like in terms of risk and deliverables?” In other words, can you make that happen by changing a configuration file or does it require code changes? Will you need to re-deploy or can you somehow patch on the fly through a plugin architecture? And is it testable? Can you verify 99% of the changes by swapping out a configuration setting, or do you have radically different production and test setups?

So I’m going to define some concepts to flesh out an idea. This isn’t exactly a formalized theory or anything. It’s rather just a working lexicon of how I think about my application. This is a scale of system flexibility for a given future change. Or, put another way, here is a way of assessing how much effort doing X for the aforementioned user will demand of the entire development/operations group, from least to most significant.

  1. Users can do it themselves.
  2. An IT-level change is required (e.g., changing a config file, swapping out images, etc.).
  3. An architect/dev-level configuration change is required (e.g., XML for an IoC container).
  4. A non-compiled source change is required (e.g., you update the markup for a site but not the underlying code).
  5. An Open/Closed Principle-compliant source code change is required (basically, adding new code).
  6. A localized tweak to existing code is required.
  7. A substantial change to existing code, spanning various modules, is required.

Now, when considering this list, it makes sense to assess your change-set in terms of the furthest-down thing it requires. So maybe you need to change a logo on your website, which is easy, but you have an unwieldy switch statement somewhere that swaps it out in certain circumstances, and that statement now needs to change. This is probably going to be a 6 rather than a 2. A given change is going to be as rigid as the most rigid link in the chain, so to speak.
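
As a minimal sketch of that scenario (the LogoSelector class and its details are invented for illustration), here’s the sort of thing that drags a simple image swap down to level 6:

```csharp
// Hypothetical: what ought to be a level-2 change (an IT person swaps an image
// file on the server) becomes a level-6 change, because the logic for choosing
// the logo lives in compiled code.
public class LogoSelector
{
    public string GetLogoPath(string season, bool isHolidaySale)
    {
        // Changing which logo appears when means editing this method,
        // recompiling, and redeploying.
        switch (season)
        {
            case "winter":
                return isHolidaySale ? "img/logo_holiday.png" : "img/logo_winter.png";
            case "summer":
                return "img/logo_summer.png";
            default:
                return "img/logo_default.png";
        }
    }
}
```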

Here are the kinds of changes described in more detail.

Users do it themselves.

There are some sorts of changes to the system that need not involve anyone from your team/staff/company. These are things that users do, through the application. An obvious example is a banking or commerce website in which users can change their passwords. “Password” has nothing to do with the business logic of commerce, so this is functionally a piece of meta-administrivia that you’re entrusting to users.

A dev-ops or operations person changes meta-data in production.

This is something that you (hopefully) don’t trust a user to do but that doesn’t require any actual knowledge of the code base. A good example of this might be a desktop application that has an XML configuration file (or an INI file, if you’re willing to show your age with me). This file might be modified to have the application point to a different database or log to a different file or something. This is not something the average user could or should do, but it’s a relatively lightweight change in that it requires only minimal training and no re-deployment of any kind.
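
Here’s a minimal sketch of this level, assuming a classic .NET app with an app.config (the LogFilePath key and the paths are invented examples). The level-2 change is editing the deployed config file, not the code that reads it:

```csharp
using System.Configuration; // requires a project reference to System.Configuration

public static class LogSettings
{
    // The deployed app.config might contain something like:
    //   <appSettings>
    //     <add key="LogFilePath" value="D:\logs\myapp.log" />
    //   </appSettings>
    // An operations person can retarget the log by editing that value in
    // production and restarting the app -- no code change, no redeployment.
    public static string LogFilePath
    {
        get { return ConfigurationManager.AppSettings["LogFilePath"] ?? @"C:\logs\default.log"; }
    }
}
```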

An architect or developer changes meta-data in production.

The next step down the list — and up in operational rigidity — is a meta-data change that cannot be performed without an understanding of the code base. The best example here is the configuration of an IoC container that has been extracted to XML. Figuring out which service is used by which ViewModel is not something that anyone without sophisticated knowledge of your source code can do, but, on the plus side, it’s still just a change to a setting in production.
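
Here’s a stripped-down sketch of the underlying idea — not any particular container’s actual syntax, and the key name and types are invented — showing how a concrete type named in config can be swapped in production without recompiling:

```csharp
using System;
using System.Configuration;

public interface INotificationService
{
    void Notify(string message);
}

public static class ServiceFactory
{
    // A config entry names the concrete type, e.g.:
    //   <add key="NotificationService"
    //        value="MyApp.Services.EmailNotificationService, MyApp" />
    // Someone who understands the code base can swap implementations by
    // editing that value in production -- a level-3 change.
    public static INotificationService CreateNotificationService()
    {
        string typeName = ConfigurationManager.AppSettings["NotificationService"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (INotificationService)Activator.CreateInstance(type);
    }
}
```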

Someone makes a source code change that does not involve re-compilation.

This is what happens when a developer logs on to the web server and starts editing HTML or CSS or even a server-side script like PHP directly in the deployed files. This is really not a good idea for a variety of reasons, but it is possible and may be something you have to do in a pinch, so it’s worth noting.

Someone makes a code change that more or less just involves adding code, with very little modification of what already exists.

Now we’re down deep enough into rigid territory that a new deployment/install is required in order to push the changes. From here on, this cannot be done in production, so if you’re doing this you’re going to incur all of the overhead of building/running automated tests (hopefully), quality assurance, creating a deployment and deploying (your process may vary). But on the plus side, this is pretty low risk as far as code changes go. Adding things is generally both easy to verify for correctness of functionality and unlikely to mess up existing code.
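
Here’s a minimal sketch of what such an additive, Open/Closed-friendly change might look like (the shipping calculator types are invented for illustration): the new behavior shows up as a brand new class against an existing interface.

```csharp
public interface IShippingCalculator
{
    decimal CalculateCost(decimal weightInPounds);
}

// Code that already ships and that nobody touches for this change.
public class GroundShippingCalculator : IShippingCalculator
{
    public decimal CalculateCost(decimal weightInPounds)
    {
        return 5.00m + (0.50m * weightInPounds);
    }
}

// The level-5 change: the new option arrives as new code implementing the
// existing abstraction. Existing classes go unmodified, so the regression
// risk is comparatively low, even though a redeploy is still required.
public class OvernightShippingCalculator : IShippingCalculator
{
    public decimal CalculateCost(decimal weightInPounds)
    {
        return 20.00m + (1.25m * weightInPounds);
    }
}
```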

Someone makes a code change that involves lightweight changes to existing code.

The most common scenario here is probably a bug fix, though it may be new functionality too, depending on how flexible your architecture is. This is a higher risk proposition than adding new source code because you’re now creating a risk of regressions. You still have all of the same considerations about build and deployment, but the risk of problems is higher. Your testing and verification overhead should also be higher. This is a heavier change.

Significant work on the code base is required.

This is what happens when a code base that models a company and implements Office as a singleton suddenly needs to accommodate the new office you opened up in Texas. You designed your code under the assumption that there could only ever be one office location, and you were right about that — right up until you weren’t. Oops. Now things get ugly because management comes to you and says, “we’re opening a new office, so the application is going to need to handle that” and your response is, “that’s completely out of the question. Why, even the thought is preposterous!” You tell them that substantial rework is going to be required.
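
Here’s a minimal sketch of that trap (the Office and PayrollReport classes and the city are invented for illustration) — the single-office assumption hard-coded as a singleton that callers everywhere depend on:

```csharp
// The assumption "there is exactly one office" baked right into the design.
public class Office
{
    private static readonly Office _instance = new Office("Chicago");

    public static Office Instance
    {
        get { return _instance; }
    }

    public string City { get; private set; }

    private Office(string city)
    {
        City = city;
    }
}

// Callers all over the code base reach for the singleton directly...
public class PayrollReport
{
    public string BuildHeader()
    {
        // ...so "we're opening an office in Texas" means hunting down and
        // reworking every one of these call sites: a level-7 change.
        return "Payroll for the " + Office.Instance.City + " office";
    }
}
```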

Making Sense of your Options

So why did I list all of these out? Well, I did it because I feel it’s important to know what your options are when you’re designing, and to anticipate changes rather than react to them knee-jerk style. Before you start putting together a code base, sit down, think about what users might want, and then go through the exercise of figuring out which number on the scale each change would be. If you find likely changes that would be 6s or 7s (and there is certainly a sliding scale at the 6-7 level), that’s a problem that you should start addressing now. If you find extremely unlikely changes that are 1s and 2s, that’s not necessarily a problem. But it is a point of design flexibility that you probably could have done without, and it may be that you have pointless abstractions and complexity (though I’d be a lot more hesitant to introduce rigidity because you think flexibility is unneeded than vice-versa).

Another interesting exercise is to consider categories of these things. For instance, 5-7 are all things that require compiled code changes and 1-4 are all things that do not. This is an interesting way to split up your functionality, and it’s obviously the backbone of this post, but you can divvy these up in other ways as well. For instance, if you’re writing software that for some reason has no field or ops support, then 1, 5, 6 and 7 are your options, and 2-4 are basically non-starters. Or, if you’re considering things in which source control is an issue, then 4-7 are in a category and 1-3 are in a different category (most likely, as I’d think that you’d favor generating meta-data files as part of your deployment rather than source controlling different configurations).

None of this is even remotely comprehensive, but my goal here is really just to encourage people to understand at design time the difficulty of changing something at production time. It seems quite often to be the case that people don’t really think about this, simply because no one has ever pointed it out to them. Your mileage may vary on the number of categories in the list and your preference for certain options, but at the core of this is a basic and incredibly important idea: you should always play “what if” when it comes to changes that users might request and understand how much of a headache it will be for you if the “what if” comes true. Oh, and also try to minimize the number of headaches. But hopefully that goes without saying.


Throw Out Your Code

Weird as it is, here’s human nature at work. Let’s say that I have a cheeseburger and you’re hungry. I tell you that I’ll sell you the cheeseburger for $10. You say, “pff, no way — too expensive.” Oh well, I eat the cheeseburger and call it a day. But I’ve learned my lesson. The next day at lunch, to execute my master cheeseburger selling plan, I slide the cheeseburger over in front of you and tell you that you can have it: “you can have this cheeseburger…” Just as you’re about to take a bite, however, I cruelly say “…for ten dollars!” You grumble, get out your wallet and hand me a ten dollar bill.

This is called “The Endowment Effect,” and it’s a human cognitive bias that causes us to value what we have disproportionately. I blogged about it here previously in the context of why we think that our code is so good we should SPAM it all over the place with control-V. But even if you don’t do that (and, really, please don’t do that), you still probably get overly attached to your code. I do. After all, we, as humans, have a hard time defying our own natural instincts.

I’m certainly no anthropologist, but I suspect that our ancestry as nervous, opportunistic scavengers on the African Serengeti has everything to do with this. Going and snatching a morsel that a hungry lion is eyeing is a pretty bad idea. But if you already have the morsel, what the hey, you might as well take it with you as you run away. But, however we’re wired, we’re capable of learning and conditioning our own responses. After all, we don’t go bolting away from the deli counter after the guy there hands us our two pounds of salmon. We’ve learned that this is a consequence-free transaction.

It’s time to teach yourself that lesson as it relates to your code. It’s not so much that deleting functional code is consequence free (it isn’t). But deleting it isn’t nearly as big of a deal as you probably think it is. When it comes to code that you’ve spent two weeks writing, I’m pretty willing to bet that if you trashed it all and started from scratch (no peeking at source control history), you could rewrite it all in about two days. If that sounds crazy, ask yourself whether the majority of the time you spend programming is spent furiously typing as if you were taking a words-per-minute test or if most of it is spent drawing things on scratch-paper, squinting at your screen, pushing code around unit tests, muttering to yourself, and tapping a pen on your desk. I’m betting it’s the latter, and, when you rewrite, it’s activities from the latter that you don’t do nearly as much. You’ve already blazed a trail for yourself and now you’re just breezing through for a second trip.

Write some code and throw it out. Do a code kata with the stipulation that the code is deleted, never to be recovered. Then try it again the next day and the day after that. Or create a copy of your production code at work, engage in some massive, high-risk, high-wire-act refactoring, and then just delete it. With either of these things, I promise you that you’ll learn a lot about efficient coding and your code base, respectively. But you’ll also learn a subtle lesson: the value you’re creating as you code can be found more in the knowledge and experience you’re acquiring as you do it than the bits sitting in source control.

Practice throwing out your code so that you stop neurotically overvaluing it. Practice throwing out your code because it’ll probably happen by accident at some point anyway. Practice throwing out your code because your first crack at things usually kind of sucks. And practice throwing out your code because end users and the world are cruel, and not everything that you write is going to make it gift-wrapped into production. The more you learn to let go, the happier and more productive you’re going to be as a programmer.


Easy Deployment: the Alpha and the Omega

A bit of housekeeping…you may have noticed that the social media buttons look a bit different if you’re not accessing through RSS. The old plugin that I was using seems not to be supported anymore, and the Facebook button vanished for a bit. I tried out a replacement and liked it, so I kept it. My thanks to Active Bits for the Social Sharing Toolkit.

Wrong But Fast

There’s a pretty good chance that your deployment process is both too painful and not painful enough. But before I return to that cryptic statement, let me talk a bit about something I’ve observed in developers — especially ones that are newer to the industry. Here’s an example of a series of exchanges that has become pretty familiar to me:

User: It would be nice if the profile screen had a way I could change my password.

Young Buck: That’ll take like, literally, two seconds. I’ll be right back!

(Fifteen minutes later)

Young Buck: Okay, I pushed it out to the server, so you can change your password now.

User: Wow! It’s live already? That’s really cool! Thank you! Let me try it out. Let’s see… oh, hmmm. When I try to log in it looks like it crashes.

Young Buck: That doesn’t seem right. I mean, the only thing I changed was…. oh! I know exactly what happened. Give me three minutes, and I’ll be back.

(30 minutes later)

Young Buck: Alright, should be good.

User: Well, I can get in again, and there’s the change password button, but when I click, nothing happens.

Young Buck: That’s not possible. You must have forgotten to clear your cache. Unless…wait, I think I know what’s going on here. Give me 10 minutes.

User: Uh, tell ya what — just don’t worry about it. Maybe I’ll try it again next week.

Young Buck: (Crestfallen) But it’ll only take 10 minutes and I know exactly what the problem is.

User: (With pity) I’m sure you do, but I’ve got a lot of things to get done today.

This is not professional behavior. Imagine if you took your car in for repairs somewhere and things went this way. You’d probably have the Better Business Bureau on the phone shortly or at least be headed to another mechanic. The young buck is sloppy because he’s brash and arrogant. Right?

Or does it just come off that way a little because of how sure he seems when really he’s just eager to please the user and prove himself? Personally, I find that in the overwhelming majority of cases this is really what’s going on. People often get into programming because they like solving problems. And many programmers were some of the smartest kids in their classes growing up — the ones waving their hands frantically, demanding that the teacher give them a chance to show that they know the answer.


As entry-level programmers, the school game is all that they have known. It’s a balance between answering quickly (teacher calling on students, timed exams, cramming in homework, etc.) and answering correctly, with the former often winning. In college, most programming assignments are evaluated by programs that allow you to submit your code as often as you want and get immediate feedback. There are also office hours, so students who visit professors and TAs the most frequently and submit the most work tend to get the best grades and the most feedback. Computer Science students are used to a paradigm where ideas are valued over execution.

Welcome to the Real World, Grasshopper

In the professional world, however, execution reigns supreme. Ideas are cheap. You may be able to rattle off the quick-sort algorithm in pseudo-code faster than anyone around you, but that’s not going to win you any startup capital. With software, even the intellectual property system (USPTO, anyway) is a joke seemingly designed to let Apple, Microsoft and Google endlessly sue each other and occasionally to swat little guys like bugs. Having the idea first and/or quickly is not as important as getting the idea right in the end.

It takes people some time to learn this upon entering the work force, and exchanges like the one I mention above are common. Users more or less say, “Go away and come back to me when you have something that makes my life better and not a second before. I don’t care if you thought of it in five seconds or if it took you five minutes or if it would have taken you five minutes except that the database was actually not normalized to BCNF and blah, blah, blah. Whatever.” Some people figure this out quickly, and some never figure it out. But whatever the speed may be and however much your group may or may not have come to terms with this, there needs to be structure in place to stop the madness.

In other words, a lot of the developers in your group are going to be eager to please. This is especially true if they regularly interact with their users. There is going to be pressure on them to say things like, “sure, I’ll have that for you in 10 minutes.” But they shouldn’t say things like that. If they can say things like that and they can successfully (attempt to) make them happen, your deployment process is not painful enough.

Hurts so Good

Deployment is not to be taken lightly, especially if there is a release and the users are going to be seeing the result of the work. If you can deploy effortlessly in minutes, there’s a very good chance your process is not painful enough. The situation I’ve been describing above suffers from this very problem — it’s too easy for eager crowd-pleasers to deploy and thus it’s too easy for them to depend on users to be their fast feedback mechanism.

Developers are smart and often opportunistic. Getting fast feedback is extremely important to them, and they’ll naturally seek out ways to procure it. If you let them, they’ll use end users as their feedback mechanism (and, in a tone-deaf sense that ignores end-user perception, this is actually optimal), but you can’t permit this. Rather than following that path of least resistance, or at least familiarity from school/hobbyist days, you need to choke off that path and force them to carve a new riverbed. You need to make deployment more painful.

Now there’s “antiseptic on a cut” painful and “shark gnawing on your leg” painful. You want to gun for the former. A lot of deployment processes that enable developers to SPAM end users with non-functional updates are the product of amateur hour: xcopying files to the server, putting an executable on a share drive, zipping things up and emailing them, etc. These things tend to be both easy to do and easy to botch, so simply setting policies in place that prevent developers from doing them is antiseptic on the cut.

Deployments and especially releases to end users need to follow some process, and coding is simply one stop along the way — not the entirety of it. Ideally, there should be the coding and then developer testing, but from there, automated unit and integration tests, code reviews, static analysis, exploratory/manual testing by QA and observation by a UX group can all be part of the mix. These good practices serve to improve quality in and of themselves, but they also serve to prevent spurious, sloppy releases. If you know you can make a change in five minutes and have it in front of a user in six minutes, you’re going to do that. But if you know that you can make the change in five minutes and it’ll be days of going through the entire release process before you hear back from the end user, you’ll start finding other ways to get fast feedback (such as running unit tests, asking other developers to take a look and working closely with QA). By making deployment more painful you ensure that a lot more care goes into it.

But Hurts Too Much

If you’ve been reading along and grinding your teeth in objection to my premise that your deployment process needs to hurt more, I understand. Things shouldn’t be painful, and deliberately hurting yourself is a form of madness. If your deployment process hurts, it’s too painful, even though I just told you it’s not painful enough. But you have to prevent pain in the right way. Xcopy deployment is like being a boxer addicted to morphine — your process is horrible but pain free. Now imagine that you realize that being addicted to drugs is a problem, so you cut back and start feeling the pain each time a professional puncher unloads on you. In one sense, you’re feeling too much pain because it hurts to get punched in the face, but at the same time you’re not feeling enough pain because the amount you’re feeling isn’t causing you to consider another vocation.

The analogy here may not line up exactly, but the idea is similar. A lot of development and deployment processes are problematically painful in that they’re error-prone and difficult, but not painful enough in that they don’t prevent over-eager deploys and bad decisions. The solution? Get off the junk and stop letting yourself get pummeled by human wrecking balls. Or, in software terms, have a process that makes bad deployments hard and good ones very easy.

Now, getting to this release nirvana is not, itself, simple, but life is simple once you get there. The path to it generally involves a lot of automated testing and good planning. It involves a predictable release cadence, such as a sprint, and a commitment to following the process — not making exceptions, cramming things in at the 11th hour, or pushing back release dates. It involves continuous integration rather than periodic, nightmarish feature-branch integrations. It involves a resistance to patching frantically when you make mistakes (you’ll learn a valuable lesson for next time). It involves a simple, fully-automated, easily repeatable build process. It involves a single-click/button-push deployment process. Summed up, it means that every time someone on your team checks in code, a series of automated tests and checks ensures that the code would be a good candidate for production, or the code is rejected from checkin until it would be. It means that your code could be shipped with a reasonably high degree of confidence on every checkin and that whether or not to actually ship is a business decision — not a technical one.

I encourage you to take a look at your build process and deployment process. Is it so easy that Jim in accounting could do it? If so, it’s too easy, and it’s definitely causing you problems. Is it hard enough that you do a lot of checking beforehand because you won’t want to do it again if things go wrong? That’s an improvement, because it forces your hand for early testing and vetting, but it’s still a time-wasting problem. First, think about putting obstacles in place to guard against careless deployment, then think about refining those obstacles into process-helping practices, and finally, think about smoothing over the obstacles in the form of complete automation, leaving nothing but good, easy process.