DaedTech

Stories about Software


Complain-Bragging Your Way to the Top

First, I’d like to make a brief aside to re-introduce Amanda Muledy (@eekitsabug) who, in addition to editing and writing, is also an artist. Lest you think that I have a single iota of drawing talent, any sketches that you see as post illustrations are her work, done specifically for the DaedTech blog. Her drawings are certainly better than my scrounging images from the Wikimedia Commons public domain, so I’m going to be using her work as graphics whenever possible.

Mo Money, Mo Problems

You know what I hate? I hate it when I get caviar stains all over the leather in my brand-new, fully-loaded Porsche SUV. It’s even worse when it happens because I was distracted by all of the people emailing me to ask if I would review their code or be part of their startup or maybe just for my autograph. That really sucks.

Okay, so maybe none of those things applies to me, but you know what does? Being party to people couching shameless bragging and self-promotion as complaints and impositions. I hate being rich and popular because blah-blah-blah-whatever-this-part-doesn’t-matter. In case you missed it, I’m rich and I’m popular.

Surely you know someone who engages in this sort of thing. It’d be nearly impossible not to, because just about everyone does it from time to time, to some degree or another. Most people have the interpersonal acumen to realize that simply blurting out flattering facts about oneself is somewhat distasteful. However, finding some other context in which to mention those same facts is likely to alleviate some of that distaste. And complaining somehow seems like an easy and immediate way to manufacture that alternate context. (The best way would probably be to get some sympathetic plant to lob softballs at you, or else to simply wait for someone else to mention your achievements, but those are elaborate and unreliable, respectively.)

When people complain-brag, it’s most naturally about some subject with which both speaker and audience are familiar. In the weight room, a weightlifter may complain-brag about how sore he is from bench-pressing 320 pounds yesterday. In the college world, a kid may complain-brag about how badly hungover he is from pounding twenty-eight beers last night. Some subjects, such as money, popularity, and achievement, are near-universal, while others, like the examples in this paragraph, are domain-specific. Another domain-specific and curious example of complain-bragging that I observe quite frequently is corporate complain-bragging (or its more innocuous cousin, excuse-bragging).

Mo Meetings, Mo Problems

Think about the following phrases you probably hear (or utter) with regularity:

  1. “Man, it’s so hard to keep up with all of my email” == “A lot of people want or need to talk to me.”
  2. “Ugh, I’ve had so many meetings today that I haven’t been able to get a thing done.” == “I’m important enough to get included on a lot of meeting rosters.”
  3. “Sorry I’m late, but my 10:00 ran way over” == “I’m not really sorry that I’m late and I want you to know that I’m important enough to have back-to-back meetings and to keep you waiting.”
  4. “I have so many code review requests that I won’t get to write code for weeks!” == “I’m the gatekeeper, Plebe, and don’t you forget it.”

There’s an interesting power dynamic in the corporate world that came of age during the heyday of command-and-control style management via intimidation. That power dynamic is one in which managers sit in the seats of power and makers work at their behest. (For a primer on the manager/maker terminology and background on how they interact, see this wonderful post by Paul Graham.) Makers are the peons who work all day, contributing to the bottom line: factory workers, engineers, programmers, accountants… even salesmen, to an extent. Managers are the overhead personnel who supervise them. Makers spend their days making things, and managers spend their days walking from one meeting room to the next, greasing the skids of communication, managing egos, adjudicating personnel decisions, and navigating company realpolitik minefields. Managers also make money, get offices, and hear “how high” when they tell the peons to jump.

It’s traditionally an enviable position, being a manager, which leads many people to envy it. And, in true fake-it-till-you-make-it style, this envy leads those same people to emulate it. So aspiring mid-level managers begin to act the part: dressing more sharply; saying things like “synergy” and “proactive”; bossing other people around; and generally making themselves seem more manager-like. And what do managers do throughout the day? Well, they shuffle around from meeting to meeting, constantly running late and having way too many emails to read. So what does someone wanting to look like a manager without looking like they’re trying to look like a manager do? Why, they complain-brag (and excuse-brag) in such a way as to mimic mid-level management. In effect, this complain-bragging, tardiness, and unresponsiveness becomes a kind of status currency within the organization, with actual power capital exchanged in games of chicken: who can show up a few minutes late to meetings or blow off an email here and there? (Think I’m being cynical? Take a look around when your next meeting is starting late, and I’ll wager dollars to donuts that people show up in rough order of tenure/power/seniority, from most peon to most important.)

Rethinking the Goals

So what’s my point with all this? Believe it or not, this isn’t a tone-deaf or bitter diatribe against “pointy-haired bosses” or the necessity of office politics (which actually fascinates me as a subject). A lot of managers, executives, and people in general aren’t complain-bragging at all when they assess situations, offer excuses, or make apologies. But intentional or not, managers and corporate power brokers have, over the course of decades, created a culture in which complain-bragging about busy-ness, multi-tasking, and “firefighting” is as common and culturally expected as inane conversations about the weather around the water cooler. Rather than complaining about this reality, I’m encouraging myself and anyone reading to avoid getting caught up in it quite so automatically.

Why avoid it? Well, because I’d argue that obligatory complain-bragging creates a mild culture of failure. Let’s revisit the bullet points from the previous section and consider them under the harsh light of process optimization:

  1. “Man, it’s so hard to keep up with all of my email” — If you’re getting that many emails, you’re probably failing to delegate and thus failing as a manager.
  2. “Ugh, I’ve had so many meetings today that I haven’t been able to get a thing done.” — You’re not getting anything done and business continues anyway, “so what is it you’d say… ya do here?”
  3. “Sorry I’m late, but my 10:00 ran way over” — You either don’t have control over your meetings or don’t have a clear agenda for them, so your 10:00 was probably a waste of time.
  4. “I have so many code review requests that I won’t get to write code for weeks!” — You’re a bottleneck and costing the company money. Fix this.

You see, the problem with corporate complain-bragging or complain-bragging in general is that it necessarily involves complaining, which means ceding control over a situation. After all, you don’t typically complain about situations for which you called all the shots and got the desired outcome. And sometimes these are situations that you ought to control or situations that you’d at least look better for controlling. So I’ll leave you with the following thought: next time you find yourself about to complain-brag according to the standard corporate script, make it a simple apology, a suggestion for improvement or, perhaps best of all, a silent vow to fix something. Do these, and other people might do your bragging for you.


A Metaphor to Help You Suck at Writing Software

“No plan survives contact with the enemy” –Helmuth von Moltke the Elder

Bureaucracy 101

Let’s set the scene for a moment. You’re a workaday developer in a workmanlike kind of shop. A “waterfall” shop. (For the backstory on why I put quotes around “waterfall,” see this post.) There is a great show of force when it comes to building software. Grand plans are constructed. Requirements gathering happens in a sliding sort of way: there is one document for vague requirements, another for more specific requirements, a third for even more specific requirements than that, and so on for a few more documents. Then there is the spec, the functional spec, the design spec, and the design document. In fact, there are probably several design documents.

There aren’t just the typical “waterfall” phases of requirements->design->code->test->toss over the wall, but sub-phases and, when the organism grows large enough, sub-sub-phases. There are project managers and business managers and many other kinds of managers. There are things called change requests, and those have their own phases and documents. Requirements gathering is different from requirements elaboration. Design sub-phases include high-level, mid-level, and low-level. If you carefully follow the process, most likely published somewhere as a mural-sized state machine or possibly a Gantt chart unsurpassed in its perfect hierarchical beauty, you will achieve the BUFD nirvana of having the actual writing of the code require absolutely no brain power. Everything will be so perfectly planned that a trained ape could write your software. That trained ape is you, workaday developer. Brilliant business stakeholder minds are hard at work perfecting the process of planning software in such fine-grained detail that you need not trouble yourself with much thinking or problem solving.

Dude, wait a minute. Wat?!? That doesn’t sound desirable at all! One night you wake up, sit bolt upright, and are suddenly fundamentally unsure that this is really the best approach to building a thing with software. Concerned, you approach some kind of senior business project program manager and ask him about the meaning of developer life in your organization. He nods knowingly, understandingly, and puts one arm on your shoulders, extending the other out in a broad, professorial arc to help you share his vision. “You see, my friend,” he says, “writing software is like building a skyscraper…” And the ‘wisdom’ starts to flow. Well, something starts to flow, at any rate.

Let’s Build a Software Skyscraper

As with a skyscraper, you can’t just start building software without planning and a lot of upfront legwork. A skyscraper can’t simply be assembled by building floors, rooms, walls, etc. independently and then slapping them all together, perhaps interchangeably. Everything is necessarily interdependent and tightly coupled. Just like your software. In the skyscraper, you simply can’t build the 20th floor before the 19th floor is built, and you certainly can’t interchange those ‘parts,’ just as in your software you can’t have a GUI without a database and you can’t go swapping persistence models once you have a GUI. In both cases, every decision at every point ripples throughout the project and necessarily affects every future decision. Rooms and floors are set in stone in both location and order of construction, just as the classes and modules in a software project have to be built in a certain order and can never be swapped out from then on.

But the similarities don’t end with the fact that both endeavors involve an inseparable web of complete interdependence. They extend to holistic approaches and cost as well. Since software, like a skyscraper, is so lumbering in nature and so permanent once built, the concept of prototyping it is prima facie absurd. Furthermore, in software and skyscrapers, you can’t have a stripped-down but fully functional version to start with — it’s all or nothing, baby. Because of this, it’s important to make all decisions up front and immediately, even when you might later have more information that would lead to a better-informed decision. There’s no deferring of decisions — you need to lock your architecture up right from the get-go and live with the consequences forever, whatever and however horrible they might turn out to be.

And once your software is constructed, your customers had better be happy with it, because boy-oh-boy is it expensive, cumbersome, and painful to change anything about it. Like replacing the fortieth floor of a skyscraper, refactoring your software requires months of business stoppage and a Herculean effort to get the new stuff in place. It soars over the budget set forth and slams through and past the target date, showering passersby with falling debris all the while.

To put it succinctly in list form:

  1. There is only one sequence in which to build software and very little opportunity for deviation and working in parallel.
  2. Software is not supposed to be modular or swappable — a place for everything and everything in its place.
  3. The concept of prototyping is nonsensical — you get one shot and one shot only.
  4. It is impossible to defer important decisions until more information is available. Pick things like database or markup language early and live with them forever.
  5. Changing anything after construction is exorbitantly expensive and quite possibly dangerous.

Or, to condense even further, this metaphor helps you build software that is brittle and utterly cross-coupled beyond repair. This metaphor is the perfect guide for anyone who wants to write crappy software.
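
To see just how bad this advice is in practice, consider a minimal counterexample in C#. This is only a sketch with invented names (nothing from any real codebase), but it shows the seam that points 2 and 4 declare impossible: a GUI-facing class that depends on a persistence abstraction, so the storage decision can be deferred or swapped without demolishing any “floors.”

    using System.Collections.Generic;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // The seam the skyscraper metaphor forbids: callers depend on an
    // abstraction rather than on a concrete persistence model.
    public interface ICustomerRepository
    {
        Customer FindById(int id);
        void Save(Customer customer);
    }

    // One interchangeable "floor": in-memory persistence, fine for a prototype.
    public class InMemoryCustomerRepository : ICustomerRepository
    {
        private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

        public Customer FindById(int id) { return _store[id]; }
        public void Save(Customer customer) { _store[customer.Id] = customer; }
    }

    // The GUI depends only on the interface, so the persistence decision is
    // deferred to whoever wires up the application -- and can change later.
    public class CustomerScreen
    {
        private readonly ICustomerRepository _repository;

        public CustomerScreen(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public string DisplayName(int id)
        {
            return _repository.FindById(id).Name;
        }
    }

Swapping in a SQL-backed repository later means writing one new class that implements ICustomerRepository, not rebuilding the nineteen floors below it.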

Let’s Build an Agile Building

Once you take the building construction metaphor to its logical conclusion, it seems fairly silly (as a lot of metaphors will if you lean too heavily on them in their weak spots). What’s the source of the disconnect here? To clarify a bit, let’s work backward into the building metaphor starting with good software instead of using it to build bad software.

A year or so ago, I went to a talk given by “Uncle” Bob Martin on software professionalism. If I could find a link to the text of what he said, I would offer it (and please comment if you have one), but lacking that, I’ll paraphrase. Bob invited the audience to consider a proposition where they were contracting to have a house built and maintained by a particular contractor. The way this worked was that you would give the contractor $100 and he would build you anything you wanted in a day. So you could say, “I want a two-bedroom ranch house with a deck, a hot tub, and 1.5 bathrooms,” plop down your $100, and come back tomorrow to find the house built to your specification. If it turned out that you didn’t like something about it or your needs changed, the same deal applied. Want another wing? Want to turn the half bath into a full bath? Want a patio instead of a deck? Make your checklist, call the contractor, give him $100, and the next day your wish would be your house.

From there, Bob invited audience members to weigh two different approaches to house-planning: try-it-and-see versus waterfall’s “big design up front.” In this world, would you hire expert architects to form plans and carpenters to flesh them out? Would you spend weeks or months in a “planning phase”? Or would you plop down $100 and say, “Well, screw it — I’ll just try it and change it if I don’t like it”? This was a rather dramatic moment in the talk, as the listener realized, just before Bob brought it home, that given a choice between agile “try it and see” and waterfall “design everything up front,” nobody sane would choose the latter. The “waterfall” approach to houses (and skyscrapers) is used because a better approach isn’t possible, not because it’s a good approach when there are alternatives.

Whither the Software-Construction Canard?

Given the push toward Agile software development in recent years and the questionable parallels of the metaphor in the first place, why does it persist? There is no shortage of people who think this metaphor is absurd, or at least misguided:

  1. Jason Haley, “It’s not like Building a House”
  2. Terence Parr, “Why writing software is not like engineering”
  3. James Shore, “That Damned Construction Analogy”
  4. A whole series of people on Stack Overflow
  5. Nathaniel T. Schutta, Why Software Development IS Like Building a House (Don’t let the title fool you – give this one a detailed read)
  6. Thomas Guest, “Why Software Development isn’t Like Construction”

If you google things like “software construction analogy” you will find literally dozens of posts like these.

So why the persistence? Well, if you read the last article, by Thomas Guest, you’ll notice a reference to Steve McConnell’s iconic book “Code Complete.” This book has an early chapter that explores a variety of metaphors for software development and offers this one up. In my first DaedTech post I endorsed the metaphor but thought we could do better. I stand by that endorsement not because it’s a good metaphor for how software should be developed but because it’s a good metaphor for how it is developed. As in our hypothetical shop from the first section of the post, many places do use this approach to write (often bad) software. But the presence of the metaphor in McConnell’s book, and for years and years before that, highlights one of the main reasons for its persistence: inertia. It’s been around a long time.

But I think there’s another, more subtle reason it sticks around. Hard as it was to find posts in favor of the software-construction pairing, the ones I did find share an interesting trait. Take a look at this post, for instance. As “PikeWake” gets down to explaining the metaphor, the first thing that he does is talk about project managers and architects (well, the first thing is the software itself, but right after that come the movers and shakers). Somewhere below that, the low-skill grunts who actually write the software get a nod as well. Think about that for a moment. In this analogy, the most important people in the software process are the ones with corner offices, direct reports, and spreadsheets, and the people who actually write the software are fungible drones paid to perform repetitive action rather than skilled work. Is it any wonder that ‘supervisors’ and other vestiges of the pre-Agile, command-and-control era love this metaphor? It might not make for good software, but it sure makes for good justification of roles. It’s comfortable in a world where companies like GitHub are canning the traditional, hierarchical model, valuing the producers over the supervisors, and succeeding.

Perhaps that’s a bit cynical, but I definitely think there’s more than a little truth there. If you stripped out all of the Word documents, Gantt charts, status meetings, and other typical corporate overhead and embraced a world where developers could self-organize, prioritize, and adapt, what would people with a lot of tenure but not a lot of desire or skill at programming do? If there were no actual need for supervision, what would happen? These can be unsettling, game-changing questions, so it’s easier to cast developers as low-skill drones who would be adrift without clever supervisors planning everything for them than to dispense with the illusion and realize that developers are highly skilled, generally very intelligent knowledge workers quite capable of optimizing the processes in which they participate.

In the end, it’s simple. If you want comfort food for the mid-level management set and mediocrity, then invite someone in to sweep his arm professorially and spin a feel-good tale about how building software is like building houses and managing people is like a father nurturing his children. If you want to be effective, leave the construction metaphor in the 1980s where it belongs.


Discoverability Instead of Training and Manuals

Documentation and Training as Failures

Some time back, I was listening to someone explain the finer points of various code that he had written when he lamented the lack of documentation and training available for prospective users of this code. I thought to myself rather blithely and flippantly, “why – just write the code so that documenting it and training people to use it aren’t necessary.” I attributed this to being in a peevish mood or something, but reflecting on this later, I thought earnestly, “snarky Erik is actually right about this.”

Think about the way software development generally goes, especially if you’re developing code to serve as a framework or utility for teammates and other developers. You start off with clean code and good intentions and you hammer away at making some functional software. Often things go well, but here and there you hit snags, and you do a bit of duct-taping and work-around-ing (working around?), vowing to return later to straighten things out. Sometimes you do just that, but other times you realize that time and budget are finite resources for the effort, and you reconcile yourself to shipping something that’s not quite perfect.

But you don’t just ship something imperfect, because you’re diligent and responsible. What do you do instead? You go into those nasty areas of the code and you write inline comments, possibly containing apologies. You make sure that the XML/Java doc comments above the methods and classes are quite thorough as well, and, for good measure, you probably even write up some kind of manual or Word document, perhaps with a Visio diagram. Where the code is clear, you let it speak for itself, and where it’s less than clear, you document.
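
The result tends to look something like this hypothetical C# fragment (the names and the scenario are invented purely for illustration): a scrupulously thorough doc comment that exists precisely because the method can’t speak for itself.

    public class OrderProcessor
    {
        /// <summary>
        /// Recalculates order totals. NOTE: call this only after ApplyDiscounts
        /// and before FinalizeInvoice, or the totals will be silently wrong.
        /// Sorry -- there wasn't time to untangle the ordering dependency.
        /// </summary>
        public void RecalculateTotals(bool skipTax, bool useLegacyRounding)
        {
            // Duct tape: the legacy rounding path is still needed by older clients.
        }
    }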

We could put this another, perhaps more blunt, way: “we generally try to write clean code, and we document when we fail to do so.” We might reasonably think of documentation as something that we do when our work and intentions fail to speak for themselves. This seems a bit iconoclastic in the face of conventional methods of communicating and processing information. I grew up as a programmer reading through the “man pages” to understand all manner of *nix command line utilities, system calls, etc. I learned the nitty-gritty of how concepts like semaphores, IPC, and threading worked in this fashion, so it seems a bit blasphemous, even to me, to accuse the authors of those APIs of failing to be clear or, really, failing in any way.

And yet, here we are. To be clear, I don’t think that writing code for which clients need to read manuals is a failure of design or of correctness or of a project or utility on the whole. But I do think it’s a failure to write self-documenting code. And I think that for decades we’ve had a culture in which this wasn’t viewed as a failure of any kind. What are we chided to do when we get a new appliance or gadget? Well, read the manual. There’s even an iconic acronym of exasperation for people who don’t do so prior to asking questions: RTFM. In the interest of keeping the blog’s PG rating, I won’t say here what it stands for. In this culture, the engineering particulars and internal mechanisms of things have been viewed as unknowable mysteries, and the means by which communication is offered and understanding reached has been large and often formidable manuals with dozens of pages of appendices, notes, and works cited. But is that really the best way to do things in all cases? Aren’t there times when it might be a lot better to make something that screams how it should be used instead of wasting precious time?

[Image: lifejacket instructions, courtesy of “AlMare” via Wikimedia Commons]

A Changing Culture

An interesting thing has happened in recent years, spurred on largely by Apple, initially, and now I’d say by the mobile computing movement in general, since Google and Microsoft have followed suit in their designs. Apple made it cool to toss the manual and assume that it is the responsibility of the maker of the thing, rather than the user, to ensure that understanding is reached. In the development world, champions of clean, self-documenting code have existed prior to whatever Apple might have been doing in the popular market, but the concept certainly got a large, public boost from Apple and its marketing cachet and those who subsequently got on board with the movement.

Look at the current state of applications being written. This fall, I had the privilege of attending That Conference, Dotnet Rocks Edition, and seeing Lwin Maung speak about mobile concepts and the then soon-to-be-released Windows 8 and its app ecosystem. One of the themes of the talk was how apps informed you of how to use them in intuitive ways. You didn’t read a manual to know that the news app had additional content — it told you by leaving the next story link halfway off the side of the screen, practically begging you to paw at it and scroll to the side. The idea of windows with lots of menu headers at the top, from which you drill hierarchically into the application, is gone, replaced instead by visual cues that are borderline impossible to screw up.

As this becomes popular in terms of user experience, I submit that it should also become popular in software development. If you find yourself writing some method with the signature DoStuff(bool, bool, int, bool, string, bool), you’ll probably (hopefully) think, “man, I better document this because no one will ever figure it out.” But I ask you to take it a step further. If you have the time to document it, then why not spend that time fixing it instead of explaining yourself through documentation? Rename DoStuff to describe exactly what stuff it does, cut the parameter list way down, get rid of the Booleans, and make it something that’s pretty much impossible to misunderstand, like string.RemoveCharactersFromTheEnd(6). I bet you don’t need multiple appendices or even a manual to figure out what that does.
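
To sketch that refactoring in code (the extension method below is my own illustrative implementation of the hypothetical name above, not a real .NET API):

    using System;

    public static class StringExtensions
    {
        // The name and the single parameter document the method by themselves.
        public static string RemoveCharactersFromTheEnd(this string value, int count)
        {
            // Guard so that removing more characters than exist yields an empty string.
            return value.Substring(0, Math.Max(0, value.Length - count));
        }
    }

Now "report.csv".RemoveCharactersFromTheEnd(4) returns "report", and nobody had to open a manual, let alone an appendix, to predict that.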

Please note that I’m not suggesting that we toss out all previous ways of doing things or stop documenting altogether. Documentation certainly has a time and a place, and not all products or APIs lend themselves to being completely discoverable. What I am suggesting is that we change our culture as developers from “RTFM!!!!” to “could I have made that clearer?” We’ve come a long way as the discipline of programming has matured, and we have more and more stakeholders who are less and less technical depending on us for more and more things. Communication is increasingly important, and communication on clear, broadly understandable terms at that. You’re no longer writing methods consumed by a handful of fellow geeks who are using your code to put together a BBS about how to program in COBOL. You’re no longer writing code where each byte of memory and disk space is so precious that it’s better to be verbose in voluminous manuals than in method or variable names. You’re (for the most part) no longer writing code where optimizing a few cycles trumps readability. You’re writing code in a time when terms like “agile” and “maintainable” reign supreme, there’s no real cost to self-describing code, and the broader populace in general expects technology to be discoverable. It’s a great time to be a developer — embrace it.


The Hard Switch from Walking to Driving

Have you ever listened to someone describe a process that they follow at work and thought, “that’s completely insane!”? Maybe part of their build process involves manually editing sixty different files. Maybe their computer crashes every twenty minutes, so they only ever do anything for about fifteen minutes at a time. Or worse, maybe they use Rational ClearCase. A common element in these exchanges, where one party expresses disbelief at the other’s modus operandi, is that the person calmly describing the absurdity is usually in a boiled-frog kind of situation. Often, they respond with, “yeah, I guess that isn’t normal.”

But just as often, a curious phenomenon ensues from there. The disbelieving, non-boiled person says, “well, you can easily fix that with a better build process/a new computer/anything but ClearCase,” to which the boiled frog replies, “yeah… that’d be nice,” as if the two were fantasizing about winning the lottery and retiring to Costa Rica. In other words, the boiled frog is unable to conceive of a world where things aren’t nuts, except as a remote fantasy.

I believe there is a relatively simple reason for this apparent breaking of the spirit. Specifically, the bad situation causes its victims to think that all alternative situations within practical reach are equally bad. Have you ever noticed the way, during economic downturns, people predict gloom lasting decades, and during economic boom cycles pundits write about how we’ve moved beyond, nay transcended, bad economic times? It’s the same kind of cognitive bias: assuming that what you’re witnessing must be the norm.

But the phenomenon runs deeper than simply assuming that one’s situation must be normal. It causes the people subject to a bad paradigm to assume that other paradigms share the bad one’s problems. To illustrate, imagine someone with a twelve-mile commute to work. Assuming an average walking speed of three miles per hour, imagine that this person spends each day walking four hours to work and four hours home from work. When he explains his daily routine to you, and you’ve had a moment to bug out your eyes and stammer for a second, you ask him why on earth he doesn’t drive or take a bus or…or something!

He ruefully replies that he already spends eight hours per day getting to and from work, so he’s not going to add learning how to operate a car or looking up a bus schedule to his already-busy life. Besides, if eight hours of winter walking are cold, just imagine how cold he’ll be if he spends those eight hours sitting still in a car. No, better just to go with what works now.

Absurd as it may seem, I’ve seen rationale like this from developers, groups, etc. when it comes to tooling and processes. A proposed switch or improvement is rejected because of a fundamental failure to understand the problem being solved. The lesson to take away from this is to step outside of your cognitive biases as frequently as possible by remaining open to the idea of not just tweaks but game changers. Allow yourself to understand and imagine completely different ways of doing things so that you’re not stuck walking in an age of motorized transport. And if you’re trying to sell a walking commuter on a new technology, remember that it might require a little bit of extra prodding, nudging, and explaining to break the trance caused by the natural cognitive bias. Whether breaking through your own or someone else’s, it’s worth it.


Walking the Line between Fanboy and Luddite

Are Unit Tests Overused?

I was reading this blog post by Andrew Hunter recently, in which he poses the question of whether unit tests are overused. The core of his argument is a rather nuanced one that seems to hinge on two main points of wariness:

  1. Developers may view unit tests as a golden hammer and ignore other important forms of verification, such as integration tests.
  2. Unit tests that are too finely grained cause maintenance problems by breaking encapsulation (illustrated in the sketch just after this list).
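
To make that second point, and the design answer to it, concrete, here is a hypothetical NUnit-style illustration (ResultCache and every other name here are invented for this post, not taken from Hunter’s article). The first test is “too finely grained” in exactly the sense he warns about: it welds itself to a private field and breaks under any internal refactoring. The second verifies the same behavior through the public contract:

    using System.Collections.Generic;
    using System.Reflection;
    using NUnit.Framework;

    public class ResultCache
    {
        private readonly Dictionary<string, int> _entries = new Dictionary<string, int>();

        public void Store(string key, int value) { _entries[key] = value; }
        public int Retrieve(string key) { return _entries[key]; }
    }

    [TestFixture]
    public class ResultCacheTests
    {
        // Brittle: reaches into a private field, so renaming _entries breaks the test.
        [Test]
        public void Internal_dictionary_contains_stored_key()
        {
            var cache = new ResultCache();
            cache.Store("answer", 42);

            var field = typeof(ResultCache).GetField("_entries",
                BindingFlags.NonPublic | BindingFlags.Instance);
            var entries = (Dictionary<string, int>)field.GetValue(cache);

            Assert.IsTrue(entries.ContainsKey("answer"));
        }

        // Robust: exercises only the public contract, leaving internals free to change.
        [Test]
        public void Stored_value_can_be_retrieved()
        {
            var cache = new ResultCache();
            cache.Store("answer", 42);

            Assert.AreEqual(42, cache.Retrieve("answer"));
        }
    }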

I think the former is a great point, but I’m not really sold on the latter, since I’d argue that the thing to do would be to create a design where you don’t have this problem in the first place (i.e., the problem isn’t the number of tests written but the design of the system). But I don’t have any serious qualms with the point of the article, and as an ardent unit test proponent I do like reading things like this from time to time, because the second you stop questioning the merit of your own practices is the same second you become the “because I’ve always done it that way, so get off my lawn” guy. Once I finished reading the article and scrolled down to the comments, however, I noticed an interesting trend.

From the Peanut Gallery

There were a total of 11 comments on the article. One of them was a complete non sequitur (probably spam), so let’s shave the ranks down to 10 comments, which conveniently makes the math easier for percentages. Four readers agreed with the author’s nuanced “we’re overdoing it” take, four took issue in defense of unit testing, and two were essentially rants against unit testing that mistook the author’s questioning of how far unit tests should be taken for sympathy with the position that they aren’t necessary. And thus, among these blog-reading developers (who, many would argue, are the most informed developers), 40% defend TDD/unit testing wholesale, 40% are generally amenable to it but think that some take it too far, and 20% are (angrily) opposed to the practice.

Taking them at face value, exactly zero of the 20% demographic have experience writing unit tests. One says, “In over 10 years of development I can count the number of unit tests I’ve written on one hand,” and another says, “I event[sic] tried to learn how TDD could help me.” Of the others, it appears that 100% have experience writing unit tests, though with some it is less clear than with others. None of them cops to having zero experience the way the detractors do. So among people with experience writing unit tests, there is a 50/50 split as to whether TDD practitioners have it right or whether they should write fewer unit tests and more integration tests. Among people with no experience writing unit tests, there is a 100/0 split as to whether or not writing unit tests is stupid.

That’s all just by the numbers, but if you actually delve into the ‘logic’ fueling the anti-testing posts, it’s garden-variety fallacy. One argument takes the form of a simple false syllogism: “Guy X did TDD and guy X sucked, therefore TDD sucks.” The other takes the form of an argument from ignorance: “I tried to learn TDD and didn’t like it/failed at it, ergo there is no benefit to it.” (I say “failed” because, in spite of its brevity, the post contains a fundamental misunderstanding of the ‘rules’ of TDD, so there was either a failure to understand or the argument is also a straw man.)

To play dime-store psychologist for a moment, it seems pretty likely to me that statements and opinions like these are attempts to rationalize one’s choices, and that the posters protest too much, methinks — angry, exasperated disparaging of automated testing by those who have never tried it is likely hiding a bit of internal insecurity that the overwhelming consensus in the developer world is right and they are wrong. After all, this blog post and the subsequent comments are sort of like a group of chefs debating whether something would taste better with two teaspoons of sugar or three when along comes an admitted non-chef who says, “I hate sugar because this guy I don’t like puts it in his coffee and I’ve never eaten it anyway, so it can’t be any good.” The chefs probably look at him with a suffering, bemused expression and give out with an “Aaannnnywho…”

Skeptic or Luddite, Informed or Fanboy?

Another way that one might describe this situation, and pass judgment on both the development industry as a whole and these commenters, is to label the latter the developer equivalent of Luddites. After all, the debate as to the merits of unit testing is widely considered over, and most of those who don’t do it say things like “I’d like to start” or make up weird excuses. Unit testing is The Way Forward (caps intentional), and commenters like these are standing around, wishing for a simpler time when their approach was valued and less was asked of them.

But is that really fair? I mean, if you read my posts, my opinion on the merits of TDD and automated verification in general is no secret, but there’s an interesting issue at play here. Are people with attitudes like this really Luddites, or are they just conservative, skeptical types providing a counterpoint to the early-adopter types only ever interested in the new hotness of the week? You know the types that I mean – the ones who think TDD was so last year and BDD was so last month, and now it’s really all about YDD, which is so new and hot that nobody, including those doing it, knows what the Y stands for yet.

So in drawing a contrast between these two roughly described archetypes, how are we to distinguish between “this might be a blip or passing fad” and “this seems to be a new way of doing things that has real value”? Walk too far on one side of the line and you risk being left behind or not taken seriously (and subsequently leaving angry justifications in blog comments); walk too far on the other side and you’ll engage in counterproductive thrashing and tilting at windmills, becoming a jack of all new trades while mastering none. Here are some things that I do personally in my attempt to walk this fine line, offered as advice if you’re interested:

  1. Pick out some successful developers/bloggers/authors/experts that you admire and respect, and scan their opinions on things when offered. It may seem a little “follow the herd,” but I view it as akin to asking friends for movie recommendations without going to see every movie that comes out.
  2. Don’t adopt any new practice/approach/etc. unless you can articulate a problem that it solves for you.
  3. Limit your adoption bandwidth: if you’ve decided to adopt a new persistence model, such as a NoSQL alternative to an RDBMS, you might not also want to adopt a new language at the same time (assuming that you’re doing this on someone else’s dime, anyway).
  4. Let others kick the tires a bit and then try it out. This hedges your bet on whether something will fizzle and die before you commit to it.
  5. If you decide not to adopt something that seems new and hot, keep your finger on the pulse of it and read about its progress and the pains and joys of its adopters. It is possible to pass on adopting something without avoiding (or refusing to learn about) it.
  6. If you don’t see the value in something or you simply don’t have time/interest in it, don’t presume to think it has no value.
  7. Go to shows/conferences/talks to see what all the fuss is about. This is a low-impact, low-commitment way to sample new approaches firsthand.

If you have suggestions for how to approach this balancing act, I’d be interested to hear them as well, because while I think there’s definitely some margin for error, it’s important, if not easy, to avoid being either the person whose life is a never-ending string of partially completed projects or the person who has had the same year of COBOL programming experience for each of the past 30 years.