DaedTech

Stories about Software


The Council of Elders Anti-Pattern

Convene the High Council

Today I’d like to talk about something that I’ve observed on some occasions throughout my career. It sort of builds on the concepts that I’ve touched on in previous writings “How to Keep your Best Programmers” and the Expert Beginner posts, including Bruce Webster’s “Dead Sea Effect.” The phenomenon is an anti-pattern that is unique to shops with fairly mature Dead Sea/Expert Beginner presences, and I’ll call it “Council of Elders.”

To understand what I mean by this, let’s consider what happens in an organization with a more vital software group full of engaged, competent, and improving engineers and not bogged down in the politics and mediocrity of seniority-based advancement. In such a group, natural divisions of labor emerge based on relative expertise. For example, you might have an effective team with the “database guy” and the “architecture gal” and “markup guru” and the “domain knowledge expert” or some such thing. Each resident expert handles decision making within his or her sphere of influence and trusts other team members to likewise do the right thing in their own areas of strength. There might be occasional debate about things that blur the lines or occur on the boundary between two experts, but that’s just part of a healthy group dynamic.

And truly, this is a healthy group. There is no dead weight, and all of the group members have autonomy within a subset of the work that is done, meaning that people are happy. And that happiness breeds mutual trust and productivity. I’m oversimplifying and prettying up the counterexample a bit, but it really is something of a virtuous cycle with team members happily playing to their own strengths and areas of interest.

But what happens in particularly salty Dead Sea shops where authority comes neither from merit nor from expertise, but rather from tenure with the company? What happens when the roost is ruled by Expert Beginners? Well, for one thing, the lords and ladies here tend to guard their fiefdoms more jealously since expertise on the part of others is more threat than benefit. Perhaps more importantly and broadly, however, knowledge and expertise are devalued in favor of personal politics and influence, with debates being won on the basis of “who are you and how loud/charming/angry/glib/etc. are you” rather than on the merit of ideas. The currency of Dead Sea departments is everything but ideas–in benevolent ones, it may be “how long have you been here” or “how nice a person are you,” and, in “high pressure culture” ones, it might simply be psychopathy or other cutthroat forms of “might makes right.” And with evaluation of ideas out the window, every council member is freed to hold forth as an expert on every topic, regardless of how much or little he knows about that topic. Nobody is going to dispute anything he says–non-members are cowed into submission, and fellow members recognize the importance of putting on a unified public front, since they want to be able to do the same without being questioned.

If you put this political yeast in the oven and let it rise, some of the results are fairly predictable: idea stagnation, increasingly bone-headed solutions to problems, difficulty keeping talent, etc. But an interesting consequence isn’t necessarily intuitive–namely, that you’ll wind up with a kind of cabal of long-tenured people that collectively makes all decisions, however big or small. I call this the “Council of Elders,” and it’s like the image in one of those Magic Eye paintings: invisible until you see it, and then impossible to miss.

The Council of Elders is sort of like the Supreme Court of the department, and it’s actually surprisingly democratic as opposed to the more expected ladder system which ranks people by years and months with the company (or if you’re a fan of The Simpsons, the system in the Stonecutters episode where all members of the club are assigned a numeric rank in the order of joining, which determines their authority). The reason that it’s democratic is that actually assigning rank based on years/months of tenure would unceremoniously poke a hole in any illusion of meritocracy. So the council generally makes entrance to the club a matter of tenure, but status within the club a shifting matter of alliances and status games once past the velvet rope.

The Council is fundamentally incapable of delegation or prioritization of decision making. Since entrance is simply a matter of “paying your dues” (i.e. waiting your turn) and largely earned by Expert Beginners, it’s really impossible to divide up decision making based on expertise. The members tend to be very good at (or at least used to) company politics and procedures but not much else. They mostly have the same ‘skill’ set. The lack of prioritization comes from the main currency in the group being status. If a decision, however small, is made by someone not on the Council, it threatens to undermine the very Council itself, so a policy of prevention is adopted and any attempts at circumvention are met with swift and terrible justice (in the form of whatever corporate demerits are in place).

Recognizing Your Elders

What does this council look like in the wild, and how do you know if one is pulling your strings? Here’s a set of symptoms that you’re being governed by a Council of Elders:

  • In any meeting convened to make a decision, the same group of people with minor variants is always present.
  • Members of the software group are frequently given vague instructions to “talk to so and so before you do anything because his cousin’s dog’s trainer once did something like that or something.”
  • There is a roster, emailed out or tacked up, of people who can “clear” you to do something (e.g. they give code reviews, approve time spent, etc.)
  • Sparks fly when something wasn’t “cleared” with someone, even when a few others “approved” it and it apparently has nothing to do with him. “Someone should have told me.” More than one or two people like this and you have a Council.
  • People in the group often feel caught between a rock and a hard place when involved in the political posturing of the Council (e.g. Yoda tells the intern Padawan to implement the new hours tracking app using SQL Server, but then Mace Windu screams at him that this is a MySQL shop–the Council settles this matter later with no apologies)
  • There is a junta of team members that seem to do nothing but shuffle from meeting to meeting all day like mid-level managers.
  • There are regular meetings viewed by newbies and junior developers as rites of passage.

These are just easy ways to detect Councils in their natural habitat: “where there’s smoke, there’s fire” situations. But lest you think that I’m trying to paint every software department with senior developers that go to meetings with this brush, here is a list of specific criteria — “minimum Council Requirements,” if you will:

  • Lack of clearly defined roles on the group/team or else lack of clear definition for the assigned roles (e.g. there is no ‘architect’ or if there is, no one really knows what that means).
  • A “dues-paying” culture, with promotions, power, and influence largely determined by length of tenure.
  • Lack of objective evaluation criteria of performance, skill and decision-making acumen (i.e. answers to all questions are considered subjective and matters of opinion).
  • Proposed changes from above are met with claims of technical infeasibility and proposed changes from juniors/newbies or other groups are met with vague refusals or simply aren’t acknowledged at all.
  • Actions of any significance are guarded with gatekeeper policies (official code reviews, document approval, sign-off on phases, etc).
  • Line manager tacitly approves of, endorses or is the product of the Council.
  • A moderate to large degree of institutional bureaucracy is in place.
  • New technologies, techniques, approaches, etc. are met systematically with culturally entrenched derision and skepticism.

Not a Harmless Curiosity

At this point, you might think to yourself, “So what? It might not be ideal, but rewarding people with a little gratifying power in exchange for company loyalty is common and mostly harmless.” Or perhaps you’re thinking that I’m overly cynical and that the Council generally has good advice to dispense–wisdom won in the trenches. Of course all situations are different, but I would argue that a Council of Elders has a relatively noticeable and negative effect on morale, productivity, and general functioning of a group.

  • The Council is a legitimate bottleneck in its micromanagement of other team members, severely hampering their productivity even with the best assumption of its judiciousness.
  • A group of people that spends all its time debating one another over how to rule on every matter doesn’t actually get much work done. The SCOTUS, for example, hasn’t represented a lot of clients lately, but that’s because their job is to be a ruling council–yours probably has titles like, “Senior Software Engineer.”
  • People lacking expertise but put in positions of authority tend to overcompensate by forming strong opinions and sticking to them stubbornly. A room full of people meeting this criterion is going to resemble the US House of Representatives with its gridlock more than a software team.
  • Major problems are likely to languish without solutions because the committee doesn’t do much prioritizing and is likely to be sidetracked for a few days by the contentious issue of what color to make the X for closing the window.
  • Important decisions are made based on interpersonal dynamics among the council rather than merit. Jones might have the best idea, but Jones shorted the check at lunch that day, so the other Council Members freeze him out.
  • Councils make it hard to give meaningful roles or titles to anyone and thus give rise to preposterous situations in which departments have eight ‘architects’ and five developers, or where a project manager decides what database to use while a DBA writes requirements. If everyone on the Council has authority and is an expert at everything, project roles are meaningless anyway.
  • Even under the best circumstances, democratic voting on software design trends toward mediocrity: see Design by Committee Anti-Pattern
  • People toiling away under the rule of a Council, if they don’t leave, will tend to wind up checked out and indifferent. Either that, or they’ll be assimilated into the Council culture.
  • The Council is a natural preservation agent of the Dead Sea problem. Being micromanaged by a team of people whose best qualifications are having been around for a while and negotiating power will separate the wheat from the chaff, but not in the good way.
  • The only thing that unites the Council is outside threats from above or below. If managers or newer members want change, the Council will lock together like the Praetorian Guard to preserve the status quo, however dysfunctional that may be.
  • Suggestions for improvement are construed with near universality as threats.

If you’re in a position to do so, I’d suggest breaking up the cartel. Figure out what people are good at and give them meaningful roles. Keep responsibilities divided, and not only will autonomy become more egalitarian, but also people tenured and new alike will develop complementary skills and areas of expertise. With people working together rather than forming shifting alliances, you’ll find prioritizing tasks, dividing up work, and getting things done becomes easier. The Council’s reptile brain will take over and it will fight you every step of the way when you try to disband it, but once you do, it will be good for everyone.


TDD: Simplest is not Stupidest

Where the Message Gets Lost In Teaching TDD

I recently answered a programmers’ stackexchange post about test-driven development. (As an aside, it will be cool when me linking to a SE question drives more traffic their way than them linking to me drives my way 🙂 ). As I’m wont to do, I said a lot in the answer there, but I’d like to expand a facet of my answer into a blog post that hopefully clarifies an aspect of Test-Driven Development (TDD) for people–at least, for people who see the practice the way that I do.

One of the mainstays of showcasing test-driven development is to show some extremely bone-headed ways to get tests to pass. I do this myself when I’m showing people how to follow TDD, and the reason is to drive home the point “do the simplest thing.” For instance, I was recently putting together such a demo and started out with the following code:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void IsEven_Returns_False_For_1()
{
    var inspector = new NumberInspector();
 
    Assert.IsFalse(inspector.IsEven(1));
}

public class NumberInspector
{
    public bool IsEven(int target)
    {
        return false;
    }
}

This is how the code looked after going from the “red” to the “green” portion of the cycle. When I used CodeRush to define the IsEven method, it defaulted to throwing NotImplementedException, which constituted a failure. To make it pass, I just changed that to “return false.”
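For reference, the method in its red phase looked something like this (a reconstruction; per the above, the generated stub simply threw):

public bool IsEven(int target)
{
    throw new NotImplementedException(); // generated default: guarantees a failing (red) test
}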

The reason that this is such a common way to explain TDD is that the practice is generally being introduced to people who are used to approaching problems monolithically, as described in this post I wrote a while back. For people used to solving problems this way, the question isn’t, “how do I get the right value for one,” but rather, “how do I solve it for all integers and how do I ensure that it runs in constant time and is the modulo operator as efficient as bit shifting and what do I do if the user wants to do it for decimal types should I truncate or round or throw an exception and whoah, I’m freaking out man!” There’s a tendency, often fired in the forge of undergrad CS programs, to believe that the entire algorithm has to be conceived, envisioned, and drawn up in its entirety before the first character of code is written.

So TDD is taught the way it is to provide contrast. I show people an example like this to say, “forget all that other stuff–all you need to do is get this one test passing for this one input and just assume that this will be the only input ever, go, now!” TDD is supposed to be fast, and it’s supposed to help you solve just one problem at a time. The fact that returning false won’t work for two isn’t your problem–it’s the problem of you forty-five seconds from now, so there’s no reason for you to bother with it. Live a little–procrastinate!

You refine your algorithm only as the inputs mandate it, and you pick your inputs so as to get the code doing what you want. For instance, after putting in the “return false” and getting the first test passing, it’s pretty apparent that this won’t work for the input “2”. So now you’ve got your next problem–you write the test for 2 and then you set about getting it to pass, say with “return target == 2”. That’s still not great. But it’s better, it was fast, and now your code solves two cases instead of just the one.
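To make that concrete, here is a minimal sketch of that second cycle, mirroring the style of the earlier example (the test name is mine, purely illustrative):

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void IsEven_Returns_True_For_2()
{
    var inspector = new NumberInspector();

    Assert.IsTrue(inspector.IsEven(2));
}

public bool IsEven(int target)
{
    return target == 2; // false for 1, true for 2; anything else isn't our problem yet
}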

Running off the Rails

But there is a tendency I think, as demonstrated by Kristof’s question, for TDD teachers to give the wrong impression. If you were to write a test for 3, “return target == 2” would pass and you might move on to 4. What do you do at 4? How about “return target == 2 || target == 4;”

So far we’ve been moving in a good direction, but if you take off your “simplest thing” hat for a moment and think about basic programming and math, you can see that we’re starting down a pretty ominous path. After throwing in a 6 and an 8 to the or clause, you might simply decide to use a loop to iterate through all even numbers up to int.MaxValue, or-ing a return value with itself to see if target is any of them.

public bool IsEven(int target)
{
    // Deliberately obtuse: compare target against every non-negative even number
    // below int.MaxValue, or-ing the results together.
    bool isEven = false;
    for (int index = 0; index < int.MaxValue - 1; index += 2)
        isEven |= target == index;
    return isEven;
}

Yikes! What went wrong? How did we wind up doing something so obtuse following the red-green-refactor principles of TDD? Two considerations, one reason: "simplest" isn't "stupidest."

Simplicity Reconsidered

The first consideration is that simple-complex is not measured on the same scale as stupid-clever. The two have a case-by-case, often interconnected relationship, but simple and stupid aren't the same just as complex and clever aren't the same. So the fact that something is the first thing you think of or the most brainless thing that you think of doesn't mean that it's the simplest thing you could think of. What's the simplest way to get an empty boolean method to return false? "return false;" has no branches and one hardcoded piece of logic. What's the simplest way that you could get a boolean method to return false for 1 and true for 2? "return target == 2" accomplishes the task with a single conditional of incredibly simple math. How about false for 1 and true for 2 and 4? "return target % 2 == 0" accomplishes the task with a single conditional of slightly more involved math. "return target == 2 || target == 4" accomplishes the task with a single conditional containing two clauses (could also be two conditionals). Modulo arithmetic is more elegant/sophisticated, but it is also simpler.
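Laid out side by side, the progression reads like this (a sketch; each version replaces the one before it):

// Simplest thing that passes for 1:
public bool IsEven(int target) { return false; }

// Simplest thing that passes for 1 and 2:
public bool IsEven(int target) { return target == 2; }

// Stupidest thing that passes for 1, 2, and 4 (grows a clause per new input):
public bool IsEven(int target) { return target == 2 || target == 4; }

// Simplest thing that passes for 1, 2, and 4 (and, incidentally, for everything):
public bool IsEven(int target) { return target % 2 == 0; }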

Now, I fully understand the importance in TDD of proceeding methodically and solving problems in cadence. If you can't think of the modulo solution, it's perfectly valid to use the or condition and put in another data point such as testing for IsEven(6). Or perhaps you get all tests passing with the more obtuse solution and then spend the refactor phase refining the algorithm. Certainly nothing wrong with either approach, but at some point you have to make the jump from obtuse to simple, and the real "aha!" moment with TDD comes when you start to recognize the fundamental difference between the two, which is what I'll call the second consideration.

The second consideration is that "simplest" advances an algorithm where "stupidest" does not. To understand what I mean, consider this table:

[Table: each new test case adds another conditional clause to the method]

In every case that you add a test, you're adding complexity to the method. This is ultimately not sustainable. You'll never wind up sticking code in production if you need to modify the algorithm every time a new input is sent your way. Well, I shouldn't say never–the Brute Forces are busily cranking these out for you to enjoy on the Daily WTF. But you aren't Brute Force–TDD isn't his style. And because you're not, you need to use either the green or refactor phase to do the simplest possible thing to advance your algorithm.

A great way to do this is to take stock after each cycle, before you write your next failing test and clarify to yourself how you've gotten closer to being done. After the green-refactor, you should be able to note a game-changing refinement. For instance:

[Table: the game-changing refinement noted after each red-green-refactor cycle]

Notice the difference here. In the first two entries, we make real progress. We go from no method to a method and then from a method with one hard-coded value to one that can make the distinction we want for a limited set of values. On the next line, our gains are purely superficial–we grow our limited set from distinguishing between 2 values to 3. That's not good enough, so we can use the refactor cycle to go from our limited set to the complete set.

It might not always be possible to go from limited to complete like that, but you should get somewhere. Maybe you somehow handle all values under 100 or all positive or all negative values. Whatever the case may be, it should cover more ground and be more general. Because really, TDD at its core is a mechanism to help you start with concrete test cases and tease out an algorithm that becomes increasingly generalized.

So please remember that the discipline isn't to do the stupidest or most obtuse thing that works. The discipline is to break a large problem into a series of comparably simple problems that you can solve in sequence without getting ahead of yourself. And this is achieved by simplicity and generalizing--not by brute force.


Wherefore Thou Shalt Fail at Software Requirements

On Being Deliberately Unclear

For whatever reason lately, I’ve been drawing a lot of inspiration from Programmers’ Stack Exchange, and today brings another such post. However, unlike other questions, reading this one seriously made me sad. The questions, the answers, its existence… all of it made me sad in the same kind of way that it’s vaguely depressing to think of how many hours of our lives we lose to traffic jams.

I had a conversation with a coworker once that involved a good-natured disagreement over the role of legalese in our lives. My (cynical) contention was that legalese has evolved in purpose–has even self-bastardized over the years–to go from something designed to clarify communication to something that is specifically used to hinder communication. To put it another way, by freezing the language of law in pedantic and flowery period-speak from the 18th century, lawyers have effectively created a throwback to the feudal practice of having a language for the nobles and a different language for the commoners. This language of the lawmaking nobles builds a bit of social job security in a way that writers of inscrutable code have picked up on–if you make it so no one understands your work and yet everyone depends on your work, then they have no choice but to keep you around. (Lawyers also have the benefit of being able to make laws to force you to need lawyers, which never hurts.)

My colleague appropriately pointed out that the nature of jurisprudence is such that it makes sense for lawyers to phrase things in documents in such a way that they can point to decided case law as precedent for what they’re doing. In other words, it’s ironically pragmatic to use language that no one understands because the alternative is that you write it in language less than 200 years old but then have to systematically re-defend every paragraph in various lawsuits later. I conceded this as a fair point, but thought it pointed to a more systemic flaw. It seems to me that any framework for bargaining between two parties that inherently makes communication harder is, ipso facto, something of a failure. He disagreed, being of the opinion that this old language is actually more precise and clear than today’s language. At this point, we agreed to disagree. I’ve since pondered the subject of language rules, language evolution, and precision. (As an aside, this bit of back-story is how the conversation went to the best of my recollection, so the details may be a little rough around the edges. It is not my intention to misrepresent anyone’s positions.)

Requirement Shock and Awe

Because of this pondering, my strange brain immediately linked the SE post with this conversation from some months back. I worked for a company once where the project manager thunderously plopped down some kind of 200-page Word document, in outline form, that contained probably 17,000 eerily and boringly similar statements:

  • 1.17.94.6b The system shall display an icon of color pink at such time as the button of color red is clicked with the pointing device commonly known as a “mouse.”
  • 1.17.94.6c The system shall display an icon of color pink at such time as the button of color red is clicked with the pointing device commonly known as a “mouse,” 1st planet from Sol, Mercury, spins in retrograde, and the utilizer utilizes the menu option numbering 12.
  • 1.17.94.6d The system shall display an icon of color mauve at such time as the button of color purple is clicked with the pointing device commonly known as a “mouse.”
  • 1.17.94.6e The system shall require utilizers to change their underwear every half hour.
  • 1.17.94.6f The system shall require utilizers to wear underwear on the outside of their pants so that it can check.
  • 1.17.94.6g The system shall guarantee that all children under the age of 16 are now… 16.
  • 1.17.94.6h The system shalt not make graven images.

A week later, when I asked a clarifying question about some point in a meeting, he furrowed his brow in disappointment and asked if I had read the functional-requirement-elaboration-dohickey-spec-200-pager, to which I replied that I had done my best, but, much like the dictionary, I had failed at complete memorization. I stated furthermore that expecting anyone to gain a good working understanding of the prospective software from this thing was silly since people don’t picture hypothetical software at that level of granularity. And besides, the document will be woefully wrong by the time the actual software is written anyway.

While I was at it with the unpopular opinions, I later told the project manager (who wasn’t the one that wrote this behemoth–just the messenger) that the purpose of this document seemed not to be clarity at all, but a mimicking of legalese. I gathered that the author was interested primarily in creating some kind of “heads I win, tails you lose” situation. This juggernaut of an artifact assured that everyone just kind of winged it with software development, but that when the barrel crashed at the bottom of the Waterfall, the document author could accuse the devs of not fulfilling their end of the bargain. Really, the document was so mind-numbing and confusing as to make understanding it a relative matter of opinion anyway. Having gotten that out in the open, I proceeded to write software for the release that was ahead of schedule and accepted, and I never looked at the intimidating requirements encyclopedia again. I doubt anyone ever did or will if they’re not looking for evidentiary support in some kind of blame game.

Well, time went by, and I filed that bit of silliness under “lol @ waterfall.” But there had been a coworker who was party to this conversation that mentioned something about the RFCs, or at least the notion of common definition, when I had been on my soapbox. That is, “shall” was used because it meant something specific, say, as compared to “will” or “might” or “should.” And because those specific meanings, apparently defined for eternity in the hallowed and dusty annals of the RFC, will live on beyond the English language or even the human race, it is those meanings we shall (should? might? must?) use. I mention these RFCs because the accepted answer to that dreadful Stack Exchange question mentioned them, clearing up once and for all and to all, including Bill Clinton, what the definition of “is” is (at least in the case of future tenses simple, perfect, continuous, and perfect continuous).

The only problem is that the RFCs weren’t the first place that someone had trotted out a bit of self-important window-dressing when it came to simple verb usage. Lawyers had beaten them to the punch:

First, lawyers regularly misuse it to mean something other than “has a duty to.” It has become so corrupted by misuse that it has no firm meaning. Second—and related to the first—it breeds litigation. There are 76 pages in “Words and Phrases” (a legal reference) that summarize hundreds of cases interpreting “shall.” Third, nobody uses “shall” in common speech. It’s one more example of unnecessary lawyer talk. Nobody says, “You shall finish the project in a week.” For all these reasons, “must” is a better choice, and the change has already started to take place. The new Federal Rules of Appellate Procedure, for instance, use “must,” not “shall.”

“Shall” isn’t plain English… But legal drafters use “shall” incessantly. They learn it by osmosis in law school, and the lesson is fortified in law practice.

Ask a drafter what “shall” means, and you’ll hear that it’s a mandatory word—opposed to the permissive “may”. Although this isn’t a lie, it’s a gross inaccuracy… Often, it’s true, “shall” is mandatory… Yet the word frequently bears other meanings—sometimes even masquerading as a synonym of “may”… In just about every jurisdiction, courts have held that “shall” can mean not just “must” and “may”, but also “will” and “is”. Increasingly, official drafting bodies are recognizing the problem… Many… drafters have adopted the “shall-less” style… You should do the same.

Bringing things back full circle, we have lawyers misusing a variant of “to be”–one of the most basic imaginable concepts in the English language–in order to sound more official and to confuse and intimidate readers. We have the world’s middle manager, pointy-haired types wanting to get in on that self-important action and adopting this language to stack the deck in CYA poker. And, mercifully, we have a group defining the new Federal Rules of Appellate Procedure who seems to value clarity over bombastic shows of rhetorical force designed to overpower objections and confuse skeptics. On the whole, we have a whole lot of people using language to impress, confuse, and intimidate, and a stalwart few trying to be precise.

A Failure of Both Clarity and Modernity

But believe it or not, the point of this post isn’t specifically to rag on people for substituting the odd “utilize” for “use,” sports-announcer-style, to try to sound smart. From academia to corporate meetings to dating sites (and courtrooms and requirements documents), that sort of thing is far too widespread for me to take it as a personal crusade. It might make me a hypocrite anyway to try. My point here is a more subtle one than that, and a more subtle one than pointing out the inherent, RFC futility of trying to get the world to agree to some made up definitions of various tense forms of “to be.”

The meat of the issue is that using slightly different variants of common words to emphasize things is a very dated way of defining requirements. Think of this kind of nonsense: “well… shall means that it’s the developer’s responsibility, but shalt means that it’s an act of God, and will means that project management has to help, and optional means optional and blah, blah blah…” What’s really going on here? I’d say what’s happening is some half-baked attempt to define concepts like “priority” and “responsible party” and “dependency.” You know, the kind of things that would do well, in, say, a database as fields. Pointy-haireds come for the semantic quibbling and they stay for the trumped-up buzz speak, so it winds up being harder than it should be to move away from a system where you get to bust out the fine verbal china and say impressive sounding things like “shall.”

The real problem here is that this silliness is a holdover from days when requirements were captured on Word Documents (or typewriters) as long, flowery prose and evaluated as a gestalt. But with modern ALM systems, work items, traceability tools, etc., this is colossal helping of fail. Imagine a requirement phrased as “when a user clicks the submit button, a JSON message is sent to the server.” That string can sit in a database along with a boolean that’s set to true when this accurately describes the world and false when it doesn’t. You want priority? How about an integer instead of bickering over whether it should be “JSON message {shall, will, ought to, might, mayhap, by-Jove-it-had-better, shalt, may, etc.} sent to the server”? When you’re coding, it’s an anti-pattern to parse giant chunks of text in a database field, so why isn’t it considered one to do it during requirements definition?
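As a purely hypothetical sketch (the type and field names are mine, not any particular ALM tool’s), the same requirement as structured data might look like this:

using System.Collections.Generic;

public class Requirement
{
    public int Id { get; set; }

    // "When a user clicks the submit button, a JSON message is sent to the server."
    public string Description { get; set; }

    // True when the statement accurately describes the world; false when it doesn't.
    public bool IsSatisfied { get; set; }

    // An integer beats bickering over "shall" versus "will."
    public int Priority { get; set; }

    public string ResponsibleParty { get; set; }

    // Ids of prerequisite requirements, instead of parsing dependencies out of prose.
    public List<int> DependsOn { get; set; }
}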

We have far better tools for capturing and managing requirements than word processors, pointless arguments over misappropriated grammar, standards used to define ARPANET in the 60’s, and polyester bell-bottoms. It’s 2013, so let’s not define and track requirements like it’s 1999… or 1969. Ditch the RFCs, the waterfall, the BS Bingo talk, and the whole nine yards if you want to get things done. But if you’d prefer just to focus on figuring out who to blame when you fail, then you should/shall/will/must/might/shalt not change a thing.


Productivity Add-Ins: Bruce Lee vs Batman

In the last few months, I’ve seen a number of tweets and posts decrying or at least cautioning against the use of productivity tools (e.g. CodeRush, ReSharper, and JustCode). The reasoning behind this is generally some variant of the notion that such tools are more akin to addictive drugs than to sustainable life improvements. Sure, the productivity tool is initially a big help, but soon you’re useless without it. If, on the other hand, you had just stuck to the simple, honest, clean living of regular development, you might not have reached the dizzying highs of the drug, but neither would you have experienced crippling dependency and eventual rock bottom. The response from those who disagree is also common: insistence that use doesn’t equal dependence and the extolling of the virtues of such tools. Predictably, this back and forth usually degenerates into Apple v. Android or Coke v. Pepsi.

Before this degeneration, though, the debate has some fascinating overtones. I’d like to explore those overtones a bit to see if there’s some kind of grudging consensus to be reached on the subject (though I recognize that this is probably futile, given the intense cognitive bias of endowment effect on display when it comes to add-ins and enhancements that developers use). At the end of the day, I think both outlooks are born out of legitimate experience and motivation and offer lessons that can at least lead to deciding how much to automate your process with eyes wide open.

Also, in the interests of full disclosure, I am an enthusiastic CodeRush fan, so my own personal preference falls heavily on the side of productivity tools. And I have plugged for it in the past too, though mainly for its static analysis capabilities rather than for any code generation. That being said, I don’t actually care whether people use these tools or not, nor do I take any personal affront to someone reading that linked post, deciding that I’m full of crap, and continuing to develop sans add-ins.

The Case Against the Tools

There’s an interesting phenomenon that I’ve encountered a number of times in a variety of incarnations. In shops where there is development of one core product for protracted periods of time, you meet workaday devs who have not actually clicked “File->New” (or whatever) and created a new project in months or years. Every morning they come in at 9, every evening they punch out at 5, and they know how to write code, compile it, and run the application, all with plenty of support from a heavyweight IDE like Eclipse or Visual Studio. The phenomenon that I’ve encountered in such situations is that occasionally something breaks or doesn’t work properly, and I’ve said, “oh, well, just compile it manually from the command line,” or, “just kick off the build outside of the IDE.” This elicits a blank stare that communicates quite effectively, “dude, wat–that’s not how it works.”

When I’ve encountered this, I find that I have to then have an awkward conversation with a non-entry-level developer where I explain the OS command line and basic program execution; the fact that the IDE and the compiler are not, in fact, the same thing; and other things that I would have thought were pretty fundamental. So what has gone wrong here? I’d say that the underlying problem is a classic one in this line of work–automation prior to understanding.

Let’s say that I work in a truly waterfall shop, and I get tired of having to manually destroy all code when any requirement changes so that I can start over. Watching for Word document revisions to change and then manually deleting the project is a hassle that I’ve lived with for one day too long, so I fire up the plugin project template for my IDE and write something that runs every time I start. This plugin simply checks the requirements document to see if it has been changed and, if so, it deletes the project I’m working on from source control and automatically creates a new, blank one.

Let’s then say this plugin is so successful that I slap it into everyone’s IDE, including new hires. And, as time goes by, some of those new hires drift to other departments and groups, not all of which are quite as pure as we are in their waterfall approach. It isn’t long before some angry architect somewhere storms over, demanding to know why the new guy took it upon himself to delete the entire source control structure and is flabbergasted to hear, “oh, that wasn’t me–that’s just how the IDE works.”

Another very real issue that something like a productivity tool, used unwisely, can create is to facilitate greatly enhanced efficiency at generating terrible code (see “Romance Author“). A common development anti-pattern (in my opinion) that makes me wince is when I see someone say, “I’m sure generating a lot of repetitive code–I should write some code that generates this code en masse.” (This may be reasonable to do in some cases, but often it’s better to revisit the design.) Productivity tools make this much easier and thus more tempting to do.

The lesson here is that automation can lead to lack of understanding and to real problems when the person benefiting doesn’t understand how the automation works or if and why it’s better. This lack of understanding leads to a narrower view of possible approaches. I think a point of agreement between proponents and opponents of tools might be that it’s better to have felt a pain point before adopting the cure for it rather than just gulping down pain medication ‘preventatively’ and lending credence to those saying the add-ins are negatively habit-forming. You shouldn’t download and use some productivity add-in because all of the cool kids are doing it and you don’t want to be left out of the conversations with hashtag #coderush.

The Case for the Tools

The argument from the last section takes at face value the genuine concerns of those making it and lends them benefit of the doubt that their issue with productivity tools is truly concern for bad or voodoo automation. And I think that requires a genuine leap of faith. When it comes to add-ins, I’ve noticed a common thread between opponents of that and opponents of unit tests/TDD–often the most vocal and angry opponents are ones that have never tried it. This being the case, the waters become a little bit muddied since we don’t know from case to case if the opponent has consistently eschewed them because he really believes his arguments against them or if he argues against them to justify retroactively not having learned to use them.

And that’s really not a trivial quibble. I can count plenty of detractors that have never used the tools, but what I can’t recall is a single instance of someone saying, “man, I used CodeRush for years and it really made me worse at my job before I kicked the habit.” I can recall (because I’ve said) that it makes it annoying for me to use less sophisticated environments and tooling, but I’d rather the tide rise and lift all of the boats than advocate that everybody use notepad or VI so that we don’t experience feature envy if we switch to something else.

The attitude that results from “my avoidance of these tools makes me stronger” is the main thing I was referring to earlier in the post when I mentioned “fascinating overtones.” It sets the stage for tools opponents to project a mix of rugged survivalist and Protestant Work Ethic. Metaphorically speaking, the VI users of the world sleep on a bed of brambles because things like beds and not being stabbed while you sleep are for weaklings. Pain is gain. You get the sense that these guys refuse to eat anything that they didn’t either grow themselves or shoot using a homemade bow and arrow fashioned out of something that they grew themselves.

But when it comes to programming (and, more broadly, life, but I’ll leave that for a post in a philosophy blog that I will never start) this affect is prone to reductio ad absurdum. If you win by leaving productivity tools out of your Visual Studio, doesn’t the guy who uses Notepad++ over Visual Studio trump you since he doesn’t use Intellisense? And doesn’t the person who uses plain old Notepad trump him, since he doesn’t come to rely on such decadent conveniences as syntax highlighting and auto-indentation? And isn’t that guy a noob next to the guy who codes without a monitor the way that really, really smart people in movies play chess without looking at the board? And don’t they all pale in comparison to someone who lives in a hut at the North Pole and sends his hand-written assembly code via carrier pigeon to someone who types it into a computer and executes it (he’s so hardcore that he clearly doesn’t need the feedback of running his code)? I mean, isn’t that necessary if you really want to be a minimalist, 10th degree black-belt, Zen Master programmer–to be so productive that you fold reality back on itself and actually produce nothing?

The lesson here is that pain may be gain when it comes to self-growth and status, but it really isn’t when the pain makes you slower and people are paying for your time. Avoiding shortcuts and efficiency so that you can confidently talk about your self reliance not only fails as a value-add, but it’s inherently doomed to failure since there’s always going to be some guy that can come along and trump you in that “disarms race.” Doing without has no intrinsic value unless you can demonstrate that you’re actively being hampered by a tool.

So What’s the Verdict?

I don’t know that I’ve covered any ground-breaking territory except to point out that both sides of this argument have solutions to real but different problems. The minimalists are solving the problem of specious application of rote procedures and lack of self-reliance while the add-in people are solving the problem of automating tedious or repetitive processes. Ironically, both groups have solutions for problems that are fundamental to the programmer condition (avoiding doing things without knowing why and avoiding doing things that could be done better by machines, respectively). It’s just a question of which problem is being solved when and why it’s being solved, and that’s going to be a matter of discretion.

Add-in people, be careful that you don’t become extremely proficient at coding up anti-patterns and doing things that you don’t understand. Minimalist people, recognize that tools that others use and you don’t aren’t necessarily crutches for them. And, above all, have enough respect for one another to realize that what works for some may not work for others. If someone isn’t interested in productivity tools or add-ins and feels more comfortable with a minimalist setup, who are any of us to judge? I’ve been using CodeRush for years, and I would request the same consideration–please don’t assume that I use it as a template for 5,000 line singletons and other means of mindlessly generating crap.

At the end of the day, whether you choose to fight bad guys using only your fists, your feet, and a pair of cut-off sweat shorts or whether you have some crazy suit with all manner of gizmos and gadgets, the only important consideration when all is said and done is the results. You can certainly leave an unconscious and dazed pile of ne’er-do-wells in your wake either way. Metaphorically speaking, that is–it’s probably actually soda cans and pretzel crumbs.


An Interface Taxonomy

I generally subscribe to the notion that a huge (and almost universally underrated) part of being a software developer is coming up with good names for things. The importance of naming concepts ranges from the macro-consideration of what to call languages or frameworks (how about “.NET” and “javascript” as preposterously confusing naming-fails) to the application level (I’m looking at you, GIMP for Linux) all the way down to the micro-level of methods and variables:

//u dnt typ lk this if u wnt ppl 2 undrstnd u, do u?
public string Trnstr(string vl2trn)
{
    var t_vrbl_apse = GtTrncStrn(vl2trn);
    return t_vrbl_apse;
}

So in the interest of clarifying terms and concepts that we discuss, I’d like to suggest a taxonomy of interfaces. As I’m defining them, these terms are not mutually exclusive so a game of “which kind is this” might yield more than one right answer. Also, it almost goes without saying that this is not comprehensive (the only reason I’m saying it is as a disclaimer 🙂 ). I’m really just trying to get the ball rolling here to establish a working, helpful lexicon. If you know of some kind of already-existing taxonomy like this or simply have suggestions for ones I missed, please weigh in with comments.

Characteristic Interfaces

Characteristic interfaces are interfaces used to express runtime capabilities of a type. Examples in C# include ISerializable, ICloneable, IEnumerable, etc. The author of the class is pulling in some behaviors that are ancillary features of the type rather than the main course. You might have a Customer object that happened to be cloneable and serializable, but those facts wouldn’t be prominently mentioned when you were explaining to someone the purpose of the object. You’re likely to see these as method parameters and return types since this is a favorable alternative to casting for expressing interest in some set of objects with features in common.
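As a quick, purely illustrative sketch: the capability is incidental to what the type is, but methods can still ask for it.

using System;

public class Customer : ICloneable
{
    public string Name { get; set; }

    // Cloneability is an ancillary capability, not what a Customer is "about."
    public object Clone()
    {
        return new Customer { Name = Name };
    }
}

public class Auditor
{
    // Accepting ICloneable expresses interest in a capability without casting.
    public object Snapshot(ICloneable source)
    {
        return source.Clone();
    }
}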

Client Interfaces

Client interfaces are used to guide users of your code as to what functionality they need to supply. This might include an interface called IValidateOrders with a Validate() method taking an order parameter, for instance. Your code understands orders and it understands that they need to be validated, but they leave it up to client implementations to supply that validation logic. With the fall of popular inheritance structures and the rise of composition, this has become a much more common way to interact with arbitrary client code.
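Sketched out (the Order type is assumed), the shape of a client interface is something like this:

public interface IValidateOrders
{
    bool Validate(Order order);
}

public class OrderProcessor
{
    private readonly IValidateOrders _validator;

    // Client code supplies the validation logic; our code just invokes it.
    public OrderProcessor(IValidateOrders validator)
    {
        _validator = validator;
    }

    public bool TryProcess(Order order)
    {
        if (!_validator.Validate(order))
            return false;

        // ...proceed with order processing...
        return true;
    }
}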

Documentation Interfaces

This is a type of interface that exists mainly for write-time documentation rather than any kind of build or run-time considerations. You see this in situations where the actual implementation of the interface results in no runtime polymorphism, unit-testing, or flexibility concerns. The interface is defined simply so that other developers will know what methods need to be written if/when they define an implementation or else to provide readability cues by having a quick list of what the class does to the right of a colon next to its name. An example would be having Car : IMoveAround where the interface simply has one method called “Move()”. Obviously, any documentation interface will lose its status as soon as someone introduces runtime polymorphism (e.g. some kind of class factory or configurability with an IoC container), but you might describe it this way up until that point or, perhaps even after, if it was clearly intended for documentation purposes.
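The Car example from the paragraph, written out as a minimal sketch:

// Exists for write-time documentation; no runtime polymorphism is (yet) involved.
public interface IMoveAround
{
    void Move();
}

public class Car : IMoveAround
{
    public void Move()
    {
        // drive
    }
}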

Identity Interfaces

Conceptually, these are the opposite of Characteristic interfaces. These are interfaces that an implementing type uses to define itself. Examples might include implementers of IRepository<T>, IFileParser, or INetworkListener. Implementing the interface pretty much uses up the type’s allowance for functionality under the Single Responsibility Principle. Interestingly enough, it’s perfectly reasonable to see a type implement an Identity Interface and a Characteristic Interface (service classes implementing IDisposable come immediately to mind).
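For instance, a minimal sketch (IRepository<T> is defined here for illustration, and the Customer type is assumed):

using System;

public interface IRepository<T>
{
    T FindById(int id);
    void Save(T item);
}

// The type defines itself by IRepository<Customer>; being "the customer repository"
// is its whole job. IDisposable tags along as a characteristic interface.
public class CustomerRepository : IRepository<Customer>, IDisposable
{
    public Customer FindById(int id)
    {
        // ...query the underlying store...
        return null;
    }

    public void Save(Customer item)
    {
        // ...persist the customer...
    }

    public void Dispose()
    {
        // ...release the underlying connection...
    }
}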

Seam Interfaces

A seam interface is one that divides your application in swappable ways. A good example would be a series of repositories that make use of IDataAccess implementations. The dependency of repositories on schemes for accessing data is inverted, allowing easy testing of the repositories and simple runtime configuration of different data access schemes. As a concrete example, an architecture using a seam interface between repository and data access could switch from using a SQL Server database to a flat file structure or a document database by altering only a few lines of wireup code or XML. Seam interfaces provide points at which applications are easy to test and change.
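A bare-bones sketch of that repository/data-access seam (names illustrative, Customer assumed):

using System.Collections.Generic;

public interface IDataAccess
{
    IEnumerable<Customer> LoadAll();
}

public class SqlDataAccess : IDataAccess
{
    public IEnumerable<Customer> LoadAll()
    {
        // ...query SQL Server...
        return new List<Customer>();
    }
}

public class FlatFileDataAccess : IDataAccess
{
    public IEnumerable<Customer> LoadAll()
    {
        // ...parse flat files...
        return new List<Customer>();
    }
}

public class CustomerRepository
{
    private readonly IDataAccess _dataAccess;

    // Which persistence scheme gets used is decided in a few lines of wireup, not here.
    public CustomerRepository(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public IEnumerable<Customer> GetAll()
    {
        return _dataAccess.LoadAll();
    }
}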

Speculative Interfaces

Speculative interfaces are sort of the dark side of seam interfaces–they’re unrealized seams. In our previous example, the IDataAccess interface would be speculative if there were only one type of persistence and that was going to be the case for the foreseeable future. The interface is still providing a seam for testing, but now the complexity that it introduces is of questionable value since there aren’t multiple implementations, and it could be argued that you’re simply introducing complexity in violation of YAGNI. It’s generally easiest to identify speculative interfaces by the naming scheme of the interfaces and their single implementation: Foo and IFoo, LookupService and ILookupService, etc. (This latter naming example is specific to C#–I don’t know exactly how people commonly name speculative interfaces in other languages or if there is a consistent naming scheme at all, absent the C#-specific Hungarian notation for interfaces.)