DaedTech

Stories about Software

How Rational Clear Case Stole my Innocence and Nearly Ruined my Life

A Simpler Time

When I was in college in the late ’90s, we didn’t use source control. The concept existed but wasn’t pervasive in my environment (or, if it was, I have no recollection of using it). And why use it? We were working on projects lasting less than a semester (usually far less) and in teams of three or fewer for the most part. We also generally telnetted into servers and stored most of our source there since most students had Windows machines and most of our assignments required *NIX, meaning that the “backup/persistence” component of source control was already taken care of for us. In other words, we were young and carefree.

After I left college and started working, the need for source control was explained to me and I was introduced to Visual Source Safe, a product so bad that even the company that made it didn’t use it for source control. Still, it was better than no source control. If I messed things up badly I could always go back to a sane starting point and life would be good. This was no perfect solution, but I began to see the benefits of backed up work and concurrent edit management in a way that I never had in school. I was moving up in the world and making some mistakes, but the good, honest kind that spurred maturity and growth.

As the 2000’s and my development projects went on, I was exposed to CVS and then SVN. This was a golden age of tooling for me. I could merge files and create branches. Rolling back to previous versions of software was possible as was switching to some speculative sandbox. It was easier to lead projects and/or scale them up to more developers. It even became possible to ‘rescue’ and improve old legacy projects by adding this kind of tooling. The source control schemes didn’t just become part of my subconscious — they were a pleasure to use. The sky was the limit and I felt that my potential was boundless. People expected big things from me and I was not letting them down as my career progressed from junior developer to seasoned programmer to leader.

The Gathering Storm

But storm clouds were gathering on the horizon, even if I didn’t realize it yet. In just a matter of a few short years following my promising career start, everything would change. My coding style grew sloppy and haphazard. The experimentation and tinkering that had previously defined me went by the wayside. My view on programming and software in general went from enthusiastic to apathetic to downright nihilistic. So I became lazy and negatively superstitious. I had fallen in with the wrong crowd; I had started using Rational Clear Case.

It started harmlessly enough. I was introduced to the tool on a long-term project, and I remember thinking, “wow, cool – this separates backup from change merging,” and that was the sweet side it showed to sucker me in. But I didn’t see it that way at the time, with my unbridled optimism and sunny outlook on life. The first warning sign should have been how incredibly complicated it was to set up and how difficult it was to use, but I just wanted to fit in and not seem stupid and lame, so I ignored it.

The first thing to suffer was good coding practice. With Rational Clear Case, it isn’t cool to do things like add files to source control or rename existing files. That’s for nerds and squares. With Clear Case, you add classes to existing files and keep filenames long after they make sense. If you don’t, it doesn’t go well for you. So, my class sizes tended to grow and my names tended to rot as the code changed. Correctness, brevity and the single responsibility principle weren’t worth looking dumb in front of Clear Case, and besides, if you cross it, it gets really angry and stops working for hours or even days. Even formatting and boy-scout changes weren’t worth it because of the extreme verbosity in the version tree and spurious merge conflicts that might result. Better to touch as little code as humanly possible.

The next thing to go was my interest in playing with code and experimenting. VPN connection to Clear Case was impossibly slow, so the days of logging in from home at oddball hours to implement a solution that popped into my head were over. Clear Case would also get extremely angry if I tried to sandbox a solution in another directory, using its View.dat file to wreak all kinds of havoc. But it was fine, I told myself — working after hours and learning through experimentation aren’t things that cool kids do.

And, where I previously thought that the world was a basically good place, filled with tools that work dependably and helpfully, Clear Case soon showed me how wrong I was. It exposed me to a world where checkins randomly failed and even crashed the machine – a world where something called the ALBD License server having a problem could make it so that you didn’t have to (in fact couldn’t) write code for a day or two. My eyes were opened to a world where nothing can be trusted and no one even knows what’s real and what isn’t. I came to question the very purpose of doing work in the first place, since files sometimes just disappear. The only thing that made sense was to do the bare minimum to get by, and maybe not even that. Clear Case never tried to drown me in stupid, idealistic fantasies like source control that works and tools that don’t radically hamper your productivity — it was the only thing in my life that told me the truth.

Or, so I thought.

Redemption

As it turned out, I had strayed — been led astray — from the path of good software development, and my closest friends and family finally staged an intervention with me to show me the kind of programmer that Clear Case was turning me into. I denied and fought against it at first, but realized that they were right. I was on the path to being a Net Negative Producing Programmer (NNPP) or washing out of the industry altogether.

At first I thought that I’d have a gradual break with Clear Case, but I soon realized that would be impossible. I had to cut all ties with it and begin a new life, re-focused on my developer dreams and being a productive member of that community. While it seemed hard at first, I’ve never looked back. And while I’ll never regain my pre-Clear-Case innocence and youthful exuberance, my life is back on track. I am once again productive, optimistic, and happy.

What’s Wrong With You, Erik?

Okay, so that may have been a little After School Special-ish. And, nobody actually had an intervention with me; I actually just worked on projects where I used different source control. And I never actually stopped working at night, factoring my classes, giving good names, etc. So why did I write all of this?

Because Clear Case made me want to stop doing all of those things. It made many, many good practices painful to do while creating a path of least resistance right through a number of terrible practices. It encouraged sloppiness and laziness while discouraging productivity and creativity, and that’s a problem.

This blog isn’t about product reviews and gripes of this nature, so it isn’t specifically my intention to dump on Clear Case (though if ever a tool deserved it…). Rather, the point here is that it’s important to evaluate the tooling that you’re using to do your work. Don’t just get used to whatever is thrown at you – constantly evaluate it to see if it meets your needs and continues to do so as time goes on. There is something to be said for familiarity with and mastery of a tool making you productive, but if you’ve mastered a crappy tool, you’re probably at a local maximum and you need to go exploring outside of your comfort zone.

Subtle Signs That You’re Using Bad Tooling

I’m going to phrase this in the negative, since I think most people have a pretty reasonable concept of good tooling. That is, if something makes you much more productive/happy/etc., you’re going to notice, so this is really about the difference between adequate tooling and bad tooling. Most people recognize bad tooling when it simply doesn’t work, crashes a lot, etc., but many will struggle to recognize it when it kinda works, you know, most of the time, sorta. So here are subtle signs that your tool is bad.

  1. You design process kludges around it (e.g. well, our IDE won’t color-code methods, so we name them all Methodxxxxx()).
  2. You personify/anthropomorphize it in a negative way (e.g. Clear Case doesn’t like it when you try to rename a file).
  3. You’ll cut out ten minutes early at the end of the day specifically to avoid having to use it.
  4. You google for help on it and *crickets*.
  5. Developers on your team re-implement components of it rather than using it.
  6. You make excuses when explaining your usage of it.
  7. Bringing a new user up to speed on your process with it takes a long time and causes them to look at you disbelievingly or sadly.
  8. People don’t use it unless forced or people attempt to use other tools instead.
  9. You google the product and find more angry rants or posts like this one than helpful sites and blog how-tos.
  10. People on your team spend time solving the tool instead of using the tool to solve business problems.
  11. You think about it a lot when you’re using it.

So When is Tooling Good?

Apart from the obvious shouting for joy when using it and whatnot, there is a subtlety to this as well, but I think it’s mainly tied to item (11). A good tool is one that you don’t think about while using it. For instance, I love Notepad++. I use it daily and quite probably hourly for a wide variety of tasks since it is my go-to text editor. But the only time I ever really think about it is when I’m on a machine where it isn’t installed, and I get stuck with the regular Notepad when opening a text file. Notepad++ and its use are so second nature to me that I hardly ever think about it (with the obvious exception of when I might want to learn more about it or explore features).

If you take this advice to heart and want to constantly reassess your tooling, I’d say the single best measure is to see how frequently or infrequently you notice the tool. All of the other “symptom of a bad tool” bullet points are certainly relevant, but most of them are really fruit of the tree of (11). If you’re creating kludges for, making excuses about, googling, or personifying a tool, the common thread is that you’re thinking about it. If, on the other hand, the tool kind of fades into the background of your daily life and allows (and helps) you to focus on other problems, it is helping you, and it is a good tool.

So don’t let Clear Case or anything else steal your innocence or ruin your life; don’t tolerate a tool that constantly forces you to think about it as you battle it. Life is too short.

How To Keep Your Best Programmers

Getting Philosophical

Given that I’ve just changed jobs, it isn’t entirely surprising that I’ve had a lot of conversations recently about why I decided to do so. Generally when someone leaves a job, coworkers, managers, HR personnel, friends, and family are all interested in knowing why. Personally, I tend to give unsatisfying answers to this question, such as, “I wanted a better opportunity for career advancement,” or, “I just thought it was time for a change.” This is the corporate equivalent of “it’s not you–it’s me.” When I give this sort of answer, I’m not being diplomatic or evasive. I give the answer because I don’t really know, exactly.

Don’t get me wrong. There are always organizational gripes or annoyances anywhere you go (or depart from), and it’s always possible that someone will come along and say, “How would you like to make twice as much money doing the coolest work imaginable while working from home in your pajamas?” or that your current employer will say, “We’re going to halve your pay, force you to do horrible grunt work, and send you to Antarctica to do it.” It is certainly possible that I could have a specific reason for leaving, but that seems more the exception than the rule.

As a general practice, I like to examine my own motivations for things that I do. I think this is a good check to make sure that I’m being rational rather than impulsive or childish. So I applied this practice to my decision to move on and the result is the following post. Please note that this is a foreword explaining what got me thinking along these lines, and I generalized my opinion on my situation to the larger pool of software developers. That is, I’m not intending to say, “I’m the best and here’s how someone can keep me.” I consider my own programming talent level irrelevant to the post and prefer to think of myself as a competent and productive developer, distinguished by enthusiasm for learning and pride in my work. I don’t view myself as a “rock star,” and I generally view such prima donna self-evaluation to be counterproductive and silly.

What Others Think

Some of my favorite blog posts that I’ve read in the last several years focus on the subject of developer turnover, and I think that these provide an excellent backdrop for this subject. The oldest one that I’ll list, by Bruce Webster, is called “The Wetware Crisis: the Dead Sea Effect,” and it coins an excellent term for a phenomenon with which we’re all probably vaguely aware on either a conscious or subconscious level. The “Dead Sea Effect” is a description of some organizations’ tendency to be so focused on retention that they inadvertently retain mediocre talent while driving better talent away:

…what happens is that the more talented and effective IT engineers are the ones most likely to leave — to evaporate, if you will. They are the ones least likely to put up with the frequent stupidities and workplace problems that plague large organizations; they are also the ones most likely to have other opportunities that they can readily move to.

What tends to remain behind is the ‘residue’ — the least talented and effective IT engineers. They tend to be grateful they have a job and make fewer demands on management; even if they find the workplace unpleasant, they are the least likely to be able to find a job elsewhere. They tend to entrench themselves, becoming maintenance experts on critical systems, assuming responsibilities that no one else wants so that the organization can’t afford to let them go.

Bruce describes a paradigm in which the reason for talented people leaving will frequently be that they are tired of less talented people in positions of relative (and by default) authority telling them to do things–things that are “frequent stupidities.” There is an actual inversion of the pecking order found in meritocracies, and this leads to a dysfunctional situation that the talented either avoid or else look to escape as quickly as possible.

Bruce’s post was largely an organizational perspective; he talked about why a lot of organizations wind up with an entrenched group of mediocre senior developers, principals, and managers without touching much on the motivation for the talented to leave beyond the “frequent stupidities” comment. Alex Papadimoulis from the Daily WTF elaborates on the motivation of the talented to leave:

In virtually every job, there is a peak in the overall value (the ratio of productivity to cost) that an employee brings to his company. I call this the Value Apex.

On the first minute of the first day, an employee’s value is effectively zero. As that employee becomes acquainted with his new environment and begins to apply his skills and past experiences, his value quickly grows. This growth continues exponentially while the employee masters the business domain and shares his ideas with coworkers and management.

However, once an employee shares all of his external knowledge, learns all that there is to know about the business, and applies all of his past experiences, the growth stops. That employee, in that particular job, has become all that he can be. He has reached the value apex.

If that employee continues to work in the same job, his value will start to decline. What was once “fresh new ideas that we can’t implement today” become “the same old boring suggestions that we’re never going to do”. Prior solutions to similar problems are greeted with “yeah, we worked on that project, too” or simply dismissed as “that was five years ago, and we’ve all heard the story.” This leads towards a loss of self actualization which ends up chipping away at motivation.

Skilled developers understand this. Crossing the value apex often triggers an innate “probably time for me to move on” feeling and, after a while, leads towards inevitable resentment and an overall dislike of the job. Nothing – not even a team of on-site masseuses – can assuage this loss.

On the other hand, the unskilled tend to have a slightly different curve: Value Convergence. They eventually settle into a position of mediocrity and stay there indefinitely. The only reason their value does not decrease is because of the vast amount of institutional knowledge they hoard and create.

This is a little more nuanced and interesting than the simple meritocracy inversion causing the departure of skilled developers. Alex’s explanation suggests that top programmers are only happy in jobs that provide value to them and jobs to which they provide increasing value. The best and brightest not only want to grow but also to feel that they are increasingly useful and valuable–indicative, I believe, of pride in one’s work.

In an article written a few years later titled “Bored People Quit,” Michael Lopp argues that boredom is the precursor to developers leaving:

As I’ve reflected on the regrettable departures of folks I’ve managed, hindsight allows me to point to the moment the person changed. Whether it was a detected subtle change or an outright declaration of their boredom, there was a clear sign that the work sitting in front of them was no longer interesting. And I ignored my observation. I assumed it was insignificant. He’s having a bad day. I assumed things would just get better. In reality, the boredom was a seed. What was “I’m bored” grew roots and became “I’m bored and why isn’t anyone doing anything about it?” and sprouted “I’m bored, I told my boss, and he… did nothing,” and finally bloomed into “I don’t want to work at a place where they don’t care if I’m bored.”

I think of boredom as a clock. Every second that someone on my team is bored, a second passes on this clock. After some aggregated amount of seconds that varies for every person, they look at the time, throw up their arms, and quit.

This theme of motivation focuses more on Alex’s “value provided to the employee” than “value that employee provides,” but it could certainly be argued that it includes both. Boredom implies that the developer gets little out of the task and that the perceived value that he or she is providing is low. But, beyond “value apex” considerations, bored developers have the more mundane problem of not being engaged or enjoying their work on a day to day basis.

What’s the Common Thread?

I’m going to discount obvious reasons for leaving, such as hostile work environment, below-market pay, reduction of benefits/salary, etc., as no-brainers and focus on things that drive talented developers away. So far, we’ve seen some very compelling words from a handful of people that roughly outline three motivations for departure:

  • Frustration with the inversion of meritocracy (“organization stupidities”)
  • Diminishing returns in mutual value of the work between programmer and organization
  • Simple boredom

To this list I’m going to add a few more things that were either implied in the articles above or that I’ve experienced myself or heard from coworkers:

  • Perception that current project is futile/destined for failure accompanied by organizational powerlessness to stop it
  • Lack of a mentor or anyone from whom much learning was possible
  • Promotions a matter of time rather than merit
  • No obvious path to advancement
  • Fear of being pigeon-holed into unmarketable technology
  • Red-tape organizational bureaucracy mutes positive impact that anyone can have
  • Lack of creative freedom and creative control (aka “micromanaging”)
  • Basic philosophical differences with majority of coworkers

Looking at this list, a number of these are specific instances of the points made by Bruce, Alex and Michael, so they aren’t necessarily advancements of the topic per se, though you might nod along with them and want to add some of your own to the list (and if you have some you want to add, feel free to comment). But where things get a little more interesting is that pretty much all of them, including the ones from the linked articles, fall into a desire for autonomy, mastery, or purpose. For some background, check out this video from RSA Animate. The video is great watching, but if you haven’t the time, the gist of it is that, contrary to popular belief, it isn’t economics that drives humans toward self-actualization; we are instead driven by three motivating factors: the desire to control one’s own work (autonomy), the desire to get better at things (mastery), and the desire to work toward some goal beyond showing up for 40 hours per week and collecting a paycheck (purpose).

Frustration with organizational stupidity is usually the result of a lack of autonomy and the perception of no discernible purpose. Alex’s value apex is reached when mastery and purpose wane as motivations, and boredom with a job can quite certainly result from a lack of any of the three RSA needs being met. But rather than sum up the symptoms with these three motivating factors, I’m going to roll it all into one. You can keep your good developers by making sure they have a compelling narrative as employees.

Guaranteeing the Narrative

Bad or mediocre developers are those who are generally resigned or checked out. They often have no desire for mastery, no sense of purpose, and no interest in autonomy because they’ve given up on those things as real possibilities and have essentially struck a bad economic bargain with the organization, pay amount notwithstanding. That is, they give up on self-actualization in exchange for a company paying a mortgage, a few car payments, and a set of utilities for them. I’ve heard a friend of mine call this “golden handcuffs.” They have a pre-defined narrative at work: “I work for this company because repo-men will eventually show up if I don’t.” These aren’t necessarily bad or unproductive employees, but they’re pretty unlikely to be your best and brightest, and you can be assured that they will tend to put forth the minimum amount of effort necessary to hold up their end of the bad bargain.

These workers are easy to keep because that is their default state of affairs. Going out and finding another job is not the minimum effort required to pay the bills, so they won’t do it. They are Bruce’s “residue” and they will tend to stick around and earn obligatory promotions and pay increases by default, and, unchecked, they will eventually sabotage the RSA needs of other, newer developers on the team and thus either convert them or drive them off. The narrative that you offer them is, “Stick around, and every five years we’ll give you a promotion and a silver-plated watch.” They take it, considering the promotion and the watch to be gravy.

But when you offer that same narrative to ambitious, passionate, and talented developers, they leave. They grow bored, and bored people quit. They refuse to tolerate that organizational stupidity, and they evaporate. They look for “up or out,” and, realizing that “out” is much quicker and more appealing, they change their narrative on their own to “So long, suckers!”

You need to offer your talented developers a more appealing narrative if you want them to stay. Make sure that you take them aside and reaffirm that narrative to them frequently. And make sure the narrative is deterministic in that their own actions allow them to move toward one of the goals. Here are some narratives that might keep developers around:

  • “If you implement feature X on or ahead of schedule, we will promote you.”
  • “With the work that we’re giving you over the next few months, you’re going to become the foremost NoSQL expert in our organization.”
  • “We recognize that you have a lot of respect for Bob’s Ruby work, so we’re putting you on a project with him to serve as your mentor so that you can learn from him and get to his level.”
  • “We’re building an accounting package that’s critical to our business, and you are going to be solely responsible for the security and logging portions of it.”
  • “If your work on project Y keeps going well, we’re going to allow you to choose your next assignment based on which language you’re most interested in using/learning.”

Notice that these narratives all appeal to autonomy/mastery/purpose in various ways. Rather than dangling financial or power incentives in front of the developers, the incentives are all things like career advancement/recognition, increased autonomy, opportunities to learn and practice new things, the feeling of satisfaction you get from knowing that your work matters, etc.

And once you’ve given them some narratives, ask them what they want their own to be. In other words, “we’ll give you more responsibility for doing a good job” is a good narrative, but it may not be the one that the developer in question envisions. It may not always be possible to give the person exactly what he or she wants, but at least knowing what it is may lead to attractive compromises or alternate ideas. A new team member who says, “I want to be the department’s principal architect” may have his head in the clouds a bit, but you might be able to find a small, one-man project and say, “start by architecting this and we’ll take it from there.”

At any point, both you and the developers on your team should know their narratives. This ensures that they aren’t just periodic, feel-good measures–Michael’s “diving saves”–but constant points of job satisfaction and purpose. The developers’ employment is a constant journey that’s going somewhere, rather than a Sisyphean situation where they’re running out the clock until retirement. With this approach, you might even find that you can coax a narrative out of some “residue” employees and reignite some interest and productivity. Or perhaps defining a narrative will lead you both to realize that they are “residue” because they’ve been miscast in the first place and there are more suitable things than programming they could be doing.

Conclusion

The narratives that you define may not be perfect, but they’ll at least be a start. Don’t omit them, don’t let them atrophy and, whatever you do, don’t let an inverted meritocracy–the “residue”–interfere with the narrative of a rising star or top performer. That will catapult your group into a vicious feedback loop. Work on the narratives with the developers and refine them over the course of time. Get feedback on how the narratives are progressing and update them as needed.

Alex thinks that departure from organizations is inevitable, and that may be true, but I don’t know that I fully agree. I think that as long as talented employees have a narrative and some aspirations, their value need not peak and decline. This is especially true at, say, consulting firms where new domains and ad-hoc organization models are the norm rather than the exception. But what I would take from Alex’s post is the perhaps radical idea that it is okay if the talented developer narrative doesn’t necessarily involve the company in five or ten years. That’s fine. It allows for replacement planning and general, mutual growth. Whatever the narrative may be, mark progress toward it, refine it, and make sure that your developers are working with and toward autonomy, mastery, and purpose.

Methods Are Little Stories – Abstractions Are Important 6

If Then, If Then, If Then

Yesterday’s post where I included Grady Booch’s comment that clean code “reads like well written prose” made me think of something I’ve been contemplating. The other day I was looking at some code and I saw the following (obfuscated):

public void GrabUmbrellaIfNecessary()
{
    if (IsItRaining())
    {
        if (DoINeedToLeave())
        {
            if (AmIParkedInTheStreet())
            {
                GrabUmbrella();
            }
        }
    }
}

I automatically started refactoring this to the following:

public void GrabUmbrellaIfNecessary()
{
    if (IsItRaining() && DoINeedToLeave() && AmIParkedInTheStreet())
        GrabUmbrella();
}

and then:

public void GrabUmbrellaIfNecessary()
{
    if (DoINeedAnUmbrella())
        GrabUmbrella();
}

private bool DoINeedAnUmbrella()
{
    return IsItRaining() && DoINeedToLeave() && AmIParkedInTheStreet();
}

To me, this keeps in line with Grady’s statement and is easy to reason about at every level. “If I need an umbrella, get it” and “I need an umbrella if it’s raining, I need to leave, and I’m parked in the street” are pieces of code so simple that one need not be a programmer to understand the logic. I think it’s hard to argue that this is less conversational than “If it’s raining then if I need to leave then if I am parked in the street then grab an umbrella.”

But does this matter? Am I just being fussy and shuffling around the code to no real benefit? Are there advantages to the “ifception” approach (thanks to Dan Martin for this term)? Why would someone prefer this style? These were the things that I found myself contemplating.

The Case For Ifception?

In order to understand possible advantages or reasons for this preference, I sought to figure out the motivation. My first thought was that someone would write code this way if they missed the week in discrete math/logic where De Morgan’s laws and the rules of inference in Boolean algebra were covered. However, I don’t like to assume incompetence or ignorance when the only evidence present is evidence of a different preference than mine, so let’s dismiss that as a motivation.

The second thing that occurred to me was a lack of awareness of, or mistrust of, the compiler’s short-circuit evaluation of Boolean operations. To put it another way, they believe that all three conditions will be checked even if the first one fails, so the ifception seems more efficient. But, again, this requires an assumption of ignorance, so let’s assume that the author understands conditional short-circuiting.

After that, a slightly more valid motivation dawned on me (and one that doesn’t assume ignorance/incompetence) – the author loves the debugger! That is, perhaps the code author likes it this way because he or she prefers to be able to step through the method and see the short-circuiting or success in action.

As I poked around a little more, I found code in the same class of this form as well:

public void GrabUmbrellaIfNecessary()
{
    if (!IsItRaining())
        return;

    if (!DoINeedToLeave())
        return;

    if (!AmIParkedInTheStreet())
        return;

    GrabUmbrella();
}

Two data points now seem to point to my conclusion. I believe the motivation here is Debugger Driven Development (DDD) — a term that I’ll use to describe the approach where you write production code specifically designed to be stepped through in the debugger. This is a rather pessimistic approach since it seems to say “when you’re dealing with my code you’re going to be in the debugger… a lot… seriously, I have no idea how or even if my code works.”

I will also allow for the possibility that someone might view these approaches as inherently more readable, but I can only imagine that’s the case as a result of familiarity. In other words, this style is only more readable if it’s what you expect to see — I doubt anyone not versed in programming would gravitate toward nested conditionals and/or return statements as approachable.

If anyone can think of an additional benefit that I’m missing, please let me know. Or, in other words:

public void LetMeKnowIfIAmMissingABenefit()
{
    if (!DoesABenefitExist())
        return;

    if (HasItAlreadyBeenMentioned())
        return;

    if (!AmIMissingIt())
        return;

    PleaseLetMeKnow();
}

Does It Really Matter?

So having tentatively identified a motivation for ifceptions (DDD), is this style preferable? Harmless? To be avoided? I actually wrestled with this for a while before forming my opinion. The style is very different from what I prefer and am used to, but I try very hard not to conflate “I’m not used to that” with “That’s bad”. Doing so is the height of arrogance and will greatly hinder one’s ability to learn.

That said, the conclusion I came to was that this should be avoided if possible. And the reason it should be avoided boils down to method level abstractions. A method should tell a story. The method “GrabUmbrellaIfNecessary” tells a story — it tells you that it’s going to figure out whether an umbrella is needed and grab it if so. As a client of that method, you’re going to take it at face value that it does what it advertises, but if you do decide to drill into the method, you’re expecting to see a concise implementation of what’s advertised.

In the factored example, that’s exactly what you see. What better captures “GrabUmbrellaIfNecessary” than a single if condition for “DoINeedAnUmbrella” followed by a “GrabUmbrella” for a true evaluation? But what about the ifception example? Well, I see that there’s a condition to see whether it’s raining or not and then a scoped block of code with another conditional. Oh, okay, if it’s raining, we’ll get in there and then we’ll see if I need to leave in which case we’ll get… another scoped block of code. Okay, okay, so now, we need to know where I’m parked and, what were we doing again? Oh, right, we’re seeing whether we need to get into another scoped block of code. Ah, okay, if we’re parked in the street, here’s the meat of the method – grab the umbrella!

Notice that in the ifception reading, you see words like “scope” and “block”. I’m having to think of scoping rules, brackets, nested conditionals, control flow and other language constructs. Each of these things has exactly nothing to do with whether I should bring my umbrella or not and yet I’m thinking of them. If you look at the flattened early return method, a similar thing is happening:

If it’s not raining, then return. Okay, assuming we’re still in the method, if it’s not true that I need to leave, then return. Okay, now if I’m still in the method, if I’m not parked in the street then return. Okay, if I’m still in the method, then get the umbrella.

“Return”? “In the method”? What does that have to do with whether I need an umbrella and getting that umbrella? And why do I care about “not is raining” if I’m trying to figure out whether to use an umbrella? If it’s not not raining, do I need an umbrella? I think… ugh.

An easy argument to make at this point is “Erik, you’re a programmer and you should understand things like scope and early returns — don’t be lazy.” While this is true, it’s also true that I’m capable of squinting and making out tiny, bright yellow font against a white background. In neither case is that enjoyable or a good use of my time and effort when it’s possible simply to have more clarity and ease of reading.

Programmers are paid to solve problems by handling and abstracting away complexity. This applies to end-users and fellow programmers as well. It’s easy to lose sight of this and believe that knowledge of particular languages, frameworks, etc is the end goal, but that knowledge is simply a tool used in pursuit of the actual end goal: problem solving. If methods are written in such a way as to make them read like “well written prose”, I don’t have to focus on language syntax and details. I don’t have to be acutely aware that I’m in the middle of a method, figuring out scope and/or when to return. Instead, I can focus on the business logic of my problem domain and meeting user requirements.

Method writing techniques that make language syntax an afterthought are good abstractions. They hide the bare-metal, nitty-gritty details of writing C# (or any language/framework) as much as possible, exposing only enough to facilitate understanding of the problems being solved. And while it’s never going to be possible to avoid all scoping, returning and other such method housekeeping, you can certainly arrange your methods in such a way as to minimize and hide it, rather than to distract readers by calling attention to it.

Developer Process Gerrymandering

Gaming the System

As projects get a little behind schedule or perhaps a little contentious in terms of scope, I’d say it’s common for people to start taking steps to indemnify themselves against blame. I’d also say that this is fairly natural. There’s nothing more likely to attract wagging management fingers than a project behind schedule and/or over budget, so it makes sense to do a little rehearsing as to what one is going to tell them. Furthermore, it’s somewhat natural to try to showcase that one’s own efforts were a general positive even against a negative backdrop.

This is reminiscent of the “plus/minus” tracking in sports such as basketball, where it’s possible for a team to outscore its opponent when player X is on the floor, but still lose. The most common reason for this is that player X is a star, but the rest of the team is so bad that even his or her stellar play cannot overcome the shortcomings of the rest of the team. Long story short is that every developer wants to be that player X on a winning team if possible. But, absent that, being player X on a losing team is fine, so long as management and peers notice.

In such a situation (potentially “losing effort”), I’ve recently observed the following developer behaviors:

  1. Obvious (to me, if not management) estimate sandbagging.
  2. Wheedling, begging, berating, manipulating and generally badgering defect reporters into retracting reported defects.
  3. Refusal to fix obvious and embarrassing problems with the software because said solutions weren’t “in the requirements statement”.

These behaviors aren’t particularly difficult to understand in the context of seeking to create the impression of a better “plus/minus” score for a developer. Consider the following explanations:

  1. Estimate sandbagging allows a developer to finish way ahead of schedule, creating a heroic sort of aura for a time.
  2. Improving code quality is one way to reduce the number of defects reported against one’s code. Browbeating people into not reporting the defects is another way to do this and more expedient in the short term, in a sense.
  3. Refusing to address something on the grounds that “I never had a requirement for this” is a nice two-fer; it creates the impression that not addressing the shortcoming was a conscious decision rather than an oversight and it also identifies a different blame target (whoever is responsible for requirements).

This frank categorization may induce a chuckle or two, but I doubt I’m breaking any drastically new ground for people who work in programming shops where this kind of thing probably occurs with frequency somewhere between “here and there” and “constant”. I do think there is value in coining some terms around it though so that we can examine the issues it causes, its own root cause, and some potential solutions.

Elbridge Gerry, An Inspiration

In the early 1800s, Massachusetts had a governor by the name of Elbridge Gerry. A Democratic-Republican by party affiliation, Gerry saw trouble in the tea leaves for his own party in the state senate and decided to do something about it. You might think that he decided to take his case to the people, touting the benefits of his party and convincing them to vote for Democratic-Republicans. You’d be wrong, though.

Convincing the public to like you is hard and time consuming, and there’s no guarantee of success. Gerry had a more expedient idea. He simply signed a law that radically altered the voting districts in his state in such a way that his party would wind up with a state senate majority it didn’t have the voters to support. One of the newly created districts was so contorted and weirdly shaped that political opponents said it looked like a salamander. Gerry’s salamander became the term “gerrymander,” and the word stuck around. Today it retains its definition as a sleazy but legal practice in which politicians stack the deck in favor of their own party as they’re dealing out political cards to the voters.

Getting back to the topic of software development, the behaviors I mentioned in the first section and their attendant explanations are all examples of a behavior similar in intentions to gerrymandering. In our world, the ideal for a software group is obviously to write good code, provide good estimates, and deliver quality software on time for a good value. As an individual the ideal is to have a good “plus/minus” within the group toward that same end — to write better than average code, delivered more quickly than average for less money than average. That’s all noble, but it’s also hard. And so some developers prefer to emulate Gerry and change the rules of the game (distort time, change definitions, employ technicalities) rather than playing it well. In the “plus/minus” world, this is like lobbying to play against the third string of the other team to pad one’s statistics.

Eliminate the “Plus/Minus”, Eliminate the Gerrymandering

So, if you’re managing a development team, how do you eliminate the counter-productive gerrymandering behavior? Eliminate the individual “plus/minus” ratio mentality. Imagine that you have a software project and you tell everyone working on it that their performance and pay/bonus were going to be dictated by the ranking on a scale of 1-10 that users gave the software in a survey (encompassing usability, quantity/quality of features, etc). Let’s think about what happens to the three behaviors at the start of the post:

  1. Estimate sandbagging is pointless because coming in ahead of a schedule that you’ve defined has no bearing on your bonus and performance review.
  2. Badgering defect reporters makes no sense because the goal is fewer defects in the user’s hands rather than fewer defects reported against the individual.
  3. Refusal to fix problems (finger pointing) is also silly because whining that someone else is at fault is cold comfort in the face of a bad review and no bonus.

Now, I don’t necessarily believe that there will be entirely smooth sailing in this or any other construct. But, eliminating the focus on individual incentives and performance has some nice side effects including the one on which we’re focused: removing the tendency to try to game the system without adding value. (Another nice side effect is that you tend to downplay the unfortunate “rockstar” designation that panders to megalomaniacal personality types who tend drastically to overestimate their own irreplaceability and scare others into the same).

But How to Measure Individual Performance?

Still, the question of individual performance evaluation remains. It’s all well and good to reward or punish the whole team as a unit, but there are HR org charts, career maps and other individual concerns to think of. So how do you address this while still avoiding the gerrymandering?

It’s all about the code and source control. There are a lot of metrics out there for evaluating developer productivity, from the obtuse (counting lines of code generated or altered) to the anecdotal (“does this guy seem to do a good job?”), but I’m not really aware of one that captures the truth as well as looking at someone’s changesets/commits in the context of a code base. So, what if you had some impartial and extremely knowledgeable party read through a code base, looking at its architecture and tendencies, and then read through the changesets to give each developer some kind of rating, or at least a partial rating?

It seems almost as if you could create a cottage industry out of this and offer it as a consulting service. This way, the consultant is truly removed from any office politics and can focus entirely on the code. If one did this as a craft, I’d imagine that the addition of data points to the general corpus would cause the process to become quite refined over the course of time and probably generate some relatively objective, interesting and meaningful metrics.

Of course, this wouldn’t be perfect. It’s only as good as the evaluator is sharp, and it has the shortcoming of not capturing “intangibles” on a project — maybe someone wrote little code but spent a lot of time helping other people with source control and other technical issues, for instance. Code and changesets are mute witnesses to the story of the project, but they don’t witness everything.

Still, I think the notion is an interesting enough one that it bears exploring. Developer (or any team member) posturing and gerrymandering are counter-productive activities that put the developer’s interests above the project’s but, more problematically, misalign those two sets of interests. The combination of team-based rewards and metrics-based individual evaluation could help prevent that sort of thing.

Abstractions Are Important 5 – Type Consistency

For the fifth post in this series, I’m going to start with a mini rant.

A Digression (Rant) About Enum

Over the last couple of years, enumerations/enums have been dying a slow death in the world of code that I write. I wasn’t ever really an avid user of them, but they’ve definitely been declining even for me, to the point of virtual nonexistence. I’m not sure exactly what it is about them or about me that’s spurring this, but I’m not sorry about it at all. I don’t miss them.

I think perhaps the motivation has been an increasing desire to use polymorphism at all levels, even when the object I’m making doesn’t seem “classworthy”. That is, why would I create a “CardSuit” enum when I can just create a first class type for Suit? I think perhaps another factor in this approach of mine is that enums tend to go hand-in-hand with switches and I consider this pairing to be an anti-pattern. Even with an enum like “Suit” that is really just representing mutual exclusion, this is inevitable somewhere:

switch(mySuit)
{
    case Suit.Spades:
        return "Spades";
    case Suit.Diamonds: 
        return "Diamonds";
//etc
}

And, why do that when I could just have something like this:

public abstract class Suit
{
    public abstract string LongSuitName { get; }
//etc, as Suit needs additional behaviors that would otherwise go in switches
}

This way, I can later plop suit-specific designations into inheritors to my heart’s content without hunting for switch statements littered throughout the code. (Also, for language-agnostic readers, C# enums are a different beast than their Java counterparts — in C#, they’re really just glorified collections of constants).
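
To make that concrete, here is a minimal sketch of what a couple of inheritors might look like (my own illustration against the abstract Suit class above, not code from any real code base):

public class Spades : Suit
{
    // Suit-specific behavior lives on the type itself rather than in
    // switch statements scattered throughout the code base.
    public override string LongSuitName { get { return "Spades"; } }
}

public class Diamonds : Suit
{
    public override string LongSuitName { get { return "Diamonds"; } }
}

Client code can simply ask mySuit.LongSuitName without ever knowing or caring which concrete suit it’s holding.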

Apparently, I’m not alone in this sentiment. Just to stir things up, I googled “C# enums are evil” and came up with this interesting link from Stack Overflow guru Jon Skeet:

Since working on a Java project last year, I’ve been increasingly fed up with C#’s enums. They’re really not very object oriented: they’re not type-safe (you can cast from one enum to another via a cast to their common underlying type), they don’t allow any behaviour to be specified, etc. They’re just named constant integral values. Until I played with Java 1.5’s enum support, that wouldn’t have struck me as being a problem, but (at least in some cases) enums can give you so much more.

It’s good to see that the man who defines “10” when a recruiter or someone asks you to rate yourself 1-10 on strength in a language feels the same way as me. And, I think he really nails it with the non object-oriented comment. Enums make me feel like I’m writing kernel code in C or something when I use them.

If You’re Going to Use Them…

All that said, it’s not as if they’re going to be excised from the language tomorrow — we might as well make sure they’re used in a way that makes sense if and when they are used. In a code base that I’m in from time to time (and I’m obfuscating the problem domain a bit, but leaving the intent and meaning intact), the concept of “side” exists in the sense that anything in the domain must be left or right. Think of it as though we’re shopping for a pair of shoes. Here is how “side” is represented:

public enum Side
{
    None,
    Left,
    Right,
    Both
}

At first blush, this makes sense. We might have no shoes on, the left shoe, the right shoe, or both shoes. But, ask yourself this: “what is ‘side’ and what is it made of?” Left and right are somewhat mundane, but what about none and both? Is “none” a side? Is “both”? Do I put my “none” shoe on before the left shoe and right shoe, at which time I put on the “both” shoe? Can you infer the usage of this thing from looking at it? No, you really can’t…

And the reason you can’t infer the usage is that the enum consists of two sides and two expressions of quantity of sides. This enum is a chameleon — depending on where you’re standing and what part of it you’re looking at, it can be two different things. And, I submit that this is bad — if you’re going to use enums at all, use them to represent simple, mutually exclusive concepts (like the aforementioned “Suit” in the problem domain of playing cards).
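
If I were going to repair this particular abstraction (and this is just a sketch of one possible direction, with a made-up ShoeWearer client and a simple Shoe class like the one in the next section, not the actual fix), I’d pare the enum down to the two genuine sides and let callers express quantity themselves:

public enum Side
{
    Left,
    Right
}

public class ShoeWearer
{
    private readonly Dictionary<Side, Shoe> _shoes = new Dictionary<Side, Shoe>();

    // Callers pass exactly the sides they mean: no arguments for the
    // old "None" and both values for the old "Both".
    public void PutOnShoes(params Side[] sides)
    {
        foreach (var side in sides)
            _shoes[side] = new Shoe();
    }
}

Now PutOnShoes(Side.Left, Side.Right) replaces Side.Both, PutOnShoes() replaces Side.None, and there are no flag values for clients to decode.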

What’s the Harm?

Let’s take a look at an example (stripped down for brevity) client of this enumeration:

public class Shoe
{

}

public class EnumClient
{
    private readonly Dictionary<Side, Shoe> _shoes = new Dictionary<Side, Shoe>();

    public void PutOnShoes(Side side)
    {
        if (side == Side.None)
            return;
        if (side == Side.Both)
        {
            PutOnShoes(Side.Left);
            PutOnShoes(Side.Right);
            return;
        }

        _shoes[side] = new Shoe();
    }
}

What we’ve got here is kind of cute and clever. If I want the client to put a shoe on, I pass in the side. But, if I want to tell it to put both on, then I can just pass in Side.Both, and the method knows how to handle this. Sweet! The only immediate drawback is the fact that we have to handle Side.None. Clearly that should be a no-op that clients should avoid, but it’s just as valid as any of the other things from the compiler’s perspective, so we manually no-op.

But, is this cute bit of recursion actually as sweet as it seems? What if we write another method like this called “TakeShoesOff”? What if we write a class called ShoeSalesman that retrieves pairs of shoes? We almost always want him retrieving both shoes for clients, so we’re probably going to want to perpetuate this pattern into his methods as well, probably by copy and paste to save time. How about a ShoeStoreCashier ringing up pairs of shoes? We can take care of that with our good buddy copy and paste too. There’s no method about shoes that we can’t handle that way!
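
To illustrate where that leads (a sketch of my own, not code from the actual class), the hypothetical TakeShoesOff winds up re-encoding exactly the same special cases:

public void TakeShoesOff(Side side)
{
    // The same None/Both decoding, pasted verbatim from PutOnShoes,
    // and destined for ShoeSalesman, ShoeStoreCashier, and beyond.
    if (side == Side.None)
        return;
    if (side == Side.Both)
    {
        TakeShoesOff(Side.Left);
        TakeShoesOff(Side.Right);
        return;
    }

    _shoes.Remove(side);
}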

But, wait a second? Isn’t this starting to be kind of a code smell? If this implementation is so sweet, why does it seem all wet in the face of DRY? If you have actually done this in hundreds or thousands of places, you should be getting a sinking feeling in your stomach right about now. That feeling is the feeling that your code is suddenly and subtly incredibly brittle. This cute little encoding over the enum is actually an algorithm that is everywhere in your code.

Let’s say that I want to get in on some cuteness myself. What I’d like to do is write a method and say, “I don’t care which side you pass me, so long as a side exists — I’ll just perform an operation on the first side that I have available, if any.”

public enum Side
{
    None,
    Left,
    Right,
    Both,
    Either
}

public class NewEnumClient
{
    private readonly Dictionary<Side, Shoe> _shoes = new Dictionary<Side, Shoe>();

    public void RemoveShoes(Side side)
    {
        if (side == Side.None)
            return;
        if (side == Side.Both)
        {
            RemoveShoes(Side.Left);
            RemoveShoes(Side.Right);
            return;
        }
        if (side == Side.Either)
        {
            if (_shoes.Count > 0)
                RemoveShoes(_shoes.First().Key);
            return;
        }

        _shoes.Remove(side);
    }
}

I’ve now extended the cuteness to be more “flexible”, and I’m pretty pleased with myself, so I go ahead and deliver this code. No unit tests break, no problems emerge that I can immediately see, so life is good. But then, weird problems start to crop up after a while. People file bug reports saying that they add shoes and see nothing on the screen or that they remove shoes but nothing is deleted from the database. It’s kind of a mystery at first, but I’m forced to get to the bottom of this as the trickle of bug reports starts becoming an avalanche.

What’s going on?!? Well, what’s going on is that all of the Cute 1.0 code can’t handle the new enum value I’ve defined for Cute 2.0. So, what does it do? Well, sometimes it skips it altogether and no-ops. Sometimes it adds a new key to the dictionary — a key for which it never checks. Sometimes it throws some kind of exception where it falls into a “default” state in a switch statement someone has defined. Sometimes it throws up an error message box informing the user, “This should never happen — email Erik!” for the same reason, which is doubly bad since I’m probably going to be featured on the Daily WTF.

Wow, bummer. But, no big deal. I’ll just roll up my sleeves and upgrade Cute 1.0 to Cute 2.0 everywhere. How many can there be, right? Uh oh. Hope I’m not doing anything this weekend. For everything copy and paste programming lacks in being a good idea, it makes up for in its ability to spread through a code base like kudzu. It’s too late to revert Cute 2.0 since I now have clients that depend on it, so I’d better roll up my sleeves and get pastin’ because it’s going to be a long Saturday with some Mountain Dew and 7,400 methods that I need to change.

The Real Problem

It should be fairly obvious that any “pattern” requiring you to add 5 or 10 lines to the beginning of a bunch of methods is actually an anti-pattern, particularly if those lines are similar or identical. And some might stop here and say that getting cute in the first place here was the problem, but I contend that this is just a symptom. To tie this back in with the series, the real problem is one of abstractions.

As I asked earlier, what is “side”? Really, can you tell me? I mean, if I have a “Customer” object in my domain, you’d probably say something like “that models a customer of the enterprise for which this application was written,” or else maybe you’d just smack me upside the head for asking such an obtuse question. But for “Side” in Cute 1.0? Without using the word “side” in your definition? You might say “well, it represents where we can put a shoe” or “it represents the directions left and right,” and you’d be correct for two of the enum’s values. You might say “well, it represents a number of places that you can put a shoe” or “it’s a flag that you need to use to tell your methods how to behave,” and you’d be right for the other two values. Hmmm….

I know! It’s a doo-dad you can use to index a hash! That’s probably the most accurate way to describe it because that is unfortunately true of all four values. Unfortunate because, for two of the values, it makes absolutely no sense to do that — but at least it’s true, so we’re getting somewhere. At the end of the day, though, I think all you can really say of Side is “it’s an enum and it’s up to clients to figure out what to do with it.” And that, my friends, makes it a bad abstraction.

Here are some other enums that would be poor abstractions:

public enum Stuff
{
    Grape,
    Twelve,
    Microwave,
    Restart,
}

public enum CardSuit
{
    Spades,
    Hearts,
    Diamonds,
    Clubs,
    Black,
    Red,
    NoCard
}

public enum Dance
{
    Foxtrot,
    Tango,
    Waltz,
    DoesntLikeToDance,
    Miscellaneous
}

If you’re going to use enums, they ought to be values that are mutually exclusive, constitute a complete set, and form a clear abstraction. The first one is obviously nonsense, but the second two are enums where the person writing/extending them is about to get cute. They’re about to take an enum that has a set of mutually exclusive values and reappropriate it to use as a flag to tell their client code how to behave. Think of the code that’s about to be written — things like “if the suit is black, then do it for spades and clubs” and “if the person passed in has favorite dance property set to doesn’t like to dance…”

If you find yourself doing these kinds of cute things ask yourself what you’re really trying to accomplish. In the suit case, wouldn’t it make more sense to parameterize a method so that you could pass in multiple suits for the client code to operate on, instead of hard-coding the iteration? In the dance case, maybe it’s time for dance to be a first class object so that a “FavoriteDance” property can be set to null or a null object.
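
For the dance case, a minimal sketch of that null object idea might look like this (the names here are mine, purely for illustration):

public class Dance
{
    // Null object: a real, safe stand-in for "doesn't like to dance"
    // that spares clients from checking a special flag value.
    public static readonly Dance None = new Dance("No dance");

    public Dance(string name)
    {
        Name = name;
    }

    public string Name { get; private set; }
}

public class Dancer
{
    public Dancer()
    {
        FavoriteDance = Dance.None;
    }

    public Dance FavoriteDance { get; set; }
}

Client code can render dancer.FavoriteDance.Name without any “if the person doesn’t like to dance” special case.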

In C#, enums are already pretty much screaming, “hey, there’s something called polymorphism — use that instead of me!” Once you start adding cute, one-off flag encodings to them, you’re really dropping all pretense of enumeration being a suitable abstraction and going for the gusto with fake, procedural polymorphism forced on your clients. Please, for your sake and mine if we later work together, don’t do that. The world doesn’t have a “class reserve” à la oil that may be exhausted someday — make a class. You won’t regret it because you won’t be spending your weekends upgrading The Cute.