DaedTech

Stories about Software

How Rational Clear Case Stole my Innocence and Nearly Ruined my Life

A Simpler Time

When I was in college in the late 90s, we didn’t use source control. The concept existed but wasn’t pervasive in my environment. (Or, if it was, I have no recollection of using it.) And why would we? We were working on projects lasting less than a semester (usually far less) and in teams of three or fewer for the most part. We also generally telnet-ed into servers and stored most of our source there, since most students had Windows machines and most of our assignments required *NIX, meaning that the “backup/persistence” component of source control was already taken care of for us. In other words, we were young and carefree.

After I left college and started working, the need for source control was explained to me and I was introduced to Visual Source Safe, a product so bad that even the company that made it didn’t use it for source control. Still, it was better than no source control. If I messed things up badly I could always go back to a sane starting point and life would be good. This was no perfect solution, but I began to see the benefits of backed up work and concurrent edit management in a way that I never had in school. I was moving up in the world and making some mistakes, but the good, honest kind that spurred maturity and growth.

As the 2000’s and my development projects went on, I was exposed to CVS and then SVN. This was a golden age of tooling for me. I could merge files and create branches. Rolling back to previous versions of software was possible as was switching to some speculative sandbox. It was easier to lead projects and/or scale them up to more developers. It even became possible to ‘rescue’ and improve old legacy projects by adding this kind of tooling. The source control schemes didn’t just become part of my subconscious — they were a pleasure to use. The sky was the limit and I felt that my potential was boundless. People expected big things from me and I was not letting them down as my career progressed from junior developer to seasoned programmer to leader.

The Gathering Storm

But storm clouds were gathering on the horizon, even if I didn’t realize it yet. In just a matter of a few short years following my promising career start, everything would change. My coding style grew sloppy and haphazard. The experimentation and tinkering that had previously defined me went by the wayside. My view on programming and software in general went from enthusiastic to apathetic to downright nihilistic. So I became lazy and negatively superstitious. I had fallen in with the wrong crowd; I had started using Rational Clear Case.

It started harmlessly enough. I was introduced to the tool on a long term project and I remember thinking “wow, cool – this separates backup from change merging”, and that was the sweet side it showed to sucker me in. But, I didn’t see it that way at the time with my unbridled optimism and sunny outlook on life. The first warning sign should have been how incredibly complicated it was to set up and how difficult it was to use, but I just wanted to fit in and not seem stupid and lame, so I ignored it.

The first thing to suffer was good coding practice. With Rational Clear Case, it isn’t cool to do things like add files to source control or rename existing files. That’s for nerds and squares. With Clear Case, you add classes to existing files and keep filenames long after they make sense. If you don’t, it doesn’t go well for you. So, my class sizes tended to grow and my names tended to rot as the code changed. Correctness, brevity and the single responsibility principle weren’t worth looking dumb in front of Clear Case, and besides, if you cross it, it gets really angry and stops working for hours or even days. Even formatting and boy-scout changes weren’t worth it because of the extreme verbosity in the version tree and spurious merge conflicts that might result. Better to touch as little code as humanly possible.

The next thing to go was my interest in playing with code and experimenting. VPN connection to Clear Case was impossibly slow, so the days of logging in from home at oddball hours to implement a solution that popped into my head were over. Clear Case would also get extremely angry if I tried to sandbox a solution in another directory, using its View.dat file to create all kinds of havoc. But it was fine, I told myself — working after hours and learning through experimentation aren’t things that cool kids do.

And, where I previously thought that the world was a basically good place, filled with tools that work dependably and helpfully, Clear Case soon showed me how wrong I was. It exposed me to a world where checkins randomly failed and even crashed the machine – a world where something called the ALBD License server having a problem could make it so that you didn’t have to (in fact couldn’t) write code for a day or two. My eyes were opened to a world where nothing can be trusted and no one even knows what’s real and what isn’t. I came to question the very purpose of doing work in the first place, since files sometimes just disappear. The only thing that made sense was to do the bare minimum to get by, and maybe not even that. Clear Case never tried to drown me in stupid, idealistic fantasies like source control that works and tools that don’t radically hamper your productivity — it was the only thing in my life that told me the truth.

Or, so I thought.

Redemption

As it turned out, I had strayed — been led astray — from the path of good software development, and my closest friends and family finally staged an intervention with me to show me the kind of programmer that Clear Case was turning me into. I denied and fought against it at first, but realized that they were right. I was on the path to being a Net Negative Producing Programmer (NNPP) or washing out of the industry altogether.

At first I thought that I’d have a gradual break with Clear Case, but I soon realized that would be impossible. I had to cut all ties with it and begin a new life, re-focused on my developer dreams and being a productive member of that community. While it seemed hard at first, I’ve never looked back. And while I’ll never regain my pre-Clear-Case innocence and youthful exuberance, my life is back on track. I am once again productive, optimistic, and happy.

What’s Wrong With You, Erik?

Okay, so that may have been a little After School Special-ish. And, nobody actually had an intervention with me; I actually just worked on projects where I used different source control. And I never actually stopped working at night, factoring my classes, giving good names, etc. So why did I write all of this?

Because Clear Case made me want to stop doing all of those things. It made many, many good practices painful to do while creating a path of least resistance right through a number of terrible practices. It encouraged sloppiness and laziness while discouraging productivity and creativity, and that’s a problem.

This blog isn’t about product reviews and gripes of this nature, so it isn’t specifically my intention to dump on Clear Case (though if ever a tool deserved it…). Rather, the point here is that it’s important to evaluate the tooling that you’re using to do your work. Don’t just get used to whatever is thrown at you – constantly evaluate it to see if it meets your needs and continues to do so as time goes on. There is something to be said for familiarity with and mastery of a tool making you productive, but if you’ve mastered a crappy tool, you’re probably at a local maximum and you need to go exploring outside of your comfort zone.

Subtle Signs That You’re Using Bad Tooling

I’m going to phrase this in the negative, since I think most people have a pretty reasonable concept of good tooling. That is, if something makes you much more productive/happy/etc, you’re going to notice, so this is really about the difference between adequate tooling and bad tooling. Most people recognize bad tooling when it simply doesn’t work, crashes a lot, etc, but many will struggle to recognize it when it kinda works, you know, most of the time, sorta. So here are subtle signs that your tool is bad.

  1. You design process kludges around it (e.g. well, our IDE won’t color code methods, so we name them all Methodxxxxx()).
  2. You personify/anthropomorphize it in a negative way (e.g. Clear Case doesn’t like it when you try to rename a file).
  3. You’ll cut out ten minutes early at the end of the day specifically to avoid having to use it.
  4. You google for help on it and *crickets*.
  5. Developers on your team re-implement components of it rather than using it.
  6. You make excuses when explaining your usage of it.
  7. Bringing a new user up to speed on your process with it takes a long time and causes them to look at you disbelievingly or sadly.
  8. People don’t use it unless forced or people attempt to use other tools instead.
  9. You google the product and find more angry rants or posts like this one than helpful sites and blog how-tos.
  10. People on your team spend time solving the tool instead of using the tool to solve business problems.
  11. You think about it a lot when you’re using it.

So When is Tooling Good?

Apart from the obvious shouting for joy when using it and whatnot, there is a subtlety to this as well, but I think it’s mainly tied to item (11). A good tool is one that you don’t think about when using it. For instance, I love Notepad++. I use it daily and quite probably hourly for a wide variety of tasks since it is my go-to text editor. But the only time I ever really think about it is when I’m on a machine where it isn’t installed, and I get stuck with the regular Notepad when opening a text file. Notepad++ and its use are so second nature to me that I hardly ever think about it (with the obvious exception of when I might want to learn more about it or explore features).

If you take this advice to heart and want to constantly reassess your tooling, I’d say the single best measure is to see how frequently or infrequently you notice the tool. All of the other bad-tooling symptoms in the bullet points above are certainly relevant, but most of them are really fruit of the tree of (11). If you’re creating kludges for, making excuses about, googling, or personifying a tool, the common thread is that you’re thinking about it. If, on the other hand, the tool kind of fades into the background of your daily life and allows (and helps) you to focus on other problems, it is helping you, and it is a good tool.

So don’t let Clear Case or anything else steal your innocence or ruin your life; don’t tolerate a tool that constantly forces you to think about it as you battle it. Life is too short.

By the way, if you liked this post and you're new here, check out this page as a good place to start for more content that you might enjoy.

How To Keep Your Best Programmers

Getting Philosophical

Given that I’ve just changed jobs, it isn’t entirely surprising that I’ve had a lot of conversations recently about why I decided to do so. Generally when someone leaves a job, coworkers, managers, HR personnel, friends, and family are all interested in knowing why. Personally, I tend to give unsatisfying answers to this question, such as, “I wanted a better opportunity for career advancement,” or, “I just thought it was time for a change.” This is the corporate equivalent of “it’s not you–it’s me.” When I give this sort of answer, I’m not being diplomatic or evasive. I give the answer because I don’t really know, exactly.

Don’t get me wrong. There are always organizational gripes or annoyances anywhere you go (or depart from), and it’s always possible that someone will come along and say, “How would you like to make twice as much money doing the coolest work imaginable while working from home in your pajamas?” or that your current employer will say, “We’re going to halve your pay, force you to do horrible grunt work, and send you to Antarctica to do it.” It is certainly possible that I could have a specific reason for leaving, but that seems more the exception than the rule.

As a general practice, I like to examine my own motivations for things that I do. I think this is a good check to make sure that I’m being rational rather than impulsive or childish. So I applied this practice to my decision to move on and the result is the following post. Please note that this is a foreword explaining what got me thinking along these lines, and I generalized my opinion on my situation to the larger pool of software developers. That is, I’m not intending to say, “I’m the best and here’s how someone can keep me.” I consider my own programming talent level irrelevant to the post and prefer to think of myself as a competent and productive developer, distinguished by enthusiasm for learning and pride in my work. I don’t view myself as a “rock star,” and I generally view such prima donna self-evaluation to be counterproductive and silly.

What Others Think

Some of my favorite blog posts that I’ve read in the last several years focus on the subject of developer turnover, and I think that these provide an excellent backdrop for this subject. The oldest one that I’ll list, by Bruce Webster, is called “The Wetware Crisis: the Dead Sea Effect,” and it coins an excellent term for a phenomenon with which we’re all probably vaguely aware on either a conscious or subconscious level. The “Dead Sea Effect” is a description of some organizations’ tendency to be so focused on retention that they inadvertently retain mediocre talent while driving better talent away:

…what happens is that the more talented and effective IT engineers are the ones most likely to leave — to evaporate, if you will. They are the ones least likely to put up with the frequent stupidities and workplace problems that plague large organizations; they are also the ones most likely to have other opportunities that they can readily move to.

What tends to remain behind is the ‘residue’ — the least talented and effective IT engineers. They tend to be grateful they have a job and make fewer demands on management; even if they find the workplace unpleasant, they are the least likely to be able to find a job elsewhere. They tend to entrench themselves, becoming maintenance experts on critical systems, assuming responsibilities that no one else wants so that the organization can’t afford to let them go.

Bruce describes a paradigm in which the reason for talented people leaving will frequently be that they are tired of less talented people in positions of relative (and by default) authority telling them to do things–things that are “frequent stupidities.” There is an actual inversion of the pecking order found in meritocracies, and this leads to a dysfunctional situation that the talented either avoid or else look to escape as quickly as possible.

Bruce’s post was largely an organizational perspective; he talked about why a lot of organizations wind up with an entrenched group of mediocre senior developers, principals, and managers without touching much on the motivation for the talented to leave beyond the “frequent stupidities” comment. Alex Papadimoulis from the Daily WTF elaborates on the motivation of the talented to leave:

In virtually every job, there is a peak in the overall value (the ratio of productivity to cost) that an employee brings to his company. I call this the Value Apex.

On the first minute of the first day, an employee’s value is effectively zero. As that employee becomes acquainted with his new environment and begins to apply his skills and past experiences, his value quickly grows. This growth continues exponentially while the employee masters the business domain and shares his ideas with coworkers and management.

However, once an employee shares all of his external knowledge, learns all that there is to know about the business, and applies all of his past experiences, the growth stops. That employee, in that particular job, has become all that he can be. He has reached the value apex.

If that employee continues to work in the same job, his value will start to decline. What was once “fresh new ideas that we can’t implement today” become “the same old boring suggestions that we’re never going to do”. Prior solutions to similar problems are greeted with “yeah, we worked on that project, too” or simply dismissed as “that was five years ago, and we’ve all heard the story.” This leads towards a loss of self actualization which ends up chipping away at motivation.

Skilled developers understand this. Crossing the value apex often triggers an innate “probably time for me to move on” feeling and, after a while, leads towards inevitable resentment and an overall dislike of the job. Nothing – not even a team of on-site masseuses – can assuage this loss.

On the other hand, the unskilled tend to have a slightly different curve: Value Convergence. They eventually settle into a position of mediocrity and stay there indefinitely. The only reason their value does not decrease is because the vast amount of institutional knowledge they hoard and create.

This is a little more nuanced and interesting than the simple meritocracy inversion causing the departure of skilled developers. Alex’s explanation suggests that top programmers are only happy in jobs that provide value to them and jobs to which they provide increasing value. The best and brightest not only want to grow but also to feel that they are increasingly useful and valuable–indicative, I believe, of pride in one’s work.

In an article written a few years later titled “Bored People Quit,” Michael Lopp argues that boredom is the precursor to developers leaving:

As I’ve reflected on the regrettable departures of folks I’ve managed, hindsight allows me to point to the moment the person changed. Whether it was a detected subtle change or an outright declaration of their boredom, there was a clear sign that the work sitting in front of them was no longer interesting. And I ignored my observation. I assumed it was insignificant. He’s having a bad day. I assumed things would just get better. In reality, the boredom was a seed. What was “I’m bored” grew roots and became “I’m bored and why isn’t anyone doing anything about it?” and sprouted “I’m bored, I told my boss, and he… did nothing,” and finally bloomed into “I don’t want to work at a place where they don’t care if I’m bored.”

I think of boredom as a clock. Every second that someone on my team is bored, a second passes on this clock. After some aggregated amount of seconds that varies for every person, they look at the time, throw up their arms, and quit.

This theme of motivation focuses more on Alex’s “value provided to the employee” than “value that employee provides,” but it could certainly be argued that it includes both. Boredom implies that the developer gets little out of the task and that the perceived value that he or she is providing is low. But, beyond “value apex” considerations, bored developers have the more mundane problem of not being engaged or enjoying their work on a day to day basis.

What’s the Common Thread?

I’m going to discount obvious reasons for leaving, such as hostile work environment, below-market pay, reduction of benefits/salary, etc., as no-brainers and focus on things that drive talented developers away. So far, we’ve seen some very compelling words from a handful of people that roughly outline three motivations for departure:

  • Frustration with the inversion of meritocracy (“organization stupidities”)
  • Diminishing returns in mutual value of the work between programmer and organization
  • Simple boredom

To this list I’m going to add a few more things that were either implied in the articles above or that I’ve experienced myself or heard from coworkers:

  • Perception that current project is futile/destined for failure accompanied by organizational powerlessness to stop it
  • Lack of a mentor or anyone from whom much learning was possible
  • Promotions a matter of time rather than merit
  • No obvious path to advancement
  • Fear of being pigeon-holed into unmarketable technology
  • Red-tape organizational bureaucracy mutes positive impact that anyone can have
  • Lack of creative freedom and creative control (aka “micromanaging”)
  • Basic philosophical differences with majority of coworkers

Looking at this list, a number of these are specific instances of the points made by Bruce, Alex and Michael, so they aren’t necessarily advancements of the topic per se, though you might nod along with them and want to add some of your own to the list (and if you have some you want to add, feel free to comment). But where things get a little more interesting is that pretty much all of them, including the ones from the linked articles, fall into a desire for autonomy, mastery, or purpose. For some background, check out this video from RSA Animate. The video is great watching, but if you haven’t the time, the gist of it is that humans are not motivated economically toward self-actualization (as widely believed) but are instead driven by these three motivating factors: the desire to control one’s own work, the desire to get better at things, and the desire to work toward some goal beyond showing up for 40 hours per week and collecting a paycheck.

Frustration with organizational stupidity is usually the result of a lack of autonomy and the perception of no discernible purpose. Alex’s value apex is reached when mastery and purpose wane as motivations, and boredom with a job can quite certainly result from a lack of any of the three RSA needs being met. But rather than sum up the symptoms with these three motivating factors, I’m going to roll it all into one. You can keep your good developers by making sure they have a compelling narrative as employees.

Guaranteeing the Narrative

Bad or mediocre developers are those who are generally resigned or checked out. They often have no desire for mastery, no sense of purpose, and no interest in autonomy because they’ve given up on those things as real possibilities and have essentially struck a bad economic bargain with the organization, pay amount notwithstanding. That is, they give up on self-actualization in exchange for a company paying a mortgage, a few car payments, and a set of utilities for them. I’ve heard a friend of mine call this “golden handcuffs.” They have a pre-defined narrative at work: “I work for this company because repo-men will eventually show up if I don’t.” These aren’t necessarily bad or unproductive employees, but they’re pretty unlikely to be your best and brightest, and you can be assured that they will tend to put forth the minimum amount of effort necessary to hold up their end of the bad bargain.

These workers are easy to keep because that is their default state of affairs. Going out and finding another job is not the minimum effort required to pay the bills, so they won’t do it. They are Bruce’s “residue” and they will tend to stick around and earn obligatory promotions and pay increases by default, and, unchecked, they will eventually sabotage the RSA needs of other, newer developers on the team and thus either convert them or drive them off. The narrative that you offer them is, “Stick around, and every five years we’ll give you a promotion and a silver-plated watch.” They take it, considering the promotion and the watch to be gravy.

But when you offer that same narrative to ambitious, passionate, and talented developers, they leave. They grow bored, and bored people quit. They refuse to tolerate that organizational stupidity, and they evaporate. They look for “up or out,” and, realizing that “out” is much quicker and more appealing, they change their narrative on their own to “So long, suckers!”

You need to offer your talented developers a more appealing narrative if you want them to stay. Make sure that you take them aside and reaffirm that narrative to them frequently. And make sure the narrative is deterministic in that their own actions allow them to move toward one of the goals. Here are some narratives that might keep developers around:

  • “If you implement feature X on or ahead of schedule, we will promote you.”
  • “With the work that we’re giving you over the next few months, you’re going to become the foremost NoSQL expert in our organization.”
  • “We recognize that you have a lot of respect for Bob’s Ruby work, so we’re putting you on a project with him to serve as your mentor so that you can learn from him and get to his level.”
  • “We’re building an accounting package that’s critical to our business, and you are going to be solely responsible for the security and logging portions of it.”
  • “If your work on project Y keeps going well, we’re going to allow you to choose your next assignment based on which language you’re most interested in using/learning.”

Notice that these narratives all appeal to autonomy/mastery/purpose in various ways. Rather than dangling financial or power incentives in front of the developers, the incentives are all things like career advancement/recognition, increased autonomy, opportunities to learn and practice new things, the feeling of satisfaction you get from knowing that your work matters, etc.

And once you’ve given them some narratives, ask them what they want their own to be. In other words, “we’ll give you more responsibility for doing a good job” is a good narrative, but it may not be the one that the developer in question envisions. It may not always be possible to give the person exactly what he or she wants, but at least knowing what it is may lead to attractive compromises or alternate ideas. A new team member who says, “I want to be the department’s principal architect” may have his head in the clouds a bit, but you might be able to find a small, one-man project and say, “start by architecting this and we’ll take it from there.”

At any point, both you and the developers on your team should know their narratives. This ensures that they aren’t just periodic, feel-good measures–Michael’s “diving saves”–but constant points of job satisfaction and purpose. The developers’ employment is a constant journey that’s going somewhere, rather than a Sisyphean situation where they’re running out the clock until retirement. With this approach, you might even find that you can coax a narrative out of some “residue” employees and reignite some interest and productivity. Or perhaps defining a narrative will lead you both to realize that they are “residue” because they’ve been miscast in the first place and there are more suitable things than programming they could be doing.

Conclusion

The narratives that you define may not be perfect, but they’ll at least be a start. Don’t omit them, don’t let them atrophy and, whatever you do, don’t let an inverted meritocracy–the “residue”–interfere with the narrative of a rising star or top performer. That will catapult your group into a vicious feedback loop. Work on the narratives with the developers and refine them over the course of time. Get feedback on how the narratives are progressing and update them as needed.

Alex thinks that departure from organizations is inevitable, and that may be true, but I don’t know that I fully agree. I think that as long as talented employees have a narrative and some aspirations, their value apex need not level off. This is especially true at, say, consulting firms where new domains and ad-hoc organization models are the norm rather than the exception. But what I would take from Alex’s post is the perhaps radical idea that it is okay if the talented developer narrative doesn’t necessarily involve the company in five or ten years. That’s fine. It allows for replacement planning and general, mutual growth. Whatever the narrative may be, mark progress toward it, refine it, and make sure that your developers are working with and toward autonomy, mastery, and purpose.

Connecting to TFS Server with Different Credentials

Hello, all. My apologies for the unannounced posting hiatus. I’ve recently started a new employment venture, and I was also on vacation last week, touring the Virginia/Pennsylvania/DC area. Going forward, I’m going to be doing more web-related stuff and probably be a little more of a jack of all trades, so stay tuned for posts along those lines.

Today, I’m going to post under “lessons learned” for getting rid of an annoyance. Every once in a while I have occasion to connect to a TFS Server using different credentials from those with which I have logged in. Whenever I do this, I am prompted for credentials when connecting to source control, which can be fairly annoying. Well, thanks to Mark Smith for a recent tip on how to avoid this.

In Windows 7/Server, go to the Control Panel and choose the “Credential Manager”. In a strange quirk, “Credential Manager” isn’t actually visible in the default control panel view, and so you have to click “View By” and select something other than “Category”. Once you’ve done this, you should see the Credential Manager.

In Credential Manager, go to “Add a Windows Credential” and enter the computer name, along with your login credentials for it. You’ll probably want to include the domain, so your username will be YOURDOMAIN\YOURUSERNAME. The domain isn’t strictly necessary if both logins are on the same domain, but I think a common scenario is you’re logged in to the local machine and connecting to a TFS server on a domain somewhere.

Once you’re done, you might need to restart Visual Studio. (Truthfully, I don’t know because I had already closed it when I was doing this).

Richard Banks has posted this same process with screenshots (minus the bit about Credential Manager not showing up by default).

And, that’s it. Spend 30 seconds doing it and save yourself daily or even more frequent annoyance from here forward. Cheers!

Methods Are Little Stories – Abstractions Are Important 6

If Then, If Then, If Then

Yesterday’s post where I included Grady Booch’s comment that clean code “reads like well written prose” made me think of something I’ve been contemplating. The other day I was looking at some code and I saw the following (obfuscated):

public void GrabUmbrellaIfNecessary()
{
    if (IsItRaining())
    {
        if (DoINeedToLeave())
        {
            if (AmIParkedInTheStreet())
            {
                GrabUmbrella();
            }
        }
    }
}

I automatically started refactoring this to the following:

public void GrabUmbrellaIfNecessary()
{
    if (IsItRaining() && DoINeedToLeave() && AmIParkedInTheStreet())
        GrabUmbrella();
}

and then:

public void GrabUmbrellaIfNecessary()
{
    if (DoINeedAnUmbrella())
        GrabUmbrella();
}

private bool DoINeedAnUmbrella()
{
    return IsItRaining() && DoINeedToLeave() && AmIParkedInTheStreet();
}

To me, this keeps in line with Grady’s statement and is easy to reason about at every level. “If I need an umbrella, get it” and “I need an umbrella if it’s raining, I need to leave, and I’m parked in the street” are pieces of code so simple that one need not be a programmer to understand the logic. I think it’s hard to argue that this is less conversational than “If it’s raining then if I need to leave then if I am parked in the street then grab an umbrella.”

But does this matter? Am I just being fussy and shuffling around the code to no real benefit? Are there advantages to the “ifception” approach (thanks to Dan Martin for this term)? Why would someone prefer this style? These were the things that I found myself contemplating.

The Case For Ifception?

In order to understand possible advantages or reasons for this preference, I sought to figure out the motivation. My first thought was that someone would write code this way if they missed the week in discrete math/logic where DeMorgan’s laws and the rules of inference in Boolean Algebra were covered. However, I don’t like to assume incompetence or ignorance when the only evidence present is evidence only of a different preference than mine, so let’s dismiss that as a motivation.

The second thing that occurred to me was a lack of awareness of, or mistrust in, short-circuit evaluation of Boolean operators. To put it another way, perhaps the author believes that all three conditions will be evaluated even if the first one fails, making the ifception more efficient. But, again, this requires an assumption of ignorance, so let’s assume that the author understands conditional short-circuiting.
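For anyone shaky on the semantics, here’s a minimal sketch (the helper methods are hypothetical stand-ins for the ones in the example) showing that C#’s && never evaluates its right operand when the left one is false:

```csharp
using System;

class ShortCircuitDemo
{
    static bool IsItRaining()
    {
        Console.WriteLine("Evaluated IsItRaining");
        return false; // not raining
    }

    static bool DoINeedToLeave()
    {
        // Never reached in this run, because && short-circuits.
        Console.WriteLine("Evaluated DoINeedToLeave");
        return true;
    }

    static void Main()
    {
        // Prints only "Evaluated IsItRaining"; the second check is skipped,
        // exactly as if it lived inside a nested if.
        if (IsItRaining() && DoINeedToLeave())
            Console.WriteLine("Grabbing umbrella");
    }
}
```

So the flattened conditional and the nested ifs evaluate exactly the same checks in the same order, and bail out at the same point.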

After that, a slightly more valid motivation dawned on me (and one that doesn’t assume ignorance or incompetence) – the author loves the debugger! That is, perhaps the code’s author prefers it this way because he or she likes being able to step through the method and watch the short-circuiting or success in action.

As I poked around a little more, I found code of this form in the same class as well:

public void GrabUmbrellaIfNecessary()
{
    if (!IsItRaining())
        return;

    if (!DoINeedToLeave())
        return;

    if (!AmIParkedInTheStreet())
        return;

    GrabUmbrella();
}

Two data points now seem to point to my conclusion. I believe the motivation here is Debugger Driven Development (DDD) — a term that I’ll use to describe the approach where you write production code specifically designed to be stepped through in the debugger. This is a rather pessimistic approach since it seems to say “when you’re dealing with my code you’re going to be in the debugger… a lot… seriously, I have no idea how or even if my code works.”

I will also allow for the possibility that someone might view these approaches as inherently more readable, but I can only imagine that’s the case as a result of familiarity. In other words, this style is only more readable if it’s what you expect to see — I doubt anyone not versed in programming would gravitate toward nested conditionals and/or return statements as approachable.

If anyone can think of an additional benefit that I’m missing, please let me know. Or, in other words:

public void LetMeKnowIfIAmMissingABenefit()
{
    if (!DoesABenefitExist())
        return;

    if (HasItAlreadyBeenMentioned())
        return;

    if (!AmIMissingIt())
        return;

    PleaseLetMeKnow();
}

Does It Really Matter?

So having tentatively identified a motivation for ifceptions (DDD), is this style preferable? Harmless? To be avoided? I actually wrestled with this for a while before forming my opinion. The style is very different from what I prefer and am used to, but I try very hard not to conflate “I’m not used to that” with “That’s bad”. Doing so is the height of arrogance and will greatly hinder one’s ability to learn.

That said, the conclusion I came to was that this should be avoided if possible. And the reason it should be avoided boils down to method level abstractions. A method should tell a story. The method “GrabUmbrellaIfNecessary” tells a story — it tells you that it’s going to figure out whether an umbrella is needed and grab it if so. As a client of that method, you’re going to take it at face value that it does what it advertises, but if you do decide to drill into the method, you’re expecting to see a concise implementation of what’s advertised.

In the factored example, that’s exactly what you see. What better captures “GrabUmbrellaIfNecessary” than a single if condition for “DoINeedAnUmbrella” followed by a “GrabUmbrella” for a true evaluation? But what about the ifception example? Well, I see that there’s a condition to see whether it’s raining or not and then a scoped block of code with another conditional. Oh, okay, if it’s raining, we’ll get in there and then we’ll see if I need to leave in which case we’ll get… another scoped block of code. Okay, okay, so now, we need to know where I’m parked and, what were we doing again? Oh, right, we’re seeing whether we need to get into another scoped block of code. Ah, okay, if we’re parked in the street, here’s the meat of the method – grab the umbrella!

Notice that in the ifception reading, you see words like “scope” and “block”. I’m having to think of scoping rules, brackets, nested conditionals, control flow and other language constructs. Each of these things has exactly nothing to do with whether I should bring my umbrella or not and yet I’m thinking of them. If you look at the flattened early return method, a similar thing is happening:

If it’s not raining, then return. Okay, assuming we’re still in the method, if it’s not true that I need to leave, then return. Okay, now if I’m still in the method, if I’m not parked in the street then return. Okay, if I’m still in the method, then get the umbrella.

“Return”? “In the method”? What does that have to do with whether I need an umbrella and getting that umbrella? And why do I care about “not is raining” if I’m trying to figure out whether to use an umbrella? If it’s not not raining, do I need an umbrella? I think… ugh.

An easy argument to make at this point is “Erik, you’re a programmer and you should understand things like scope and early returns — don’t be lazy.” While this is true, it’s also true that I’m capable of squinting and making out tiny, bright yellow font against a white background. In neither case is that enjoyable or a good use of my time and effort when it’s possible simply to have more clarity and ease of reading.

Programmers are paid to solve problems by handling and abstracting away complexity, and this applies to fellow programmers as well as to end-users. It’s easy to lose sight of this and believe that knowledge of particular languages, frameworks, etc., is the end goal, but that knowledge is simply a tool used in pursuit of the actual end goal: problem solving. If methods are written in such a way as to make them read like “well written prose”, I don’t have to focus on language syntax and details. I don’t have to be acutely aware that I’m in the middle of a method, figuring out scope and/or when to return. Instead, I can focus on the business logic of my problem domain and meeting user requirements.

Method writing techniques that make language syntax an afterthought are good abstractions. They hide the bare-metal, nitty-gritty details of writing C# (or any language/framework) as much as possible, exposing only enough to facilitate understanding of the problems being solved. And while it’s never going to be possible to avoid all scoping, returning and other such method housekeeping, you can certainly arrange your methods in such a way as to minimize and hide it, rather than to distract readers by calling attention to it.


Learning to Love The Linq Any() Method

There is a Linq extension method that I managed to miss for a long time: Any(). It’s fairly natural to miss, I believe, since the iterate-over-a-collection paradigm tends to be fully ingrained in our way of doing things. That is, if I want to see whether any members of a collection match a certain criterion, I do something like this:

bool doesDirectExist = false;
foreach (var myFeature in features)
    doesDirectExist |= myFeature.ID == "Direct";

I cycle through my collection of features looking for “Direct” and, if I find one, the boolean flag is flipped to true. Otherwise, it remains false. If I wanted to get a little more efficient, I could break if the flag got flipped or do this in a two-condition while loop.

This hearkens back to the days of storing values in simple arrays in languages older than C#, before the existence of IEnumerable, foreach and other such constructs. For those of us who learned this way, iterate-and-check is quite natural.
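For completeness, the early-exit variant alluded to above might look like this – a sketch, with the Feature type and "Direct" ID borrowed from the example:

```csharp
using System.Collections.Generic;

class Feature
{
    public string ID { get; set; }
}

class IterateAndCheck
{
    // Classic loop-and-flag search, with a break once a match is found
    // so we don't scan the rest of the collection pointlessly.
    static bool ContainsDirect(IEnumerable<Feature> features)
    {
        bool doesDirectExist = false;
        foreach (var myFeature in features)
        {
            if (myFeature.ID == "Direct")
            {
                doesDirectExist = true;
                break;
            }
        }
        return doesDirectExist;
    }
}
```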

But then, Linq came along and gave us the handy-dandy Where(Func<T, bool>) extension method:

bool doesDirectExist = features.Where(f => f.ID == "Direct").Count() > 0;

This accomplishes the same thing, but as a one-liner, which is nice in itself. One caveat worth noting: Where() uses deferred execution, but calling Count() still forces enumeration of the entire collection, so this version is no more efficient than the loop. Where() returns an enumeration, which is more or less a collection (yes, yes, I understand that it’s not really, but that’s how we tend to regard it), so the natural-seeming thing to do is query it for a count. I remained content with this for a long time, until I discovered Any().

I can achieve the same thing with:

bool doesDirectExist = features.Where(f => f.ID == "Direct").Any();

This is a lot more semantically clear. I’m not really interested in an integer comparison — I want to know whether there are any features matching my criteria. It may not seem important, but cutting down on noise in the code is very important for promoting readability since, in the excellent words of Grady Booch, “clean code reads like well-written prose.” In other words, this code says “do any features exist where the id is equal to ‘direct’?” as opposed to “is the count of features with id equal to ‘direct’ greater than zero?” The former reads a lot more conversationally than the latter.

But, we have one more improvement to make here:

bool doesDirectExist = features.Any(f => f.ID == "Direct");

In terms of prose-like, conversational reading, this is pretty much still “do any features exist where the id is equal to ‘direct’?” The “where” is implied here, I would say. But, now the code is more succinct, and there is one fewer method call. Brevity is generally good.
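To see the readability and efficiency wins in one place, here’s a small demo (the sample data is invented for illustration) that counts how many elements each approach actually examines:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Feature
{
    public string ID { get; set; }
}

class AnyDemo
{
    static void Main()
    {
        var features = new List<Feature>
        {
            new Feature { ID = "Direct" },  // match is first, so Any() can stop immediately
            new Feature { ID = "Proxy" },
            new Feature { ID = "Tunnel" },
        };

        int examined = 0;
        Func<Feature, bool> isDirect = f => { examined++; return f.ID == "Direct"; };

        // Any() stops enumerating at the first match...
        bool viaAny = features.Any(isDirect);
        Console.WriteLine($"Any(): {viaAny}, examined {examined}");     // examined 1

        // ...while Where().Count() walks the entire collection.
        examined = 0;
        bool viaCount = features.Where(isDirect).Count() > 0;
        Console.WriteLine($"Count(): {viaCount}, examined {examined}"); // examined 3
    }
}
```

Same answer either way, but Any() bails out as soon as it knows the answer.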

I’ve been using this for a while now, but I still encounter old code of mine where I wasn’t aware of this little gem, and I encounter code written by others who aren’t aware of it. So, if you’re one of those people, be aware of it now and use it in good health!
