DaedTech

Stories about Software


Favor Outcomes Over Rules

Cargo Cults and Angry Monkeys

If you have never heard the term “Cargo Cult Programming,” it’s an excellent way to describe blindly following processes without understanding the theory behind them. The term originates from a story about aboriginal islanders during World War II. During the war, cargo planes regularly (and, from their perspective, magically) arrived with supplies and food that benefited these islanders, and then after the war the planes stopped coming. This apparently didn’t stop the aboriginals from building ad-hoc landing strips and the like in hopes of ‘summoning’ more planes with supplies. If that story isn’t to your liking, there is another one of questionable accuracy about monkeys and bananas.

I’ve made a fair number of posts lately that address subjects near and dear to programmers while still having broader reach, and this is certainly another such subject. Doing things without understanding why is generally the province of the incurious or the busy, but I think it’s worth forcing yourself to ask why you’re doing pretty much anything, particularly if you’re in a fairly cerebral line of work. As professionals, it behooves us to realize that we want supplies or that we want not to get sprayed rather than thinking that building landing strips and beating up monkeys are just things that we do and such is life.

In fact, I’d venture to say that we pride ourselves on this sort of thing as we advance in our careers. As programmers, we even help our users understand this concept. “I understand that you normally push the green button three times and hit the backspace key while pressing the mute button, but what are you actually trying to do when you do all that?” How often have you asked a user something like this? In essence you’re saying “forget the process for a minute — what’s the larger goal here?”

Do Unto Others

It then bears asking, if we pride ourselves on critical thinking and we seek to help users toward that end, why do we seem to encourage each other to dance for supplies and spray our team members with cold water? Why do we issue cryptic cargo cult orders to other programmers when we understand our own rationale? Let me give an example.

A few years back I was setting up a new machine for work on a project to which I was new, and the person guiding me through the setup told me that I needed to install KDiff, a spectacularly average comparison tool that even then hadn’t been revved or maintained in a few years. Now, I have nothing, per se, against KDiff, and the price is right, but this struck me as a very… specific… order. What difference did it make what I used for diff? I had used a paid version of Beyond Compare prior to that and always liked it, but when it comes to tooling I do enjoy poking around and so I didn’t mind trying a different tool.

I’m pickier about some things than others, so I just said “Bob’s Your Uncle,” installed KDiff and didn’t think about it again for a long time. It was actually many months later that I was sitting in on a code review for a developer who had just started on the project and found out the reasoning behind this cargo cult practice. The developer whose code was being reviewed casually opened BeyondCompare, and one of the reviewers, more tenured than I, promptly freaked out that he wasn’t using KDiff. Bemused, I tuned in to hear the answer to a question that I’d completely forgotten about. The issue was that the project had a handful of XML files containing application meta-data that were source controlled, and that some unnamed developer had, at some point, done something, presumably with some sort of diff tool, that had allegedly hosed one of these files by screwing up the text encoding. Nobody in the room, including the freaker-outer, knew who this was or exactly how or when it had happened, but nonetheless, it had been written in stone that from that day, Thou Shalt Use KDiff if you want to avoid punishment in the form of Biblical plagues.

This certainly wasn’t my only experience with such a thing. More recently, I overheard that the procedure for editing a particular file in source control was to grab the latest, edit it really fast and then check it in immediately. I was a bit perplexed by this until I learned that the goal was to avoid merge conflicts as this was a commonly edited file. I thought, why not just say in the first place that a lot of people edit this file and that merge conflicts are bad, in the same way that I had previously thought “why not just tell people that you’re worried about messed up encodings?”

And that line of questioning really drives to the heart of the matter here. Developers are generally quite skilled and pretty intelligent people and, more importantly, they tend to be people who like to solve problems. So why not give them the benefit of the doubt when you’re working with them? Instead of giving your peers rote procedures to follow because they happened to work for you once, why not explain the problem and say “this is how I solved it, but if you have a better idea, I’m all ears!”

And you know what? They might. What if instead of forcing people to use KDiff, someone had written a validation script for the offending file that prevented checkins of a badly formed edit? What if instead of having flash edit-checkins of a file, the design of the application were altered to eliminate the contention and potential for error around that file? I suggest these things because I’m a fan of making the bad impossible. Are they the greatest ideas? Are they even practical in those situations? I honestly don’t know, but they might be approaches that have the advantage of placing fewer restrictions on developers or giving them one less thing to remember.
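To make that first suggestion concrete, here’s a minimal sketch of the sort of validation I have in mind. The file name, the expected encoding, and the exact checks are all assumptions on my part, since I never learned the details of the original incident; the point is just that a check-in gate can verify the outcome you actually care about instead of dictating a tool:

using System;
using System.Xml;

class MetadataFileValidator
{
    static int Main()
    {
        try
        {
            //Refuse the check-in if the file is no longer well-formed XML
            var document = new XmlDocument();
            document.Load("ApplicationMetadata.xml");   //hypothetical file name

            //Refuse the check-in if a diff/merge tool quietly changed the declared encoding
            var declaration = document.FirstChild as XmlDeclaration;
            if (declaration == null || !string.Equals(declaration.Encoding, "utf-8", StringComparison.OrdinalIgnoreCase))
            {
                Console.Error.WriteLine("Metadata file must declare utf-8 encoding.");
                return 1;
            }

            return 0;
        }
        catch (XmlException ex)
        {
            Console.Error.WriteLine("Metadata file is not well-formed XML: " + ex.Message);
            return 1;
        }
    }
}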

I would encourage you to trust your peers to grasp the bigger picture whenever possible (unless perhaps they’ve demonstrated that this trust isn’t warranted). As I say in the title, it seems like a good idea to favor describing desired outcomes over inventing rules to be memorized. With the former approach, you free the people you work with to be creative. With the latter approach, you constrain them and possibly even stress them out or frustrate them. And who knows, they may even surprise you with ideas and solutions that make your life easier.


Regions Are a Code Smell

There’s a blogger named Iris Classon who writes a series of posts called “stupid questions,” where she essentially posts a discussion-fueling question once per day. Recently, I noticed one there called “Should I Use Regions in my Code?”, and the discussion that ensued seemed to be one of whether regions were helpful in organizing code or whether they constituted a code smell.

For those of you who are not C# developers, a C# region is a preprocessor directive that the Visual Studio IDE (and probably others) uses to provide entry points for collapsing and hiding code. Eclipse lets you collapse methods, and so does VS, but VS also gives you this way of pre-defining larger segments of code to collapse. Here is a before and after look:
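The illustration here is necessarily text-only, so consider a hypothetical Customer class (invented purely for this example) marked up with regions, followed by what the IDE shows once they’re collapsed:

public class Customer
{
    #region Constructors

    public Customer() { }

    public Customer(string name)
    {
        Name = name;
    }

    #endregion

    #region Properties

    public string Name { get; set; }

    #endregion
}

//Collapsed in Visual Studio, the body of the class reads as just two labeled lines:
//    Constructors
//    Properties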

(As an aside, if you use CodeRush, it makes the regions look much nicer when not collapsed.)

I have an odd position on this. I’ve gotten used to the practice because it’s simply been the coding standard on most projects that I’ve worked on in the last few years, and so I find myself doing it habitually even when working on my own stuff. But I definitely think they’re a code smell. Actually, let me refine that. I think regions are more of a code deodorant or code cologne. When we get up in the morning, we shower and put on deodorant and maybe cologne/perfume before going about our daily business (most do, anyway). And not to be gross or cynical, but the reason that we do this is that we kind of expect that we’re going to naturally gravitate toward stinking throughout the day and we’re engaging in some preventative medicine.

This is how I view regions in C# code, in a sense. Making them a coding standard or best practice of sorts is like teaching your children (developers, in the metaphor) that not bathing is fine, so long as they religiously wear cologne. So, in the coding world, you’re saying to developers, “Put in your regions first so that I don’t have to look at your unwieldy methods, haphazard organization and gigantic classes once you inevitably write them.” You’re absolving them of the responsibility for keeping their code clean by providing and, in fact, mandating a way to make it look clean without being clean.

So how do I justify my hypocrisy on this subject of using them even while thinking that they tend to be problematic? Well, at the heart of it lies my striving for Clean Code, following SRP, small classes, and above all, TDD. When you practice TDD, it’s pretty hard to write bloated classes with lots of overloads, long methods and unnecessary state. TDD puts natural pressure on your code to stay lean, compact and focused in much the same way that regions remove that same pressure. It isn’t unusual for me to write classes and region them and to have the regions with their carriage returns before and after account for 20% of the lines of code in my class. To go back to the hygiene metaphor, I’m like someone who showers quite often and doesn’t sweat, but still wears deodorant and/or cologne. I’m engaging in a preventative measure that’s largely pointless but does no harm.

In the end, I have no interest in railing against regions. I don’t think that people who make messes and use regions to hide them are going to stop making messes if you tell them to stop using regions. I also don’t think using regions is inherently problematic; it can be nice to be able to hide whole conceptual groups of methods that don’t interest you for the moment when looking at a class. But I definitely think it bears mentioning that from a maintainability perspective, regions do not make your 800 or 8000 line classes any less awful and smelly than they would be in a language without regions.


The Death of the Obligatory Comment

The Ben Franklin Effect

Have you ever started looking for something, let’s say your wallet, and been fairly certain that it wasn’t in your house? You started looking in all of the most likely places and progressed with increasing glumness to less and less likely landing spots. Perhaps you eventually wound up in the crawl space that you hadn’t entered since six months prior to losing your wallet. If you’ve had an experience like that, do you recall a point at which you thought to yourself, “this is completely pointless” and yet you kept doing it anyway?

Even if you can’t recall an experience like this, I imagine that you’ve seen similar stories unfold in countless other places. In politics and government, for example, can you count the number of times some matter of policy has proven to be an obvious mistake and yet kept going anyway, like an unstoppable juggernaut, with resolute phrases like “stay the course” uttered as sober non sequiturs? In the end it all seems silly; it seems to be a way to say “we know this is a mistake and we’re going to keep doing it anyway.”

I believe that this mystifying behavior has two main sources of persistence as a mainstay in the human condition. The first is that without careful consideration, we tend toward logical fallacy and this is an example of the fallacy “appeal to consequences” (aka “wishful thinking”). In other words, “I’m going to look for my wallet in the crawlspace because I believe it’s in there since it would be a real bummer if it weren’t.” We tend to double down on our mistakes because we really wish we hadn’t made them.

The second source of this is a truly fascinating study of our motivations as humans for our actions and interactions. A blog I really like called “You Are Not So Smart” defines this as “the Ben Franklin Effect.” I highly recommend a start-to-finish read of this post, and will warn you here that my next paragraph will be a spoiler, so go read it, please.

Oh, hi there. You’re back? Cool! As you now know, the premise of the post is that Ben Franklin figured out a way to turn someone who didn’t like him (a “hater”) into an ally. He asked that person to do him a favor… and it worked. As it turns out, we tend to like people because we do them favors, rather than doing them favors because we like them. The reason for this is that we construct narratives about our motivations and preferences that rationalize our past decisions; “I must like that guy since otherwise me doing him a favor would have been stupid and since I’m obviously not stupid…” (for you rhetoric buffs, this is another logical fallacy called “begging the question”)

David McRaney puts this nicely in his post:

If you live in the Deep South you might buy a high-rise pickup and a set of truck nuts. If you live in San Francisco you might buy a Prius and a bike rack. Whatever are the easiest to obtain, loudest forms of the ideals you aspire to portray become the things you own, like bumper stickers signaling to the world you are in one group and not another. Those things then influence you to become the sort of person who owns them.

What Does this Have to Do With Code Comments?

I’m glad you asked. The answer is that for me, and I suspect for a lot of you, putting comment headers above all methods, classes, fields, etc is a doubling down on a behavior that probably makes no sense, a staying of the course, and a thing that we like doing because we did it.

When I was in college, I don’t think I ever commented my code. At that time, as I recall, we were young and naive and wore code unreadability on our sleeves as a badge of honor; anyone who could execute an entire complex looping sequence in the condition and increment statements of a for loop was a total ninja, after all. When I left school and went to the professional world, I was confronted with people who wrote various kinds of code, but one of them was a Linux kernel hacker whom I truly respected, and I noticed that he dutifully put comment blocks above all of his methods. Included in these were descriptions of what the method did, what it returned, what the arguments were, what invariants it preserved, and a journal log of edits made to it.

This was an epiphany for me. All of those guys with their non-commented, zany for loops were amateurs. Real programmers did that kind of ninja stuff and then put a nice, clear comment on top that explained to mere mortals how awesome they were. I was hooked. I wanted to be that guy and so I became a guy that wrote those kind of comments above methods and everywhere else. I didn’t do this halfway — I was the kind of guy that went the extra mile to thoroughly document everything.

Years, jobs and programming languages later, this practice persisted, long outlasting my opinion that it was a badge of honor to cram multiple execution statements into loop iterators and furrow people’s brows with inscrutable pointer arithmetic (having gravitated toward code that was clear, readable and maintainable). A funny thing started happening to me, though. I started reading blog posts like this one, where people said things like:

What makes comments so smelly in general? In some ways, they represent a violation of the DRY (Don’t Repeat Yourself) principle, espoused by the Pragmatic Programmers. You’ve written the code, now you have to write about the code.

First, where are comments indeed useful (and less smelly)? If you are writing an API, you need some level of generated documentation so that people can use your API. JavaDoc style comments do this job well because they are generated from code and have a fighting chance of staying in sync with the actual code. However, tests make much better documentation than comments. Comments always lie (maybe not now, but on a long enough timeline, all comments will become outdated). Tests can’t lie or they fail. When I’m looking at work in progress on projects, I always go to the tests first. The comments may or may not be there, but the tests define what’s now done and working.

The first few times I read or heard arguments like this, I dismissed them as the work of people too lazy to write comments. As I started reading more such posts… I simply stopped thinking about them. It was a silly argument — after all, I had spent countless hours throughout my career writing comments and I wouldn’t spend all that time doing something that might not be a good idea. A friend of mine likes to satirize this type of reasoning by saying “if the facts don’t fit the dogma, the facts must be discarded,” and while the validity of comments in code is truly a matter of opinion rather than fact, the same logic applies to my reasoning. I discarded an argument because not discarding it would have made me feel bad about prior decisions that I had made.

Nagging Doubts and Real Conclusions

In spite of my best efforts simply to ignore arguments that I didn’t like, I was plagued by cognitive dissonance on the matter. Why would otherwise rational and obviously quite intelligent people say such ridiculous things about my beloved exhaustive commenting? Determined to reconcile these divergent beliefs, I decided one day while documenting code smells on a wiki for a former employer that they didn’t really mean that comments were a code smell in general – they must only be talking about stupid comments like putting “//this is a for loop” above a for loop. Yeah, of course. That’s what they meant.

But, it wasn’t. In spite of myself, I had read and processed the arguments. I knew that comments weren’t DRY and that they represented duplication of the logic of the code. I knew that heavily commented code tended to mask poor naming choices and unintuitive abstractions. I knew that comments, even XML/Javadoc comments for public-facing APIs tended to start out helpful and accurate and then to rot over time as different people took over maintenance and didn’t think to change them along with the code. Heck, I’d experienced that one firsthand with inaccuracies and inconsistencies piling up as surely as they do in a non-normalized database.

So, eventually I was forced to see the light and admit that it was possible and even probable that my efforts had been a complete waste of time over the years. Sound harsh? Well, here’s the thing. How many of those comments that I wrote are still floating around in some code base somewhere, not deleted? And of that percentage, how many are accurate? I’m guessing the percentage is tiny — the comments are dust in the wind. And before they blew away, what are the odds that anyone cared enough to read them? What are the odds that anyone cares that “ebd changed a char* to a int* on 4/15/04”?

So, I’ve stopped viewing comments as obligatory. I write them now with a sort of “when in Rome” sentiment if others do it on the code I’m working on. After all, I have plenty of practice with it. But my real preference, in light of my focus on abstractions, is now to have the attitude that I am forbidden from writing comments. If I want to make my code usable and understandable, I have to do it with tight abstractions, self-documenting code and making bad decisions impossible. Is that a lofty goal? Sure. But I think it’s a good one. I’ve gone from viewing comments as obligatory to viewing them as indicative of small design failures, and I’m content with that. I think that all along part of me viewed commenting my methods as redundant and I’ve finally rid myself of the cognitive dissonance.
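As a small, contrived illustration of what I mean (the names here are invented for the example), the goal is for the second version to need no comment at all:

//Before: the comment carries the meaning, and nothing stops it from rotting
//check if the customer is eligible for a discount
if (customer.Orders.Count > 10 && customer.YearsActive >= 2)
{
    ApplyDiscount(customer);
}

//After: the method name carries the meaning and travels with the code when the code changes
if (IsEligibleForLoyaltyDiscount(customer))
{
    ApplyDiscount(customer);
}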


Why the Statement “I Don’t Source Control My Unit Tests” Makes Me Happy

An Extraordinary Claim

I was talking to a friend the other day, and he relayed a conversation that he’d had with a co-worker. They were discussing writing unit tests and the co-worker claimed that he writes unit tests regularly, but discards them and never checks them into source control. I was incredulous at this, so I asked what the rationale was, to which my friend cited his co-worker’s claims (paraphrased) that “[unit tests] are fragile, they take too long to maintain, they prohibit adapting to new requirements, errors are caused by user change requests not incorrect code, and they’re hard to understand.” (emphasis mine, and I’ll return to this later).

This struck me as so bizarre — so preposterous — that I immediately assumed I was out of my league and had missed an industry sea-change. I remember a blog post by Scott Hanselman entitled “I’m A Phony. Are You?” and I felt exactly what he meant. Clearly, I’d been exposed. I was not only wrong about unit tests, but I was 180 degrees, polar opposite, comically wrong. I had wasted all of this time and effort with TDD, unit test maintenance, unit tests in the build, etc when what I really should have been doing was writing these things once and then tossing them since their benefit is outweighed by their cost.

Given that I don’t have direct access to this co-worker, I took to the internet to see if I could locate the landmark post or paper that had turned the tide on this subject and no doubt a whole host of other posts and papers following suit. So, I googled “Don’t Check In Unit Tests”. Your mileage may vary because I was logged into gmail, but I saw old posts by Steve Sanderson and Jeff Atwood encouraging people to write tests, a few snarky posts about not testing, and other sorts of things you might expect. I had missed it. I tried “Don’t version control unit tests” and saw similar results. A few variants on the same and similar results too. Apparently, most of the internet was still thinking inside of the box that the co-worker had escaped with his bold new vision.

Finally, I googled “Should I source control unit tests?” and found one link from programmer’s stack exchange and another from stackoverflow. In a quick and admittedly very rough count of people who had answered and/or voted one way or the other, the vote appeared to be about 200 to 0 for source controlling versus leaving out, respectively. The answer that most succinctly seemed to summarize the unanimous consensus was Greg Whitfield’s, which started with:

Indeed yes. How could anyone ever think otherwise?

Hmmm…

There’s Really No Argument

With my confidence somewhat restored, I started thinking of all the reasons that source controlling unit tests makes sense:

  1. Provides living ‘documentation’ of the class author’s intentions (see the sketch after this list).
  2. Prevents inadvertent regressions during changes.
  3. Allows fearless refactoring of your code and anyone else’s.
  4. Incorporation into the build for metric and verification purposes.
  5. Testing state at the time of tagged/branched versions can be re-created.
  6. (As with source controlling anything) Hard drive or other hardware failure does not result in lost work.
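To illustrate the first point, here’s a quick, hypothetical NUnit-style test; the Account class and its members are invented purely for the example. Unlike a comment, it states the author’s intent in a form that fails the build the moment it stops being true:

using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    [Test]
    public void Withdraw_Reduces_Balance_By_Requested_Amount()
    {
        var account = new Account(initialBalance: 100);   //Account is a hypothetical class under test

        account.Withdraw(25);

        Assert.AreEqual(75, account.Balance);
    }
}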

I could probably go on, but I’m not going to bother. Why? Because I don’t think that the stated rationale is actually the rationale for not checking in unit tests. I think it’s more likely that this person who “writes tests but doesn’t actually check them in” might have had a Canadian girlfriend as a teenager. In other words, I sort of suspect that these “disposable” unit tests, whose existence can neither be confirmed nor denied, may not actually exist, and so the rationale for not checking them in becomes, well, irrelevant.

And, look at the rationale. The first three clauses (fragile, too long to maintain, prohibit new work) seem to be true only for someone who writes really bad unit tests (i.e. someone without much practice who may, I dunno, be discouraged during early attempts). Because while it’s true that unit tests, like any other code, constitute a maintenance liability, they need not be even remotely prohibitive, especially in a nicely decoupled code base. The fourth clause (errors are everyone’s fault but the programmer) is prima facie absurd, and the fifth and most interesting clause seems to be an unwitting admission of the problem – a difficulty understanding how to unit test. This is probably the most important reason for “not checking in” or, more commonly and accurately, not writing unit tests at all — they’re hard when you don’t know how to write them and haven’t practiced it.

So Why the Happy?

You’re probably now wondering about the post title and why any of this would make me happy. It’s simple, really. I’m happy that people feel the need to make excuses (or phantom unit tests) for why they don’t automate verification of their work. This indicates real progress toward what Uncle Bob Martin describes in a talk that he does called “Demanding Software Professionalism” and alludes to in this post. In his talk, Bob suggests that software development is a nascent field compared to others like medicine and accounting and that we’re only just starting to define what it means to be a software professional. He proposes automated verification in the form of TDD as the equivalent of hand-washing or double-entry bookkeeping, respectively, in these fields and thinks that someday we’ll look back on not practicing TDD and clean coding the way we now look back on surgeons who refused to wash their hands prior to surgery.

But having a good idea and even demonstrating that it works isn’t enough. Just ask Ignaz Semmelweis, who initially discovered, and empirically demonstrated, that hand-washing reduced surgical mortality rates, only to be ridiculed and dismissed by his peers in the face of cold, hard evidence. It wasn’t until later, after Semmelweis had been committed to an insane asylum and died, that his observations and hypothesis got more backers (Lister, Pasteur, et al.) and a better marketing campaign (an actual theoretical framework of explanation called “Germ Theory”). In Semmelweis’s time, a surgeon could just call him a crank and refuse to wash his hands before surgery. Decades later, he would have to say, “dude, no, I totally already washed them when you weren’t looking” if he was feeling lazy. You can even ask his Canadian girlfriend.

At the end of the day, I’m happy because the marketing for TDD and clean coding practice must be gaining traction and acceptance if people feel as though they have to make excuses for doing it. I try to be a glass half full kind of guy, and I think that’s the glass half full outlook. I mean one person not wanting to automate tests doesn’t really matter in the scheme of things at the moment since there is no shortage of people who also don’t want to, but it’s good to see people making excuses/stories for not wanting to rather than just saying “pff, waste of time.”

(And, for what it’s worth, I do acknowledge the remote possibility that someone actually does write and discard unit tests on a regular and rigorous basis. I just don’t think it’s particularly likely.)


Building Weird Light Switches – Out and Ref

Gut Reaction

I was talking with someone about possible approaches to an API the other day. He asked me if I’d favor a method that took a parameter and had a return value or if I’d prefer a method that was void and took two parameters but with one as a ref parameter. My immediate, knee-jerk response was “I don’t like ref parameters one bit, so I’d prefer the former.” He looked at me with a bit of surprise and then kind of shrugged as if to say, “whatever, weirdo,” and went with the former. To him it was six of one, half a dozen of the other.

This made me wonder whether I’m being dogmatic and rigid in the way I approach software, so I spent some time thinking and reading about ref parameters and their cousin, the out parameter. For those of you not acquainted with C#, both of these are ways of passing parameters into methods with the main difference being whether the called method must change the value passed in (out) or whether it can optionally do so (ref). Another way of thinking of this is that you would use out when the initial value of the parameter does not matter and ref when it does.

Here is what the syntax looks like:

int myValue;
//Calling AddOneToValue(ref myValue) here wouldn't compile: myValue isn't definitely assigned yet

SetValueToTwelve(out myValue);  //out: initial value is irrelevant; the method must assign one
Console.WriteLine(myValue);

AddOneToValue(ref myValue);     //ref: initial value matters; the method may read and change it
Console.WriteLine(myValue);

In this case, you will see printed 12 and then 13 on the next line, assuming the methods do what they say they will. (This gets even screwier if you’re a Java programmer, since Java has no equivalent of ref/out; parameters are always passed by value, even when that value is an object reference, so you’d need to simulate it with a wrapper class, a single-element int[], or something similar.)
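For completeness, here’s what minimal implementations of those two methods might look like; these are just plausible sketches consistent with the calls above, not code from any real library:

static void SetValueToTwelve(out int value)
{
    //out: the caller's initial value is ignored, and this method must assign one before returning
    value = 12;
}

static void AddOneToValue(ref int value)
{
    //ref: this method can read the value that came in and optionally change it
    value = value + 1;
}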

What Does the Internet Think?

Usually when I react strongly and then want to justify my reaction, I go see what others have to say, preferably those I respect. The estimable Jon Skeet has this to say:

Basically out parameters are usually a sign that you want to effectively return two results from a method. That’s usually a code smell – but there are some cases (most notably with the TryXXX pattern) where you genuinely want to return two pieces of information for good reasons and it doesn’t make much sense to encapsulate them together.

In other words, avoid out/ref where you can do so easily, but don’t go massively out of your way to avoid them.

In the answer below his, Ryan Lanciaux raises an interesting point when he says that “[out/ref parameters] basically add side-effects to your code and could be a nightmare when it comes to debugging.”

So, two take-aways here are that having a method return two distinct results is a code smell and that method side effects tend to be a problem. On the flip side of the argument is mainly a pragmatic, duct-tape programming kind of case that sometimes the purist approach just isn’t worth the effort and potential awkwardness. The iconic example of using out parameters is the T.TryParse(string, out t) methods in C# (which I really don’t like, but I’m trying to suspend my bias for the sake of investigation).
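For reference, here’s the pattern in question with int.TryParse: the return value reports success or failure, and the parsed result comes back through the out parameter:

string input = "42";
int result;
if (int.TryParse(input, out result))
{
    Console.WriteLine("Parsed: " + result);       //prints "Parsed: 42"
}
else
{
    Console.WriteLine("Not a number: " + input);
}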

Next up, here’s what, well, MSDN has to say in explanation of a static analysis design warning that they raise entitled “Avoid out parameters”:

Passing types by reference (using out or ref) requires experience with pointers, understanding how value types and reference types differ, and handling methods with multiple return values. Also, the difference between out and ref parameters is not widely understood.

Although return values are commonplace and heavily used, the correct application of out and ref parameters requires intermediate design and coding skills. Library architects who design for a general audience should not expect users to master working with out or ref parameters.

There’s a certain irony to this, but I definitely understand the point. The irony is that the same outfit that put these features into the language raises a warning telling you not to use them. Don’t get me wrong — I understand that this is akin to making computers that can be taken apart while warning users not to do so, since many of them aren’t really qualified — but the irony amuses me nonetheless. It’s also interesting that MSDN seems to think that pointers and reference vs value are “intermediate” language features. Perhaps the fact that I cut my teeth on C and C++ as a wee programmer is showing, but… seriously? Intermediate?

At any rate, the consensus on the subject that I’ve seen at these and a variety of other blogs and stack overflow posts seems to be that out/ref parameters are generally to be avoided… except when they’re sort of unavoidable, either because of interop concerns or because you really want (need?) a function that returns two or more things.

Do One Thing

But isn’t a function that does two things a violation of the Single Responsibility Principle of SOLID fame, applied at the method level? And aren’t out/ref parameters, pretty much by definition, side effects that constitute violations of Command-Query Separation, a paradigm in which methods that mutate state (commands) are separated from methods that retrieve information (queries) and ne’er the twain shall meet? I mean, any method that ‘returns’ two values is, well, doing at least two things and any method that mutates object state and kicks back a ref/out parameter is serving as command and query.

But what about methods like the obtuse ones above, SetValueToTwelve() and AddOneToValue()? Those are void methods that mutate only the out/ref parameter. They could be made static and rewritten as int Return12() and int AddOneToValue(int value) without altering their purpose or effect. So, they’re not really violating SRP or CQS, right? They’re just slightly more awkward versions of familiar APIs.
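In code, that rewrite is about as small as refactorings get (these methods are just the running example from above, not any real API):

//Awkward versions: 'returning' through the parameter
static void SetValueToTwelve(out int value) { value = 12; }
static void AddOneToValue(ref int value) { value = value + 1; }

//Familiar versions: same purpose and effect, expressed as return values
static int Return12() { return 12; }
static int AddOneToValue(int value) { return value + 1; }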

But that really hits home with me. Why do I want something that’s either slightly more awkward or a violation of some very helpful clean coding practices? I mean, we’re not really shooting for the moon there, are we, if something is at best somewhat awkward and at worst a code smell? Why do it at all? Methods should pick one thing and do it (or return it) and should do it as non-awkwardly as possible.

What If Light Switches Worked This Way?

I like to think of our task as programmers in terms of abstractions (in case you hadn’t caught my series on abstractions, feel free to click that tag and see me harp on it in a lot of posts). One easy abstraction for me to relate to the world of programming is turning a light switch on and off. This is a classic case of a class Light that exposes a command SetOnValue(bool) and exposes a readonly property, bool IsOn. So, as in the real world, I move the switch up or down and then separately observe the results. (Let’s ignore for argument’s sake that it might be better to model “Switch” and “Light” as separate entities.)
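A minimal sketch of that class, as described, might look like this:

public class Light
{
    public bool IsOn { get; private set; }   //query: observe the state, change nothing

    public void SetOnValue(bool isOn)        //command: change the state, return nothing
    {
        IsOn = isOn;
    }
}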

This is a great example of Command-Query Separation. Toggling the light on or off is a command, and looking at the light to see whether or not it’s on is a query. But, let’s blur the lines for a moment and rewrite this so that there is no readonly “IsOn” property. Instead, SetOnValue(value) will return a boolean indicating whether the light is on or not. So now, we have a switch that also acts as the thing that tells us whether or not it’s on — our wall switch also just became a light. Now, when you toggle the switch, the switch itself glows to give you feedback. Weird.
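In sketch form, the blurred version rolls the command and the query into a single member:

public class Light
{
    private bool _isOn;

    //command and query in one: the switch now also reports whether the light is on
    public bool SetOnValue(bool isOn)
    {
        _isOn = isOn;
        return _isOn;
    }
}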

But, it gets weirder. Instead of having our SetOnValue() function return a bool, let’s feed it a ref parameter. On the way in, we’ll indicate the value we want, and on the way out, it will indicate the value that we’re going to get. In terms of modeling the real world, ref parameters are kind of mind-blowing. You hand some external thing a piece of yourself and it alters that for you. So now, we have a light switch that I flip on with my hand, and indicates the success of that operation by modifying my hand – let’s say burning it. So, I flip the switch on, and if it works, the blazing hot bulb in the switch burns my hand (but gives off no light, since the burn is how I know it’s working). So, there you have it – a strange path from a light switch that turns on lights to one that simply injures me.

I realize that this metaphor is a touch strained, but here’s the thing: ref and out parameters are weird and counter-intuitive. It’s hard for them not to strain a metaphor that they’re involved in. Anything I’m handed in life could be represented conceptually as a thing or a collection of things. Any action I take in life could be represented as a void method with 0 or more parameters. But what is a ref parameter? Where in life do I take something, set it to a way I want it, give it to something, and then have it given back to me differently? Maybe as part of an assembly line or in some weird, Rube-Goldberg kind of process, but this is hardly how normal interactions are modeled.

Ref and out are leaky abstractions in terms of code readability. They reek of code. Code involving these things doesn’t read like simple, well written prose or a nice story — it reads like a procedural construct such as the withholding worksheet for your paycheck deductions. So, like dealing with the IRS, why don’t we avoid that if we can? And, I think you’d be pretty hard-pressed to argue that you can’t.