DaedTech

Stories about Software


What To Return: IEnumerable or IList?

I’ve received a couple of requests in various media to talk about this subject, with the general theme being “I want to return a bunch of things, so what type of bunch should I use?” I’m using the term “bunch” in sort of a folksy, tongue-in-cheek way, but also for a reason relating to precision — I can’t call it a list, collection or group without evoking specific connotations of what I’d be returning in the C# world (as those things are all type names or closely describe type names).

So, I’m using “bunch” to indicate that you want to return a “possibly-more-than-one.”

I suspect that the impetus for this question arises from something like a curt code review or offhand comment from some developer along the lines of “you should never return a list when you could return an IEnumerable.” The advice lacks nuance for whatever reason and, really, life is full of nuance.

So when and where should you use what? Well, the stock consultant answer of “it depends” makes a good bit of sense. You’ll also probably get all kinds of different advice from different people, but I’ll describe how I decide and explain my reasoning.

First Of All, What Are These Things?

Before we go any further, it probably makes sense to describe quickly what each of these possible return values is.

IList is probably the simpler of the two to describe. It’s a collection (a term I can use precisely here because IList inherits from ICollection) of objects that can be accessed via indexers, iterated over, and (usually) rearranged. Some implementations of IList are read-only, others are fixed-size, and others are variable-size. The most common implementation, List, is, for the sake of quick, easy understanding, basically a dynamic array.

I’ve blogged about IEnumerable in the past and talked about how this is really a unique concept. Tl;dr version is that IEnumerable is not actually a collection at all (and it does not inherit from ICollection), but rather a combination of an algorithm and a promise.

If I return an IEnumerable to you, what I’m really saying is “here’s something that when you ask it for the next element, it will figure out how to get it and then give you the element until you stop asking or there are none left.” In a lot of cases, something with return type IEnumerable will just be a list under the hood, in which case the “strategy” is just to give you the next thing in the list.

But in some cases, the IEnumerable will be some kind of lazy loading scheme where each iteration calls a web service, hits a database, or for some reason invokes a 45 second Thread.Sleep. IList is (probably) a data structure; IEnumerable is an algorithm.
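To make that concrete, here’s a minimal sketch (the method and its slow lookup are invented for illustration) of an IEnumerable whose elements don’t exist until a client asks for them:

```csharp
using System;
using System.Collections.Generic;

public static class NumberSource
{
    // Calling this method executes none of the loop body; the work
    // happens only as a client iterates the returned sequence.
    public static IEnumerable<int> GetNumbers()
    {
        for (int i = 0; i < 3; i++)
        {
            // Imagine a web service call, a database hit, or that
            // 45 second Thread.Sleep happening here.
            yield return ExpensiveLookup(i);
        }
    }

    private static int ExpensiveLookup(int i) => i * 10; // stand-in for slow work
}
```

A foreach over GetNumbers() triggers ExpensiveLookup once per iteration; if the caller breaks out early, the remaining lookups never run at all.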

Since they’re different, there are cases when one or the other clearly makes sense.

When You’d Clearly Use IEnumerable

Given what I’ve said, IEnumerable (or perhaps IQueryable) is going to be your choice when you want deferred execution (you could theoretically implement IList in a way that provided deferred execution, but in my experience, this would violate the “principle of least surprise” for people working with your code and would be ill-suited since you have to implement the “Count” property).

If you’re using Entity Framework or some other database loading scheme, and you want to leave it up to the code calling yours when the query gets executed, return IEnumerable. In this fashion, when a client calls the method you’re writing, you can return IEnumerable, build them a query (say with Linq), and say “here, you can have this immediately with incredible performance, and it’s up to you when you actually want to execute this thing and start hammering away at the database with retrieval tasks that may take milliseconds or seconds.”
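As a sketch of that idea (the Customer type and the query source here are hypothetical, not from any real schema), such a method might look like this:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string FirstName { get; set; }
}

public class CustomerReader
{
    private readonly IQueryable<Customer> _customers; // e.g. an EF DbSet

    public CustomerReader(IQueryable<Customer> customers)
    {
        _customers = customers;
    }

    // Returns immediately; the database isn't touched until the caller
    // iterates the result (or forces it with ToList, Count, etc.).
    public IEnumerable<Customer> GetCustomersWithFirstName(string firstName)
    {
        return _customers.Where(c => c.FirstName == firstName);
    }
}
```

The method hands back a query definition, not query results, so the caller decides when (and whether) the expensive retrieval actually happens.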

Another time that you would clearly want IEnumerable is when you want to tell clients of your method, “hey, this is not a data structure you can modify — you can only peek at what’s there. If you want your own thing to modify, make your own by slapping what we give you in a list.”

To be less colloquial, you can return IEnumerable when you want to make it clear to consumers of your method that they cannot modify the original source of information. It’s important to understand that if you’re going to advertise this, you should probably exercise care in how the thing you’re returning will behave. What I mean is, don’t return IEnumerable and then give your clients something where they can modify the internal aggregation of the data (meaning, if you return IEnumerable don’t let them reorder their copy of it and have that action also reorder it in the place you’re storing it).
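A minimal sketch of that distinction (the class and its backing list are hypothetical): the first method hands out the internal list itself, which a determined caller could cast right back to List and mutate; the second iterates lazily, so the caller only ever sees the elements.

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderHistory
{
    private readonly List<string> _orders = new List<string> { "A-1", "A-2" };

    // Risky: a caller can cast this back to List<string> and reorder
    // or clear our internal state out from under us.
    public IEnumerable<string> GetOrdersLeaky() => _orders;

    // Safer: callers get the elements, but any list they build from this
    // (e.g. with ToList()) is their own copy to rearrange as they please.
    public IEnumerable<string> GetOrders() => _orders.Select(o => o);
}
```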

When You’d Clearly Use IList

By contrast, there are times when IList makes sense, and those are probably easier to understand.

If, for instance, your clients want a concrete, tangible, and (generally) modifiable list of items, IList makes sense.

  • If you want to return something with an ordering that matters and give them the ability to change that ordering, then give them a list.
  • If they want to be able to walk the items from front to back and back to front, give them a list.
  • Or, if they want to be able to look up items by their position, give them a list.
  • And if they want to be able to add or remove items, give them a list. In general, if clients need random access, you want to provide a list.
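The cases above might look like this in client code, assuming a hypothetical method that hands back an IList of song titles:

```csharp
using System;
using System.Collections.Generic;

public static class PlaylistDemo
{
    public static void Demo(IList<string> songs)
    {
        songs.Add("Closing Credits");       // add and remove items
        songs.Insert(0, "Opening Theme");   // control the ordering
        string first = songs[0];            // look items up by position
        Console.WriteLine(first);

        // walk the items from back to front
        for (int i = songs.Count - 1; i >= 0; i--)
        {
            Console.WriteLine(songs[i]);
        }
    }
}
```

None of this is possible against a bare IEnumerable without first materializing a copy, which is exactly why a list is the right promise here.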

Clearly, it’s a data structure you can wrap your head around easily — certainly more so than IEnumerable.

Good Polymorphic Practice

With the low hanging fruit out of the way, let’s dive into grayer areas. A rule of thumb that has served me well in OOP is “accept as generic as possible, return as specific as possible.” This is being as cooperative with client code as possible.

Imagine if I write a method called “ScareBurglar()” that takes an Animal as an argument and invokes the Animal’s “MakeNoise()” method. Now, imagine that instead of taking Animal as the parameter, ScareBurglar took Dog and invoked Dog.MakeNoise(). That works, I suppose, but what if I had a guard-bear? I think the bear could make some pretty scary noises, but I’ve pigeonholed my clients by being too specific in what I accept.

If MakeNoise() is a method on the base class, accept the base class so you can serve as many clients as possible.
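Here’s that guard-bear scenario sketched out (the noises are obviously invented); accepting the base class means ScareBurglar serves every present and future Animal:

```csharp
using System;

public abstract class Animal
{
    public abstract string MakeNoise();
}

public class Dog : Animal
{
    public override string MakeNoise() => "Woof!";
}

public class Bear : Animal
{
    public override string MakeNoise() => "ROARRR!";
}

public static class Security
{
    // Accepts any Animal, so dogs, bears, and whatever comes next all work.
    public static void ScareBurglar(Animal animal)
    {
        Console.WriteLine(animal.MakeNoise());
    }
}
```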

On the flip side, it’s good to return very specific types for similar kinds of reasoning. If I have a “GetDog()” method that instantiates and returns a Dog, why pretend that it’s a general Animal? I mean, it’s always going to be a Dog anyway, so why force my clients that are interested in Dog to take an Animal and cast it?

I’ve blogged previously about what I think of casting. Be specific. If your clients want it to be an animal, they can just declare the variable to which they’re assigning the return value as Animal.

So, with this rule of thumb in mind, it would suggest that returning lists is a good idea when you’re definitely going to return a list. If your implementation instantiates a list and returns that list, with no possibility of it being anything else, then you might want to return a list. Well, unless…

Understanding the Significance of Interfaces

A counter-consideration here is “am I programming to an interface or to a simple concrete type?” Why does this matter?

Well, it can push back on what I mentioned in the last section. If I’m programming a class called “RandomNumberProvider” with a method “GetMeABunchOfNumbers()” that creates a list, adds a bunch of random numbers to it, and returns that list, then I should probably return List<int>.

But what if I’m designing an interface called IProvideNumbers? Now there is no concrete implementation — no knowledge that what I’m returning is going to be implemented as List everywhere. I’m defining an abstraction, so perhaps I want to leave my options open. Sure, RandomNumberProvider, which implements the interface, only uses a list. But how do I know I won’t later want a second implementation called “DeferredExecutionNumberProvider” that only pops numbers as they’re iterated by clients?
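Sketched out (with the implementation details invented for illustration), the interface’s loose promise is exactly what makes the second implementation possible:

```csharp
using System;
using System.Collections.Generic;

public interface IProvideNumbers
{
    // Promising only IEnumerable leaves room for eager and lazy implementers.
    IEnumerable<int> GetMeABunchOfNumbers();
}

public class RandomNumberProvider : IProvideNumbers
{
    private readonly Random _random = new Random();

    public IEnumerable<int> GetMeABunchOfNumbers()
    {
        var numbers = new List<int>();
        for (int i = 0; i < 10; i++)
            numbers.Add(_random.Next());
        return numbers; // eager: the whole list exists before returning
    }
}

public class DeferredExecutionNumberProvider : IProvideNumbers
{
    public IEnumerable<int> GetMeABunchOfNumbers()
    {
        for (int i = 0; i < 10; i++)
            yield return i; // lazy: numbers pop only as clients iterate
    }
}
```

Had the interface promised IList, the deferred implementation would have been impossible without violating the spirit of the contract.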

As a TDD practitioner, I find myself programming to interfaces. A lot. And so, I often find myself thinking, what are the postconditions and abilities I want to guarantee to clients across the board?

This isn’t necessarily, itself, a by-product of TDD, but of programming to interfaces. And, with programming to interfaces, specifics can bite you at times. Interfaces are meant to allow flexibility and future-proofing, so getting really detailed in what you supply can tie your hands. If I promise only an IEnumerable, I can later define implementers that do all sorts of interesting things, but if I promise an IList, a lot of that flexibility (such as deferred execution schemes) goes out the window.

The Client’s Burden

An interesting way to evaluate some of these tradeoffs is to contemplate what your client’s pain points might be if you guess wrong.

Let’s say we go with IEnumerable as a return type but the client really just wants an IList (or even just List). How bad is the client’s burden? Well, if the client only wants to access the objects, it can just awkwardly append .ToList() to the end of each call to the method and have exactly what it wants. If the client wants to modify the state of the grouping (e.g. put the items in a different order and have you cooperate), it’s pretty hosed and can’t really use your services. However, that latter case is addressed by my “when a list is a no brainer” section — if your clients want to do that, you need to not give them an IEnumerable.
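The awkward-but-workable case really is a one-liner for the client (assuming some hypothetical provider method that returns IEnumerable<int>):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ClientCode
{
    public static List<int> GetNumbersAsList(IEnumerable<int> numbers)
    {
        // One awkward .ToList() and the client has a concrete,
        // modifiable list of its very own to sort, index, and mutate.
        return numbers.ToList();
    }
}
```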

What about the flip side? If the client really wants an IEnumerable and you give them a list? Most likely they want IEnumerable for deferred execution purposes, and you will fail at that. There may be other reasons I’m not thinking of off the top of my head, but it seems that erring when the client wants an enumerable is kind of a deal-breaker for your code being useful.

Ugh, so what should I do?!?

Clear as mud?

Well, the problem is, it’s a complicated subject, and I can only offer you my opinion by way of heuristics (unless you want to send me code or gists, in which case I can offer concrete opinions, and I’m actually happy to do that).

At the broadest level, you should ask yourself what your client is going to be doing with the thing that you return and try to accommodate that. At the next broadest level, you should think to yourself, “do I want to provide the client a feature-rich experience at the cost of later flexibility or do I want to provide the client a more sparse set of behavior guarantees so that I can control more implementation details?”

It also pays to think of the things you’re returning in terms of what they should do (or have done to them), rather than what they are. This is the line of thinking that gets you to ask questions like “will clients need to perform random accesses or sorts,” but it lets you go beyond simple heuristics when engaged in design and really get to the heart of things. Think of what needs to be done, and then go looking for the data type that represents the smallest superset of those things (or, write your own, if nothing seems to fit).

I’ll leave off with what I’ve noticed myself doing in my own code. More often than not, when I’m communicating between application layers, I tend to use a lot of interfaces and deal a lot in IEnumerable. When I’m implementing code within a layer, particularly the GUI/presentation layer, in which ordering is often important, I favor collections and lists. This is especially true if there is no interface seam between the collaborating components. In these scenarios, I’m more inclined to follow the “return the most specific thing possible” heuristic than the “be flexible in an interface” heuristic.

Another thing that I do is try to minimize the number of collections that I pass around an application. The most common use case for passing around bunches of things is collections of data transfer objects, returned from some method like “GetCustomersWithFirstName(string firstName).” Clearly that’s going to return a bunch of things. But in other places, I try to make aggregation an internal implementation detail of a class. Command-Query Separation helps with this. If I can, I don’t ask you for a collection, do things to it and hand it back. Instead I say “do this to your collection.”

And finally, when in doubt and all else seems to be a toss-up, I tend to favor promising the least (thus favoring future flexibility). So if I really can’t make a compelling case one way or the other for any reason, I’ll just say “you’re getting an IEnumerable because that makes maintenance programming likely to be less painful later.”



I Give Up: Extroverted Barbarians at the Gates

Someone sent me a link to the video shown after this paragraph the other day, and I watched it. I then tweeted the link and sent it to a few of my coworkers because I figured it would make people laugh. It’s really funny, so give it a watch. But weirdly, I didn’t laugh. I watched it over and over again, mesmerized. I recognize that it’s funny and I find it funny, but I didn’t laugh.

This video is really a work of genius because it captures some incredible subtleties. There are two common archetypes captured nicely here in the form of the protagonist’s supposed allies: his boss and the project manager. I’ll give them names in their own sections below, along with the client characters. And then there are conversational tactics that bear mentioning.

This all revolves around a protagonist with whom any introverted person can identify. There’s nothing to indicate, per se, whether he’s introverted or extroverted, but the precision, the mannerisms, the posture — all of these scream “programmer” (or at least “engineer”) and so goes the association with introversion. The protagonist is the sole bulwark of sanity against a flood of idiocy, misunderstanding and general incompetence. You probably relate to him, having attended a meeting where all of the gathered C-levels and analysts thought you were being an obstructionist malingerer because you wouldn’t install Angry Birds on the meeting room’s television.

So who are the players?

Chamberlain

In a way, I liken the smarmy project manager, Walter, to former British prime minister, Neville Chamberlain, most remembered for his foreign policy of appeasement leading up to World War II in which he sought to dampen the aggression coming from the Axis powers by essentially “befriending” them. In this particular video, Chamberlain, the project manager, is presumably along to bridge the gap between the non-subject-matter-expert customers and the total-subject-matter-expert protagonist (and whose expertise makes the video eponymous). That’s not really why he’s there (though he doesn’t realize this), and I’ll get into that later as I’m describing tactics.

Chamberlain perceives that his best interests are served by simply agreeing to whatever is happening on the other side of the aisle, improvident though this may be. On some level, he’s probably aware that this strategy is stupid, but, hey, that’s a problem for later. He thinks his boss will skewer him if they don’t get the contract, so the fact that it’s going to be hard or impossible to deliver (what Expert is trying to tell him) just means he’ll later throw someone (i.e., Expert) under the bus.

Dilettante

The “design specialist,” Justine, is a mildly interesting character. She generally looks at Expert with some degree of respect and looks slightly uncomfortable when the rest of the characters make fun of Expert. At one point, to Expert’s delight, she even understands his point, and she visits him after the meeting out of genuine interest in the project and what is probably a “one pro to another” kind of overture. She’s the only character in the room that sees any value in Expert, and she probably recognizes that his subject matter knowledge exceeds hers and has value. If it were just her and Expert, she would probably listen attentively. I call her Dilettante because she seems to be the type of person you encounter with a bit of knowledge in a variety of fields and a genuine interest in improving.

Buffoon

The client’s boss is a classic MacLeod Clueless, and a simple idiot that isn’t very interesting. She’s the classic archetype of an over-promoted middle manager whose value is clearly wrapped up in company tenure. She spouts nonsensical jargon, torpedoes her firm’s own interests throughout the meeting, and serves up her position and her firm’s money for easy pickings by any MacLeod sociopath that happens along. She’s demanding something that she doesn’t understand in a way that makes no sense, and she’s willing to pay any huckster that comes along and sells her the magic beans she’s seeking.

Sociopath

Big Boss Man, to whom Chamberlain reports, is a classic MacLeod Sociopath. He likely has a fairly good handle on the situation and is of the opinion that the clients are idiots, but he has an intuitive understanding of the politics of the situation. Expert is flummoxed by the stupidity of the client proposal, and Chamberlain is simpering in an effort to show his boss his value as a diplomat, believing that the customer is always right and believing that Sociopath also believes that. Sociopath doesn’t. He knows the clients are idiots, and that Chamberlain is also kind of an idiot (for evidence of this, look at his expression at 6:14 where he clearly thinks the discussion of cats and birds as lines is dumb and simply ignores the client).

This doesn’t result in him rushing in to defend Expert, however. That’s counter to his best interests, which I’ll address as a tactic, but he also finds Expert somewhat distasteful. Sociopath has navigated his way ably to money and power and a position atop the corporate hierarchy, but it is probably a slight annoyance to him that he may not be the smartest guy in the room. He knows that in Expert’s area of expertise, he’s nowhere near Expert, and while that’s fine, his inability to compare their relative intellectual worth across subject areas is a source of irritation.

Tactical Gamesmanship

So, the ostensible point of this meeting and no doubt many in which you’ve sat is to define the parameters for a project and then successfully launch that project. But, if you were to read the subconscious goals of the players, they would go something like this:

  • Chamberlain: I want to get the client to sign off no matter what, and I want Sociopath to think it was my heroics that made this happen.
  • Buffoon: I want to order people around and show off.
  • Sociopath: I want this to be over quickly so I don’t have to listen to Buffoon and Chamberlain.
  • Dilettante: I want to learn on the job without it being apparent that I’m doing so.
  • Expert: I want to define parameters for this project and successfully launch it.

Sociopath knows that getting Buffoon to agree to the project is a veritable certainty going into the meeting, and he knows that Chamberlain’s presence is valuable, but not for the reasons that Chamberlain thinks. Chamberlain thinks he’s there because he’s a “straight shooter/smooth talker” that “speaks Expert” but Sociopath just wants him there because he understands how to butter Buffoon’s bread — by causing Buffoon to think she’s won an exchange and humbled an Expert. He’s there because Sociopath knows he’ll team up with Buffoon to laugh at Expert. Dilettante is just window dressing.

So what are the tactics by which this happens? What makes this so cathartic for engineers and programmers to watch? Well, there are a number of things occurring here.

Seizing on the only part of an explanation you understand

There’s nothing to level the playing field quite like ignoring most of what someone talking to you says and seizing on some minor point. This has two advantages for purveyors of rhetorical fallacy. First and foremost, it lets you pivot the discussion in a way that you find favorable, but secondly, it implies that your opponent has babbled on and on and over-complicated the matter (à la Reagan countering Carter — folksy and relatable countering the egghead). Near the beginning, Expert gives a detailed explanation, avoiding saying that it would be impossible to draw a red line with green ink by talking about color blindness. It’s a long-winded but technically accurate way of saying “that’s pretty much impossible,” and all Buffoon takes away from it is “so, in principle this is possible.”

Talking down to the expert because you don’t understand

When Expert asks Buffoon to clarify what she’s talking about with “transparent ink,” she patronizingly says she thought that he’d know what “transparent” means and that he’d better know what “red” means if he’s an Expert. A little later, she doesn’t understand what perpendicular means, and when Expert accidentally exposes that, she blames him for not understanding her nonsense. It’s a relatively standard approach to strike first in blaming the other party for a miscommunication, but it’s especially vitriolic in a case where the party in the driver’s seat is covering for inadequacy.

Begging the question (and perverting the role of experts)

I’ve encountered this myself, in my travels, and it’s certainly on display here. People assume (from ignorance) that a certain outcome is possible/feasible, and then seek out an expert to make it happen. When the expert explains that they’re misguided or trying to do something ill-advised or impossible, they adopt the stance, “I thought you were an expert, and you’re telling me you can’t do this?” Chamberlain does this throughout the clip.

Dunning Kruger

This mostly comes from Sociopath and somewhat for show, but this is the tendency of those unskilled in a subject to assume that the subject is pretty simple and to generally devalue the knowledge of experts in that field. As more knowledge is acquired, so is respect for experts and humility. Sociopath dresses Expert down, particularly at the point where he says, “look, we’re not asking you to draw 20 lines — just 7.” Buffoon also does this once when she draws a triangle as an example of three perpendicular lines (“move — let me do it!”) Being the only Expert here and thoroughly outgunned and unaware of the real agenda, Expert is absolutely buried in an amplified echo chamber of Dunning Kruger.

Expert Introverts

These players and these tactics are painfully relatable. People in our line of work look at this ruefully and laugh because someone finally gets it and understands how silly the players seem to them. But introversion, lack of interest in office politics, and professional integrity are what hamstring us in such situations. I mean think of it this way — you cringe because you’re right there along with Expert, wanting these idiots to understand that red pens don’t draw green lines. You want to speak rationally to them and use analogies, diagrams and metaphors to make them see your point.

What you don’t do is turn the Dunning Kruger around on them and start telling them that they’re really going to need pure red lines if they want to maximize their verticals and strategize their synergies. You don’t tell them that kitten lines were so 2011. You don’t interrupt Chamberlain when he says “any fool can criticize” to say that you’re okay with the clients’ criticism and how dare he call them fools. You don’t ask Chamberlain, if his title is “project manager,” why he can’t “manage” to define a clear spec.

You don’t do any of these things. Neither do I. Neither did Expert. Instead, we all do what he did in the end, which is to say, “sure, whatever buddy, I give up.” Extroverts extemporize and thrive in situations like this fueled with BS and beyond their control (though, Sociopath, who is controlling it, may be an introvert). We find ourselves at a loss for words, and utterly demoralized. Our credentials, our competence, and the validity of our very profession called into question, we bleakly resign ourselves to the madness and go home for a beer. We do that for a while, at least, and then, eventually, we Quit with a capital Q.

Perhaps that’s why I didn’t find myself laughing while watching this. Poor Anderson, the Expert, isn’t having an experience that he’ll submit to the Daily WTF and move on — he’s figuring out that his professional life is miserable. And the reason it’s miserable is because he’s realizing that expertise, ideas and results aren’t really the backbone of good business; in the land of the extroverts, egos and social capital are king.


Why I Don’t Inherit from Collection Types

In one of the recent posts in the TDD chess series, I was asked a question that made me sort of stop and muse a little after it was asked. It had to do with whether to define a custom collection type or simply use an out of the box one and my thinking went, “well, I’d use an out of the box one unless there were some compelling reason to build my own.” This got me to thinking about something that had gone the way of the Dodo in my coding, which was to build collection types by inheriting from existing collections or else implementing collection interfaces. I used to do that and now I don’t do that anymore. At some point, it simply stopped happening. But why?

I never read some blog post or book that convinced me this was stupid, nor do I have any recollection of getting burned by this practice. There was no great moment of falling out — just a long-ago cessation that wasn’t a conscious decision, sort of like the point in your 20’s when it stops being a given that you’re going to be out until all hours on a Saturday night. I pondered this here and there for a week or so, and then I figured it out.

TDD happened. Er, well, I started practicing TDD as a matter of course, and, in the time since I started doing that, I never inherited from a collection type. Interesting. So, what’s the relationship?

Well, simply put, the relationship is that inheriting from a collection type just never seems to be the simplest way to get some test passing, and it’s never a compelling refactoring (if I were defining some kind of API library for developers and a custom collection type were a reasonable end-product, I would write one, but not for my own development sake). Inheriting from a collection type is, in fact, a rather speculative piece of coding. You’re defining some type that you want to use and saying “you know, it’d probably be handy if I could also do X, Y, and Z to it later.” But I’ve written my opinion about this type of activity.

Doing this also greatly increases the surface area of your class. If I have a type that encapsulates some collection of things and I want to give clients the ability to access those things by a property such as their “Id,” then I define a GetById(int) method (or whatever). But if I then say “well, you know what, maybe they want to access these things by position or iterate over them or something, so I’ll just inherit from List of Whatever and give them all that goodness.” But yikes! Now I’m responsible for maintaining a concept of ordering, handling the case of out-of-bounds indexing, and all sorts of other unpleasantness. So, I can either punt and delegate to the methods on the type that I’m wrapping, or else I can get rid of the encapsulation altogether and just cobble additional functionality onto List.

But that’s an icky choice. I’m either coughing up all sorts of extra surface area and delegating it to something I’m wrapping (ugly design) or I’m kind of weirdly violating the SRP by extending some collection type to do domain sorts of things. And I’m doing this counter to what my TDD approach would bubble to the surface anyway. I’d never write speculative code at all. Until some client of mine needs something besides GetById(int), GetById(int) is all it’s getting.
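As a sketch of that minimal-surface-area approach (the Customer type and its Id are hypothetical), the aggregation stays an implementation detail and GetById(int) is the only promise made:

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
}

// Wraps a list rather than inheriting from one: no indexers, no ordering
// contract, no Count -- just the behavior clients have actually asked for.
public class CustomerRegistry
{
    private readonly List<Customer> _customers = new List<Customer>();

    public void Add(Customer customer) => _customers.Add(customer);

    public Customer GetById(int id) =>
        _customers.FirstOrDefault(c => c.Id == id);
}
```

If a client later needs position-based access or iteration, that can be added deliberately, one method at a time, rather than inherited wholesale.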

This has led me to wonder if, perhaps, there’s some relationship between the rise of TDD and decoupled design and the adage to favor composition over inheritance. Deep inheritance hierarchies across namespaces, assemblies, and even authors create the kind of indirection that makes intention muddy and testing (especially TDD) a chore. I want a class that encapsulates details and provides behavior because I’m hitting that public API with tests to define behavior. I don’t want some class that’s a List for the most part but with other stuff tacked on — I don’t want to unit test the List class from the library at all.

It’s interesting speculation, but at the end of the day, the reason I don’t inherit from collection types is a derivative of the fact that I think doing so is awkward and I’m leerier and leerier of inheritance as time goes on. I really don’t do it because I never find it helpful to do so, tautological as that may seem. Perhaps if you’re like me and you just stop thinking of it as a tool in your tool chest, you won’t miss it.


The Craftsmen Have it Right: Defining Software Quackery

I’ve fallen a bit behind on listening to some of my go-to programming podcasts of late (I will explain why shortly), but some of my friends were talking at dinner last week about a recent episode of .NET Rocks, featuring Alan Stevens. It presented the question, “Are you a craftsman?” and adopted a contrarian, or self-described “Devil’s Advocate” take on the movement. As far as episodes of that podcast go, this one was sort of guaranteed to be a lightning rod and the volume of comments seemed to bear that out, with Bob Martin himself weighing in on the subject.

I listened to this episode a week ago or so, and I’m not staring at the transcript, but here are some of the points I remember Alan making:

  • The craftsmanship movement in some places has turned into a monoculture — a way for the in-crowd to congratulate itself for getting it.
  • The wholesale comparison to guild culture and artisans is generally pretentious.
  • The movement should be inclusive and about building people up, rather than disparaging people.
  • There are lots of people out there not doing the “craftsmanship stuff” and still shipping working code, so who are we to judge?
  • Comparing software development to high-end, artistic craftsmanship, such as that of 30K guitars, devalues the latter**

(**I’ll just mention briefly that I consider this point to be absurd since this is shifting the goalposts completely from the “craft guild” comparison upon which the name is based — medieval craft guilds were responsible for making walls, shoes, and candles — not guitars for billionaire rock stars)

The comments for this episode contain a fair amount of disappointment and outright anger, but neither of those things was what I felt while listening, personally. Stevens is fairly engaging, and he had a number of pithy quotes from industry titans and authors at his disposal, citing, if memory serves, “The Pragmatic Programmer” and “The Mythical Man Month” as well as being fairly facile with the “Software Craftsmanship Manifesto.” This is clearly a well-read and knowledgeable industry professional as well as a practiced speaker, and he said a lot of things that I found reasonable, taken individually. But I couldn’t shake a vague sense of the surreal, like the dream sequences in the movie Inception, where Dali-like non-physics lurks at the periphery of your vision. It all seemed to make sense at first blush, but it was all wrong somehow.

I figured it out, and I’ll address that with a series of vignette sections here and hopefully tie them together somewhere this side of intolerable rambling.

On Quackery

One of the reasons I’ve been a bit negligent on my developer podcast listening is that I’ve become absolutely mesmerized by a podcast called “Quack Cast,” in which an infectious diseases doctor addresses alternative medicines. This man is ruthlessly empirical and relentlessly rational, and even though the subject matter has virtually no overlap with anything I do, I find it fascinating. He provides ongoing rebuttal to all sorts of news about and advocacy for what he calls “SCAM” (supplements, complements, & alternative medicine), hoping to shine a light on how magic-based many of these approaches are.

Now, it isn’t my aim here to get into some kind of quasi-political fracas about whether prayer helps with surgery or whether aromatherapy cures cancer or whatever — if you have your beliefs, be welcome to them. But I will briefly describe one thing that’s so absolutely bat%&$# nuts that I don’t think it’s controversial for me to call it that: homeopathy. Now, if you’re like I was before hearing the podcast and then doing my own research, you probably don’t know exactly what homeopathy is. You probably think that it’s “home remedies” or “holistic treatment” or something else vague. It’s not. It’s much, much more ridiculous. It’s about 150 stops further down the crazy-train, firmly in the city of Quacksville.

You can read more here, but it’s a ‘theory’ of illness and cure that originated prior to the discovery of germs and the development of germ theory. Since that time, it has persisted largely and remarkably unchanged. The basic axioms are that “like cures like” and “the more you dilute something, the stronger it becomes.” So, for example, if you’re an end-stage alcoholic suffering from cirrhosis of the liver, the best medicine is tequila, but only if you cut it with water so many times that there’s no tequila left in your tequila. So, two wrongs (and one utter violation of the laws of physics as we know them) make not a right, but a nothing (i.e. giving water to someone with cirrhosis). And the fact that homeopathy is predicated upon diluting any actual effects out of existence is probably the reason it’s persisted while other historical, magical nonsense like alchemy, bloodletting, lobotomies, and exorcisms have gone extinct — homeopathy is relatively harmless because it’s all placebo, all the time (unless a practitioner makes a mistake and creates a nostrum that actually has some effect, which would be a problem). At the time this was proposed, it was as reasonable as any other contemporary approach to medicine, but in this day and age, it’s utter quackery.

Let’s come back to this later.

I’m OK, You’re OK

In software development, we start out as babes in the woods with an “I’m not OK, you’re OK” mentality. Everyone else knows what they’re doing and we’re lost. This is not healthy — we should be nurtured and not judged until we feel that we’re OK. Likewise, what separates good mentors from Expert Beginners is the adoption of an “I’m OK, you’re OK” attitude versus an “I’m OK, you’re not OK.” The latter is also not good. I write code and I’m OK. You write code and you’re OK. Maybe you’re more experienced than me, or maybe I’m more experienced than you. Either way, that’s OK. I’m OK and you’re OK.

At the end of the day, we’re all writing code that works, or at least trying to. And that’s OK. It’s OK if it doesn’t always work. We’re all trying. Maybe I test my code before checking it in. I’m OK. Maybe I at least compile it. And that’s OK. Maybe you don’t. And that’s OK. You’re OK. I’m OK and you’re OK. As long as we’re all trying, or, if not always trying, at least showing up, we’re all OK. There’s no sense casting aspersions or being critical.

This is the mature, healthy way to regard one another in the industry. Isn’t it? As long as we show up to work for the most part and sometimes write stuff that does stuff, who is anyone to judge? We’re all OK, and that’s OK.

Can Anyone Recommend an Apothecary?

If I’ve learned anything from reading all of the Game of Thrones books, it’s that the Middle Ages were awesome. Dragons and Whitewalkers and all that stuff. But what gets less play in those history textbooks is the rise of the merchant class during that time period. This is due in large part to the emergence of merchant and craft guilds. Craft guilds, specifically, were a fascinating and novel construct that greased the skids for the increased specialization of labor that would later explode in the Industrial Revolution (I think that happens in book 6 of Game of Thrones, but we’ll see when it comes out).

Craft guilds became professional homes for artisans of the time: stonemasons, cobblers, candle makers, bakers, and apothecaries — who knows, perhaps even homeopaths, though they would probably have been drawn and quartered for their suggested treatment of social diseases. The guilds had an arrangement with the towns in which they were situated. The only people in town that could practice the craft were guild members. In exchange, a basic level of quality was guaranteed. You could think of this as a largely benevolent cartel, though I kind of prefer to think of it as a labor union, but with more dragons.

Members of the guild entered as apprentices, where they learned at the feet of masters (and their family generally paid for this education). After completing a long apprenticeship, the aspiring guild member was allowed to practice the craft for a wage as a journeyman, working with (for) other masters to ensure that knowledge cross-pollination occurred. With enough savings and proof of his “mastery” (which I think may be the origin of the word “masterpiece”), the journeyman could become a master and open his own business. The guild policed its own ranks for minimum quality, forcing its members to redo, pro bono, anything they produced that was not up to snuff. The guild also acted as a unit against anyone selling knock-offs out of a trench-coat on the street corner. (The DVD makers’ guild was known in particular for this.) The self-policing guild offered a minimum standard for quality to the townsfolk and, in return, it was the only gig in town.

In practice, this meant a stamping out of impostors, charlatans, cheaters, and cranks. It meant that standards existed and that the general populace could depend, to some degree, on a predictable exchange of value. It also meant that there were accepted tools of the trade, processes and hoops through which to jump, internal political games to be played, and a decent amount of monoculture. And it probably meant that the world progressed at the pace set by high achievers, if not geniuses. After all, the market economy had not yet been invented to reward wunderkinds, outsized influencers, the lucky, and radical innovators. Improvement happened, to be sure, but it happened at a pace approved by a council of masters that were ultimately men, and men with egos.

So, this is probably the perfect metaphor for software development. Right? It seems like it’d be good to strive for a minimum level of quality — a set of standards, if you will. It seems like it’d be good if there was some kind of accreditation process, perhaps more accurate than a CS degree and less myopic than a certification, to demonstrate that someone was competent at software development. It seems like it’d be good for aspiring/new software developers to have a lot of one-on-one learning time with experienced developers that were really good at what they were doing. It seems like it’d be good for those initiates then to travel around some, broadening their experience and spreading their ideas. But, wait a second… we live in a very non-insular world, work in a global market economy, and, libertarians that we are, we’d sooner die than unionize. And writing software isn’t “craftsmanship,” the way that making candles, pies or shoes is.

Crap! We were doing so well there for a while, and there’s nothing more irritating than having to throw out your babies every time they dirty up their bath water. This seems to happen with all of my metaphors if I dissect them enough.

Software Quackery

Like medicine, artisanship, and even later 20th century pop psychology, software development has sort of lurched and hiccuped along in its (relatively short) history. Things that were widely done in the past have fallen out of favor. And, while a lot of our industry practices tend to be somewhat cyclical (preference for thin versus thick clients, for instance), some ideas do stick around. We make progress. We can stand, to some degree, on the shoulders of those who came before us and say that code at the GOTO level of abstraction does not scale and that shortening feedback loops and catching mistakes early are examples of practices preferable to their alternatives. We learn from our mistakes as individuals and as a collective. The bar inches higher. Or sometimes it stays stubbornly stagnant for a while and then makes a jump, the way that the field of medicine did with practices like hand-washing prior to surgery (except for homeopathic surgery to remove gangrenous limbs, in which case hand-washing is a strict no-no).

Movements emerge in fields, and they don’t emerge in a vacuum. They emerge in contrast to a prevailing trend or following a new, avant-garde idea. They are born out of words like “never again” as veterans of some practice look over the scars it has inflicted upon them. They become zeitgeists of the period before later being described as historic, or perhaps even quaint. In our field, I can think of some obvious examples. The Agile Manifesto and its subsequent movement springs to mind. As do the Software Craftsmanship, well, manifesto, and movement. Those are biggies, but there are plenty of others (everything on the desktop, no, wait, let’s move it to the browser, no, wait, let’s put it on people’s phones!)

So wherefore art thou, Software Craftsmanship? Well, I might suggest that the Software Craftsmanship movement exists not in a vacuum and not to make those participating feel good about themselves after endless conferences, talks, and meetups filled with navel gazing. It’s not about condescension or creating “one true way.” It’s not about claiming that software is some kind of rarefied art form, nor is it about aesthetics or the “perfect code.” It’s not about saying that there’s one true MV* pattern to rule them all or that your unit tests need exactly one assert statement. Rather, at its core, I believe Software Craftsmanship is a simple refusal to tolerate Software Quackery any longer.

Once upon a time, back when PHP was new, we, as an industry, edited code on production servers, just as the founder of homeopathy thought that eating bark would cure malaria back in the 18th century. However, a lot of time, study, experimentation and experience later, we came to the conclusion, as an industry, that hand editing code in production is a bad idea, just as medical science later came to understand why malaria occurred and how to prevent it. So now, it’s not pretentious nor is it beyond the pale for us, as an industry, to look upon software consultants editing code in production as quacks, any more than it is to look upon medical practitioners handing out bark-water cocktails to treat malaria as quacks. We’re OK but you’re not OK if you’re doing that, and it’s OK for us to point out that you’re not OK.

It’s tempting to adopt a live and let live mentality and to be contrarian and say, “hey, we’re all trying to fight the disease, amirite, so let’s not be high and mighty,” but the fact of the matter is that treating malaria with bark and water is grossly negligent in this day and age, and it’s quackery of the highest order. So the Software Craftsmanship movement takes a page from the craft guilds of yore, and says, “you know, we should establish some minimum collective standards of competence, have some notion of internal quality policing, encourage aspiring members to learn at the hands of proven veterans, and develop the ability to say, ‘I’m sorry, but that’s just not good enough anymore.'” Yes, fine, if you ride the metaphor hard enough, it breaks down, but that’s really not the point. The point is that the movement is attempting to raise the bar from a world of barbaric medicine via unguents, spells, dances, and hitting people with rocks, to a world of medicine via the scientific method.

Back to the Podcast

As I mentioned earlier, I felt neither angry nor disappointed listening to Alan Stevens. I liked the conversation and at least parts of what he said. I picked up a few interesting quotes from authors and thinkers that I hadn’t heard previously, and his tone was relatively humble and disarmingly self-deprecating. And there was certainly a kind of populist, “don’t look down on the dark matter developers” vibe. So what was responsible for my surreal feeling? What was ‘wrong’ that I figured out? I could sum it up in a simple phrase, when it hit me: it felt like Stevens was pandering to what he thought was a low-brow audience.

What I mean is, here was a man, clearly knowledgeable about the industry, armed with impressive quotes and enough cachet to be asked to appear on .NET Rocks, telling the listening population a very odd thing. Sure, he has a highfalutin Harvard doctorate like all those other doctors, but unlike them, he’s here to tell you that you don’t need any kind of fancy degree or medicine or machine to treat your own “carcinoma” or, as real people say, butt cancer — all you need is your own know-how, some kleenex, and a bucket of water. Or, in software terms, it’s okay if you ship crap, so long as you have a good attitude. I mean, we can’t all be expected to do a good job, can we? He even came out and said (paraphrased, though I remember the message very clearly) something along the lines of “sometimes mediocrity is good.” So, forget all of these fancy best practices and be proud of yourself if that 80,000 line classic ASP file you hand-edit in production manages not to kill anyone.

The underlying message of the show wasn’t any substantive indictment of the Software Craftsmanship movement, but an endorsement of Software Quackery. I mean, sure, there are good practices and bad practices, but let’s not get all high and mighty just because some doctors don’t wash their hands before operating on you — I mean, you’d rather have all of the colors of the competence rainbow than some surgeon hand-washing monoculture, right? The reference to Dreyfus really brought it full-weird-circle for me, though, because championing Software Quackery is generally the province of Expert Beginners — not Experts. So, I’m sorry, but I just don’t buy it. We can and should have some kind of minimum standard that isn’t a goody bag for just showing up. Please, join me in a polite refusal to tolerate Software Quackery — it just doesn’t cut it anymore.

By

TDD Chess Game Part 2

Alright, welcome to the inaugural video post from my blog. Due to several requests over twitter, comments, etc, I decided to use my Pluralsight recording setup to record myself actually coding for this series instead of just posting a lot of snippets of code. It was actually a good bit of fun and sort of surreal to do. I just coded for the series as I normally would, except that I turned the screen capture on while I was coding. A few days later, I watched the video and recorded narration for it as I watched it, which is why you’ll hear me sometimes struggling to recall exactly what I did.

The Pluralsight recordings are obviously a lot more polished — I tend to script those out to varying degrees and re-record until I have pretty good takes. This was a different animal; I just watched myself code and kind of narrated what I was doing, pauses, stupid mistakes, and all. My goal is that it will feel like we’re pair programming with me driving and explaining as I go: informal, conversational, etc.

Here’s what I accomplish in this particular video, not necessarily in order:

  • Eliminated magic numbers in Board class.
  • Got rid of x coordinate/y coordinate arguments in favor of an immutable type called BoardCoordinate.
  • Cleaned up a unit test with two asserts.
  • Got rid of the ‘cheating’ approach of returning a tuple of int, int.
  • Made the GetPossibleMoves method return an enumeration of moves instead of a single move.
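To give a flavor of the second and last items on that list, here’s a sketch of the general shape the refactoring takes. This is illustrative only — it’s not the actual code from the video, and the Pawn logic here is deliberately simplified:

```csharp
using System;
using System.Collections.Generic;

// Sketch only -- not the code from the video. An immutable type that
// replaces loose (xCoordinate, yCoordinate) argument pairs.
public class BoardCoordinate
{
    private readonly int _x;
    private readonly int _y;

    public int X { get { return _x; } }
    public int Y { get { return _y; } }

    public BoardCoordinate(int x, int y)
    {
        _x = x;
        _y = y;
    }
}

public class Pawn
{
    // Returns an enumeration of candidate moves rather than a single move.
    // (Simplified: just the one- and two-square advances from the start.)
    public IEnumerable<BoardCoordinate> GetPossibleMoves(BoardCoordinate start)
    {
        yield return new BoardCoordinate(start.X, start.Y + 1);
        yield return new BoardCoordinate(start.X, start.Y + 2);
    }
}
```

The readonly backing fields mean a BoardCoordinate can’t change after construction, which makes it safe to pass around and return freely.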

And, here are some lessons to take away from this, both instructional from me and by watching me make mistakes:

  • Passing the same primitive/value types around your code everywhere (e.g. xCoordinate, yCoordinate) is a code smell called “primitive obsession” and it often indicates that you have something that should be a type in your domain. Don’t let this infect your code.
  • You can’t initialize properties in a non-default constructor (I looked up the whys and wherefores here after not remembering exactly why while recording audio and video).
  • Having lots of value type parameter and return values instead of domain concepts leads to constant small confusions that add up to lots of wasted time. Eliminate these as early in your design as possible to minimize this.
  • Sometimes you’ll create a failing test and then try something to make it pass that doesn’t work. This indicates that you’re not clear on what’s going on with the code, and it’s good that you’re following TDD so you catch your confusion as early as possible.
  • If you write some code to get a red test to pass, and it doesn’t work, and then you discover the problem was with your test rather than the production code, don’t leave the changes you made to the production code in there, even if the test is green. That code wasn’t necessary, and you should never have so much as a single line of code in your code base that you didn’t put in for reasons you can clearly explain. “Meh, it’s green, so whatever” is unacceptable. At every moment you should know exactly why your tests are red if they’re red, green if they’re green, or not compiling if the code doesn’t compile. If you’ve written code that you don’t understand, research it or delete it.
  • No matter how long you’ve been doing this, you’re still going to do dumb things. Accept it, and optimize your process to minimize the amount of wasted time your mistakes cause (TDD is an excellent way to do this).
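To make the primitive obsession point concrete, here’s a hypothetical before-and-after (the names are invented for illustration, not taken from the video):

```csharp
using System;

// Before: coordinate primitives travel in pairs, and nothing stops a
// caller from transposing the arguments or passing a y where an x belongs.
public class BoardWithPrimitives
{
    public bool IsOnBoard(int xCoordinate, int yCoordinate)
    {
        return xCoordinate >= 1 && xCoordinate <= 8
            && yCoordinate >= 1 && yCoordinate <= 8;
    }
}

// After: the pair becomes a first-class domain concept, so the compiler
// (and the reader) can tell a coordinate from any other two ints.
public struct Coordinate
{
    public readonly int X;
    public readonly int Y;

    public Coordinate(int x, int y)
    {
        X = x;
        Y = y;
    }
}

public class BoardWithDomainType
{
    public bool IsOnBoard(Coordinate coordinate)
    {
        return coordinate.X >= 1 && coordinate.X <= 8
            && coordinate.Y >= 1 && coordinate.Y <= 8;
    }
}
```

Once the type exists, every method that used to take the int pair shrinks by a parameter, and the small confusions about argument order simply disappear.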

So, here’s the video. Enjoy!

A couple of housekeeping notes. First, you should watch the video in full screen, 1080p, ideally (click the little “gear” icon between “CC” and “YouTube” at the bottom of the video screen). 720p will work but may be a touch blurry. Lower resolutions and you won’t see what’s going on. Second, if there’s interest, I can keep the source for this on github as I work on it. The videos will lag behind the source though (for instance, I’ve already done the coding for part 3 in the series — just not the audio or the post, yet). Drop me a comment or tweet at me or something if you’d like to see the code as it emerges also — adding it to github isn’t exactly difficult, but I won’t bother unless there’s interest.