DaedTech

Stories about Software


The Secret to Avoiding Paralysis by Analysis

A while ago Scott Hanselman wrote a post about “paralysis by analysis” which I found to be an interesting read.  His blog is a treasure trove of not only technical information but also posts that humanize the developer experience, such as one of my all-time favorites.  In this particular post, he quoted a Stack Overflow user who said:

Lately, I’ve been noticing that the more experience I gain, the longer it takes me to complete projects, or certain tasks in a project. I’m not going senile yet. It’s just that I’ve seen so many different ways in which things can go wrong. And the potential pitfalls and gotchas that I know about and remember are just getting more and more.

Trivial example: it used to be just “okay, write a file here”. Now I’m worrying about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I’m doing, proper documentation, etc etc etc.

Scott’s take on this is the following:

This really hit me because THIS IS ME. I was wondering recently if it was age-related, but I’m just not that old to be senile. It’s too much experience combined with overthinking. I have more experience than many, but clearly not enough to keep me from suffering from Analysis Paralysis.

(emphasis his)

Paralysis by Lofty Expectations

The thing that stood out to me most about this post was reading Scott say, “THIS IS ME.” When I read the post about being a phony and so many other posts of his, I thought to myself, “THIS IS ME.” In reading this one, however, I thought to myself, “wow, fortunately, that’s really not me, although it easily could be.” I’ll come back to that.

Scott goes on to say that he combats this tendency largely through pairing and essentially relying on others to keep him more grounded in the task at hand. He says that, ironically, he’s able to help others do the same. With multiple minds at work, they’re able to reassure one another that they might be gold plating and worrying about too much at once. It’s a sanity check of sorts. At the end of the post, he invites readers to comment about how they avoid Paralysis by Analysis.

For me to answer this, I’d like to take a dime store psychology stab at why people might feel this pressure as they move along in their careers in the first place — pressure to “[worry] about permissions, locking, concurrency, atomic operations, indirection/frameworks, different file systems, number of files in a directory, predictable temp file names, the quality of randomness in my PRNG, power shortages in the middle of any operation, an understandable API for what I’m doing, proper documentation, etc etc etc.” Why was it so simple when you started out, but now it’s so complicated?


I’d say it’s a matter not so much of diligence as of aversion to sharpshooting. What I mean is, I don’t think that people during their careers magically acquire some sort of burning need to make everything perfect if that didn’t exist from the beginning; I don’t think you grow into perfectionism. I think what actually happens is that you grow worried about the expectations of those around you. When you’re a programming neophyte, you proudly announce that you successfully figured out how to write a file to disk, and you imagine the reaction of your peers to be, “wow, good work figuring that out on your own!” When you’re 10 years in, you announce that you wrote a file to disk and fear that someone will say, “what kind of amateur with 10 years of experience doesn’t guarantee atomicity in a file-write?”

The paralysis by analysis, I think, results from the opinion that every design decision you make should be utterly unimpeachable or else you’ll be exposed as a fraud. You fret that a maintenance programmer will come along and say, “wow, that guy sure sucks,” or that a bug will emerge in some kind of odd edge case and people will think, “how could he let that happen?!” This is what I mean about aversion to sharpshooting. It may even be personal sharpshooting and internal expectations, but I don’t think that the paralysis by analysis occurs out of a proactive desire to do a good job; it occurs out of a reactive fear of doing a bad job.

(Please note: I have no idea whether this is true of Scott, the original Stack Overflow poster, or anyone else individually; I’m just speculating about a general phenomenon that I have observed.)

Regaining Your Movement

So, why doesn’t this happen to me? And how might you avoid it? Well, my hope is that the answer to the first question can serve as an answer to the second for you. This doesn’t happen to me for two reasons:

  1. I pride myself not on what value I’ve already added, but what value I can quickly add from here forward.
  2. I make it a point of pride that I only solve problems when they become actual problems (sort of like YAGNI, but not exactly).

Let’s consider the first point as it pertains to the SO poster’s example. Someone tells me that they need an application that, among other things, dumps a file to disk. So, I spend a few minutes calling File.Create() and, hey, look at that — a file is written! Now, suppose someone comes to me and says, “Erik, this is awful because whenever there are two running processes, one of them crashes.” My thought at this point isn’t, “what kind of programmer am I that I wrote this code that has this problem when someone might have been able to foresee this?!?” It’s, “oh, I guess that makes sense — I can definitely fix it pretty quickly.” Expanding to a broader and perhaps less obtuse scope, I don’t worry about the fact that I really don’t think of half of that stuff when dumping something to a file. I feel that I add value as a technologist because, even if I don’t know what a random number generator has to do with writing files, I’ll figure it out pretty quickly if I have to. My ability to know what to do next is what sells.
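To make that concrete, here is a minimal C# sketch of the two passes (the FileDumper class and its method names are mine, and the exclusive-lock fix shown is just one of several reasonable responses to the collision):

```csharp
using System.IO;

public static class FileDumper
{
    // Pass one: the simplest thing that satisfies "dump a file to disk."
    public static void Dump(string path, string contents)
    {
        File.WriteAllText(path, contents);
    }

    // Pass two -- written only after the two-process crash actually happened:
    // take an exclusive lock so a concurrent writer fails fast instead of
    // corrupting the file.
    public static void DumpExclusively(string path, string contents)
    {
        using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(contents);
        }
    }
}
```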

For the second point, let’s consider the same situation slightly differently. I write a file to disk and I don’t think about concurrent access or what on Earth random number generation has to do with what I’m doing. Now if someone offers me the same, “Erik, this is awful because whenever there are two running processes…” I also might respond by saying, “sure, because that’s never been a problem until this moment, but hey, let’s solve it.” This is something I often try to impress upon less experienced developers, particularly about performance. And I’m not alone. I counsel them that performance isn’t an issue until it is — write code that’s clean, clear, and concise and that gets the job done. If at some point users want/need it to be faster, solve that problem then.

This isn’t YAGNI, per se, which is a general philosophy that counsels against writing abstractions and other forms of gold plating because you think that you’ll be glad you did later when they’re needed. What I’m talking about here is more on par with the philosophy that drives TDD. You can only solve one problem at a time when you get granular enough. So pick a problem and solve it while not causing regressions. Once it’s solved, move on to the next. Keep doing this until the software satisfies all current requirements. If a new one comes up later, address it the same as all previous ones — one at a time, as needed. At any given time, all problems related to the code base are either problems that you’ve already solved or problems on a todo list for prioritization and execution. There’s nothing wrong with you or the code if the software doesn’t address X; it simply has yet to be enough of a priority for you to do it. You’ll get to it later and do it well.

There’s a motivational expression that comes to mind about a journey of a thousand miles beginning with a single step (though I’m really more of a despair.com guy, myself). There’s no real advantage in standing still and thinking about how many millions of steps you’re going to need to take. Pick a comfortable pair of shoes, grab some provisions, and go. As long as you pride yourself on the ability to make sure your next step is a good one, you’ll get to where you need to be sooner or later.


Visualization Mnemonics for Software Principles

Whether it’s because you want to be able to participate in software engineering discussions without having to surreptitiously look things up on your phone, or whether it’s because you have an interview coming up with a firm that wants you to be some kind of expert in OOP or something, you probably have at least some desire to be knowledgeable about development terms. This is probably doubly true of you since, ipso facto, you read blogs about software.

Toward that end, I’m writing this post. My goal is to provide you with a series of somewhat vivid ways to remember software concepts so that you’ll have a fighting chance at remembering what they’re about sometime later. I’m going to do this by telling a series of stories. So, I’ll get right to it.

Law of Demeter

Last week I was on a driving trip and I stopped by a gas station to get myself a Mountain Dew for the sake of road alertness. I grabbed the soda from the cooler and plopped it down on the counter, prompting the clerk to say, “that’ll be $1.95.” At this point, naturally, I removed my pants and the guy started screaming at me about police and indecent exposure. Confused, I said, “look, I’m just trying to pay you — I’ll hand you my pants and you go rummaging around in my pockets until you find my wallet, which you’ll take out and go looking through for cash. If I’m due change put it back into the wallet, unless it’s a coin, and then just put it in my pocket, and give me back the pants.” He pulled a shotgun out from behind the counter and told me that in his store, people obey the Law of Demeter or else.


So what does the Law of Demeter say? Well, anecdotally, it says “give collaborators exactly what they’re asking for and don’t give them something they’ll have to go picking through to get what they want.” There’s a reason we don’t hand the clerk our pants (or even our wallet) at the store and just hand them money instead; it’s inappropriate to send them hunting for the money. The Law of Demeter encourages you to think this way about your code. Don’t return Pants and force clients of your method to get what they want by invoking Pants.Pockets[1].Wallet.Money — just give them a Money. And, if you’re the clerk, don’t accept someone handing you a Pants and you going to look for the money — demand the money or show them your shotgun.
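In code form, the anecdote might look something like this (a sketch; all of these types are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

public class Money
{
    public decimal Amount { get; }
    public Money(decimal amount) { Amount = amount; }
}

public class Wallet { public Money Money { get; set; } }
public class Pocket { public Wallet Wallet { get; set; } }
public class Pants  { public List<Pocket> Pockets { get; } = new List<Pocket>(); }

public class Customer
{
    public Pants Pants { get; set; }

    // Law of Demeter-friendly: the customer hands over Money directly.
    public Money GetPayment(decimal amount) => new Money(amount);
}

public static class Register
{
    // Violation: the clerk rummages through the customer's pants.
    public static void ChargeBadly(Customer customer)
    {
        Money money = customer.Pants.Pockets[1].Wallet.Money; // train wreck
        Console.WriteLine($"Took {money.Amount:C} the hard way.");
    }

    // Compliant: demand the money, not the pants.
    public static void Charge(Customer customer)
    {
        Money money = customer.GetPayment(1.95m);
        Console.WriteLine($"Took {money.Amount:C}.");
    }
}
```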

Single Responsibility Principle

My girlfriend and I recently bought an investment property a couple of hours away. It’s a little house on a lake that was built in the 1950s and, while cozy and pleasant, it doesn’t have all of the modern amenities that I might want, resulting in a series of home improvement projects to re-tile floors, build some things out and knock some things down. That kind of stuff.

One such project was installing a garbage disposal, which has two components: plumbing and electrical. The plumbing part is pretty straightforward in that you just need to remove the existing drain pipe and insert the disposal between the drain and the drainage pipe. The electrical is a little more interesting in that you need to run wiring from a switch to the disposal so that you can turn it on and off. Now, naturally, I didn’t want to go to all the hubbub of creating a whole different switch, so I decided just to use one that was already there. The front patio light switch had the responsibility for turning the front patio light on and off, but I added a little to its burden, asking it also to control the garbage disposal.

That’s worked pretty well. So far the only mishap occurred when I was rinsing off some dishes and dropped a spoon in the drain while, at the same time, my girlfriend turned the front light on for visitors we were expecting. Luckily, I had only a minor scrape and a mangled spoon, and that’s a small price to pay to avoid creating a whole new light switch. And really, what’s the worst that could happen?

Well, I think you know the worst thing that could happen is that someone loses a hand due to this absurd design. That’s what you risk when you run afoul of the Single Responsibility Principle, which could loosely be described as saying “do one thing only and do that thing well” or “have only one reason to change.” In my house, we have two reasons to change the state of the switch: turning on the disposal and turning on the light, and this creates an obvious problem. The parallel situation in code is true as well. If you have a class that needs to be changed whenever schema updates occur and whenever GUI changes occur, then you have a class that serves two masters and the possibility for changes to one thing to affect the other. Disk space is cheap and classes/namespaces/modules are renewable resources. When in doubt, create another one.
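Translated into a hypothetical C# sketch (the names are mine), the two-masters switch looks like this:

```csharp
using System;

// Two reasons to change state: the patio light and the disposal.
// One toggle serves two masters -- the SRP violation from the story.
public class OverloadedSwitch
{
    private bool _on;

    public void Toggle()
    {
        _on = !_on;
        Console.WriteLine($"Patio light: {(_on ? "on" : "off")}");
        Console.WriteLine($"Disposal: {(_on ? "running" : "stopped")}"); // surprise!
    }
}

// SRP-friendly: switches are a renewable resource, so make another one.
public class PatioLightSwitch
{
    private bool _on;
    public void Toggle()
    {
        _on = !_on;
        Console.WriteLine($"Patio light: {(_on ? "on" : "off")}");
    }
}

public class DisposalSwitch
{
    private bool _on;
    public void Toggle()
    {
        _on = !_on;
        Console.WriteLine($"Disposal: {(_on ? "running" : "stopped")}");
    }
}
```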

Open/Closed Principle

I don’t have a ton of time for TV these days, and that’s mainly because TV is so time consuming. It was a lot simpler when I just had a TV that got an analog signal over the air. But then, things went digital, so I had to take apart my TV and rewire it to handle digital signals. Next, we got cable and, of course, there I am, disassembling the TV again so that we can wire it up to get a cable signal. The worst part of that was that when I became furious with the cable provider and we switched to Dish, I was right back to work on the TV. Now, we have a Nintendo Wii, a DVD player, and a Roku, but who has the time to take the television apart and rewire it to handle each of these additional items? And if that weren’t bad enough, I tried hooking up an old school Sega Genesis last year, and my Dish stopped working.

… said no one, ever. And the reason no one has ever said this is that televisions that you purchase follow the Open/Closed Principle, which basically says that you should create components that are closed to modification, but open for extension. Televisions you purchased aren’t made to be disassembled by you, and certainly no one expects you to hack into the guts of the TV just to plug some device into it. That’s what the Coax/RCA/Component/HDMI/etc feeds are for. With the inputs and the sealed-under-warranty case, your television is open for extension, but closed for modification. You can extend its functionality by plugging anything you like into it, including things not even out yet, like an X-Box 12 or something. Follow this same concept for flexible code. When you write code, you strive to maximize flexibility by facilitating maintenance via extension and new code. If you program to interfaces or allow overriding of behavior via inheritance, life is a lot easier when it comes time to change functionality. So favor that over writing some juggernaut class that you have to go in and modify literally every sprint. That’s icky, and you’ll learn to hate that class and the design around it the same way you’d hate the television I just described.
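Here is roughly what the television's design looks like in C# (a sketch; the interface and device names are invented):

```csharp
using System;

// The "input jack": a stable abstraction the TV exposes.
public interface IInputDevice
{
    string GetSignal();
}

// Closed for modification: this class never gets "rewired"...
public class Television
{
    public void Play(IInputDevice device)
    {
        Console.WriteLine($"Now showing: {device.GetSignal()}");
    }
}

// ...but open for extension: new devices plug in without touching the TV,
// including devices that didn't exist when the TV shipped.
public class Roku : IInputDevice
{
    public string GetSignal() => "streaming video";
}

public class SegaGenesis : IInputDevice
{
    public string GetSignal() => "16-bit nostalgia";
}
```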

Liskov Substitution Principle

I’m someone that usually eats a pretty unremarkable salad with dinner. You know, standard stuff: lettuce, tomatoes, croutons, scallions, carrots, and hemlock. One thing that I seem to do differently than most, however, is that I examine each individual item in the salad to see whether or not it will kill me before I put it into my mouth (a lot of other salad consumers seem to play pretty fast and loose with their lives, sheesh). I have a pretty simple algorithm for this. If the item is not hemlock, I eat it. If it is hemlock, I put it onto my plate to throw out later. I highly recommend eating your hemlock salad this way.

Or, you could bear in mind the Liskov Substitution Principle, which basically says that if you’re going to have an inheritance relationship, then derived types should be seamlessly swappable for their base type. So, if I have a salad full of Edibles, I shouldn’t have some derived type, Hemlock, that doesn’t behave the way other Edibles do. Another way to think of this is that if you have a heterogeneous collection of things in an inheritance hierarchy, you shouldn’t go through them one by one and say, “let’s see which type this is and treat it specially.” So, obey the LSP and don’t make hemlock salads for people. You’ll have cleaner code and avoid jail.
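In C# terms, the hemlock salad might look like this (a sketch with invented types):

```csharp
using System;
using System.Collections.Generic;

public abstract class Edible
{
    // Implicit contract: eating an Edible is survivable.
    public abstract void Eat();
}

public class Crouton : Edible
{
    public override void Eat() => Console.WriteLine("Crunchy.");
}

// LSP violation: Hemlock is an Edible in name only -- it can't be
// seamlessly substituted where an Edible is expected.
public class Hemlock : Edible
{
    public override void Eat() => throw new InvalidOperationException("You died.");
}

public static class SaladEater
{
    // The type-sniffing workaround that the LSP exists to prevent:
    public static void EatSalad(IEnumerable<Edible> salad)
    {
        foreach (var item in salad)
        {
            if (item is Hemlock) // special-casing a derived type: a design smell
                continue;
            item.Eat();
        }
    }
}
```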

Interface Segregation Principle

Thank goodness for web page caching — it’s a life saver. Whenever I go to my favorite dictionary site, expertbeginnerdictionary.com (not a real site if you were thinking of trying it), it prompts me for a word to lookup and, when I type in the word and hit enter, it sends me the dictionary over HTTP, at which time I can search the page text with Ctrl-F to find my word. It takes such a long time for my browser to load the entire English dictionary that I’d be really up a creek without page caching. The only trouble is, whenever a word changes and the cache is invalidated, my next lookup takes forever while the browser re-downloads the dictionary. If only there were a better way…

… and there is. Don’t give me the entire dictionary when I want to look up a word. Just give me that word. If I want to know what “zebra” means, I don’t care what “aardvark” means, and my zebra lookup experience shouldn’t be affected and put at risk by changes to “aardvark.” I should only be depending on the words and definitions that I actually use, rather than the entire dictionary. Likewise, if you’re defining public interfaces in your code for clients, break them into minimum composable segments and let your clients assemble them as needed, rather than forcing the kitchen sink (or dictionary) on them.  The Interface Segregation Principle says that clients of an interface shouldn’t be forced to depend on methods that they don’t use because of the excess, pointless baggage that comes along.  Give clients the minimum that they need.
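A sketch of the difference in C# (the interface names are invented):

```csharp
using System.Collections.Generic;

// Kitchen-sink interface: every client depends on the whole dictionary,
// even the ones that only ever look up a word.
public interface IDictionarySite
{
    string Define(string word);
    void AddWord(string word, string definition);
    void RemoveWord(string word);
    IEnumerable<string> AllWords();
}

// Segregated: minimum composable pieces that clients assemble as needed.
public interface IWordLookup
{
    string Define(string word);
}

public interface IWordEditor
{
    void AddWord(string word, string definition);
    void RemoveWord(string word);
}

// A lookup-only client no longer depends on (or "re-downloads," so to
// speak) the editing concerns when they change.
public class LookupClient
{
    private readonly IWordLookup _lookup;
    public LookupClient(IWordLookup lookup) { _lookup = lookup; }
    public string WhatIsAZebra() => _lookup.Define("zebra");
}
```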

Dependency Inversion Principle

Have you ever been to an automobile factory?  It’s amazing to watch how these things are made.  They start with a car, and the car assembles its own engine, seats, steering wheel, etc.  It’s pretty amazing to watch.  And, for a real treat, you can watch these sub-parts assemble their own internals.  The engine builds its own alternator, battery, transmission, etc — a breathtaking feat of engineering.  Of course, there’s a downside to everything, and, as cool as this is, it can be frustrating that the people in the factory have no control over what kind of engine the car builds for itself.  All they can do is say, “I want a car” and the car does the rest.

I bet you can picture the code base I’m describing. A long time ago, I went into detail about this piece of imagery, but I’ll summarize by saying that this is “command and control” programming where constructors of objects instantiate all of the object’s dependencies — FooService instantiates its own logger. This runs afoul of the Dependency Inversion Principle, which holds that high level modules, like Car, should not depend directly on lower level modules, like Engine, but rather that both should depend on an abstraction of the Car-Engine interaction. This allows the car and the engine to vary independently, meaning that our automobile factory workers actually could have control over which engines go in which cars. And, as described in the linked post, a code base making heavy use of the Dependency Inversion Principle tends to be composable whereas a command and control style code base is not, favoring instead the “car, build thyself” approach. So, to remember and understand the Dependency Inversion Principle, ask yourself who should control what parts go in your car — the people building the car, or the car itself? Only one of those ideas is preposterous.
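A minimal C# sketch of the contrast (the engine types are invented):

```csharp
using System;

public interface IEngine
{
    void Start();
}

public class V6Engine : IEngine
{
    public void Start() => Console.WriteLine("V6 rumbles to life.");
}

public class ElectricEngine : IEngine
{
    public void Start() => Console.WriteLine("Silent instant torque.");
}

// "Car, build thyself": the high-level module news up its own low-level
// dependency, so the factory has no say in what engine it gets.
public class CommandAndControlCar
{
    private readonly V6Engine _engine = new V6Engine();
    public void Drive() => _engine.Start();
}

// Dependency inversion: Car depends on the IEngine abstraction, and the
// factory worker (the caller) decides what goes into it.
public class Car
{
    private readonly IEngine _engine;
    public Car(IEngine engine) { _engine = engine; }
    public void Drive() => _engine.Start();
}

// Usage: new Car(new ElectricEngine()).Drive();
```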



YAGNI: YAGNI

A while back, I wrote a post in which I talked about offering too much code and too many options to other potential users of your code. A discussion emerged in the comments roughly surrounding the merits of the design heuristic affectionately known as “YAGNI,” or “you ain’t gonna need it.”

In a vacuum, this aphorism seems to be the kind of advice you give a chronic hoarder: “come on, just throw out 12 of those combs — it’s not like you’re going to need them.” So what does it mean, exactly, in the context of programming, and is it good advice in general? After all, if you applied the advice that you’d give a hoarder to someone who wasn’t, you might be telling them to throw out their only comb, which wouldn’t make a lot of sense.

The Motivation for YAGNI

As best I understand, the YAGNI principle is one of the tenets of Extreme Programming (XP), an agile development approach that emerged in the 1990s and stood in stark contrast to the so-called “waterfall” or big-design-up-front approach to software projects. One of the core principles of the (generally unsuccessful) waterfall approach is a quixotic attempt to figure out just about every detail of the code to be written before writing it, and then, afterward, to actually write the code.

I personally believe this silliness is the result of misguided attempts to mimic the behavior of an Industrial Revolution-style assembly line in which the engineers (generally software architects) do all the thinking and planning so that the mindless drones putting the pieces together (developers) don’t have to hurt their brains thinking. Obviously, in an industry of knowledge workers, this is silly … but I digress.

YAGNI as a concept seems well suited to address the problematic tendency of waterfall development to generate massive amounts of useless code and other artifacts. Instead, with YAGNI (and other agile principles) you deliver features early, using the simplest possible implementation, and then you add more and refactor as you go.

YAGNI on a Smaller Scale

But YAGNI also has a smaller scale design component, which is evident in my earlier post. Some developers have a tendency to code up classes and add extra methods that they think might be useful to themselves or others at some later point in time. This is often referred to as “gold plating,” and I think the behavior is based largely on the fact that this is often a good idea in day-to-day life.

“As long as I’m changing the lightbulb in this ceiling lamp, I may as well dust and do a little cleaning while I’m up here.”

Or perhaps:

“As long as I’m cooking tonight, I might as well make extra so that I have leftovers and can save time tomorrow.”

But the devil is in the details with speculative value propositions. In the first example, clear value is being provided with the extra task (cleaning). In the second, the value is speculative, but the odds of realization are high and the marginal cost is low. If you’re going to the effort of making tuna casserole, scaling up the ingredients is negligible in terms of effort and provides a very likely time savings tomorrow.

But doesn’t that apply to code? I mean, adding that GetFoo() method will take only a second and it might be useful later.

Well, consider that the planning lifespan for code is different than casserole. With the casserole, you have a definite timeframe in mind — some time in the next few nights — whereas with code, you do not. The timeframe is “down the line.” In the meantime, that code sits in the fridge of your application, aging poorly as people forget its intended purpose.

You wind up with a code-fridge full of containers with goop in them that doesn’t exactly smell bad, but isn’t labeled with a date and you aren’t sure of its origin. And I don’t know about you, but if I were to encounter a situation like that, the reaction would be “I don’t feel like risking it, I’m ordering Chinese.” And, “nope, I’m out of here,” isn’t a good feeling to have when you open your application’s code.

Does YAGNI Always Apply?

So there is an overriding design principle, YAGNI, and there is a more localized version — reactions, respectively, to a tendency to comically over-plan and a tendency to gold plate locally. But does this advice always hold up, reactions notwithstanding?

I’m not so sure. I mean, let’s say that you’re starting on a new .NET web application and you need to test that it displays “hello world.” The simplest thing that could work is F5 and inspection. But now let’s say that you have to give it to a tester. The simplest thing is to take the compiled output and manually copy it to a server. That’s certainly simpler than setting up some kind of automated publish or continuous deployment scenario. And now you’re in sort of a loop, because at what point is this XCopy deploy ever not going to be the simplest thing that could work for deploying? What’s the tipping point?

Getting Away from Sloganeering

Now I’m sure that someone is primed to comment that it’s just a matter of how you define requirements and how stringent you are about quality. “Well, it’s the simplest thing that could possibly work but keeping the overall goals in mind of the project and assuming a baseline of quality and” — gotta cut you off there, because that’s not YAGNI. That’s WITSTTCPWBKTOGIMOTPAAABOQA, and that’s a mouthful.

It’s a mouthful because there’s suddenly nuance.

The software development world is full of metaphorical hoarders, to go back to the theme of the first paragraph. They’re serial planners and pleasers that want a fridge full of every imaginable code casserole they or their guests might ever need. YAGNI is a great mantra for these people and for snapping them out of it. When you go sit to pair or work with someone who asks you for advice on some method, it’s a good reality check to say, “dude, YAGNI, so stop writing it.”

But once that person gets YAGNI — really groks it by adopting a practice like TDD or by developing a knack for getting things working and refactoring to refinement — there are diminishing returns to this piece of advice. While it might still occasionally serve as a reminder/wake-up call, it starts to become a possibly counterproductive oversimplification. I’d say that this is a great, pithy phrase to tack up on the wall of your shop as a reminder, especially if there’s been an over-planning and gold-plating trend historically. But if you do that, beware local maxima.


Let’s Take This Thing Apart

My garbage disposal stopped working recently. I turned it on and could hear only the strained hum of the motor, so naturally I proceeded straight to the internet. I usually do that to see what others think of a situation, though my guess was that something was jamming up the works and preventing the motor from spinning. I went and got a sturdy piece of wood from the scrap pile in my work room and tried to force the disposal to turn, but no dice. That was a few days ago.

Tonight, I decided I’d had enough of using a strainer to gather up food debris and throw it away every day or two. When I had trouble finding work out of college during the dot-com bubble burst, I scraped by and lived in a tiny studio apartment in Chicago with no garbage disposal and one of those freezers that would frost over with ice. I care to repeat nothing of that experience. So I killed the power to the disposal at the circuit breaker and started popping off the plumbing and electrical connections. I hauled the thing down to my basement and set to work on it. As I dealt with the tedious flat-head screw removal and O-ring jockeying involved, my mind began to wander a bit.

One thing that ran through my head was that it had never occurred to me to call someone about this problem, and I started to wonder why not. I’ve never taken apart a garbage disposal before, and I don’t have the extra PVC pipe lying around to hook up as a replacement for the disposal in a worst-case scenario. If I had failed, there would have been no kitchen sink until I succeeded (which would be at least a few days, since I’m at work tomorrow and then taking a brief trip out of town). What’s my deal that I’d just say, “well, let’s start popping screws off and see what happens?”

What I realized is this: I think this mindset is at the core of the engineering mind and drives at what makes programmers successful (and/or driven workaholics). There’s a desire to deconstruct and to understand the core functionality of things. There’s a desire to look at things logically and to understand the procedures for disassembling and reassembling them. There’s a desire to know how the world works, and furthermore there’s a bit of underlying confidence that, when it comes to operating in that world, you’ll figure something out.

It’s my opinion that when you look at code, it’s important to think that way as well. Why is this framework call slow? Well, let’s disassemble it and find out. Is this method call necessary? Let’s delete it and see what breaks. Can we improve the structure here? Let’s roll up our sleeves and start refactoring. My attitude toward my garbage disposal is the same as my attitude toward code, and I think it’s a decent operating paradigm. Rather than calling in someone when I hit a snag, I use it as a learning opportunity, put in some extra hours, get it done and improve my capabilities. I did that tonight. I’d chiseled out a stuck piece of plastic and hooked everything up for a working disposal by 1 AM. I do things like that with code all the time (often at similar hours).

Of course, there’s a dark side to this. If while working on my plumbing I bend a supply line and somehow cause a leak back behind the drywall, I have pretty big problems. If I have to cut off water to the house and then go to work, it might be “get a hotel room and call a contractor” time. If you don’t know when you’re outgunned by life, you run the risk of going from confident and inquisitive to foolish or at least a bit reckless. I’ve probably crossed that line a few times, particularly in code when I decide I have no use for some legacy namespace and I’ll just rewrite the whole thing myself over the weekend. But I think that, on the whole, if you take the “let’s pull this thing apart and see where it leads us” approach to programming, you’ll probably have a pretty successful career.


Born to Exclude: Beware of Monoculture

Understanding the Idea of Monoculture

One morning last week, I was catching up on backlogged podcasts in my Doggcatcher feed on the way to work and was listening to an episode of Hanselminutes where Scott Hanselman interviewed a front end web developer named Garann Means. The subject of the talk was what they described as “developer monoculture,” and they talked about the potentially off-putting effect on would-be developers of asking them to fill a sort of predefined, canned “geek role.”

In other words, it seems as though there has come to be a standard set of “developer things” that developers are expected to embrace: Star Trek, a love of bacon (apparently), etc. Developers can identify one another in this fashion and share some common ground, in a “talk about the weather around the water cooler” kind of way. But this seems to go beyond simply inferring a common set of interests and into the realm of considering those interests table stakes for a seat at the geek table. In other words, developers like fantasy/sci-fi, bacon, that Big Bang Theory show (I tried to watch this once or twice and found it insufferable), etc. If you’re a real developer, you’ll like those things too. This is the concept of monoculture.

The podcast had weightier issues to discuss than the simple “you can be a developer without liking Star Trek,” though. It discussed how the expected conformance to some kind of developer archetype might deter people who didn’t share those interests from joining the developer community. People might even suppress interests that are wildly disparate from what’s normally accepted among developers. Also mentioned was that smaller developer communities such as Ruby or .NET trend even more toward their own more specific monoculture. I enjoyed this discussion in sort of an abstract way. Group dynamics and motivation is at least passingly interesting to me, and I do seem to be expected to do or like some weird things simply because I write software for a living.


At one point in the discussion, however, my ears perked up as they started to discuss what one might consider, perhaps, a darker side of monoculture–that it can be deliberately instead of accidentally exclusionary. That is, perhaps there’s a point where you go from liking and including people because you both like Star Trek to disliking and excluding people because they don’t. And that, in turn, leads to a velvet rope situation: not only are outsiders excluded, but even those with similar interest are initially excluded until they prove their mettle–hazing, if you will. Garann pointed out this dynamic, and I thought it was insightful (though not specific to developers).

From there, they talked about this velvet-roping existing as a result of people within the inner sanctum feeling that they had some kind of ‘birthright’ to be there, and this is where I departed in what had just been general, passive agreement with the points being made in the podcast. To me, this characterization was inverted–clubhouse sitters don’t exclude and haze people because of a tribal notion that they were born into an “us” and the outsiders are “them.” They do it out of insecurity, to inflate the value of their own experiences and choices in life.

A Brush with Weird Monoculture and what it Taught Me

When I first met my girlfriend, she was a bartender. As we started dating, I would go to the bar where she worked and sit to have a beer, watch whatever Chicago sports team was playing at the time, and keep her company when it was slow. After a while, I noticed that there was a crowd of bar flies that I’d see regularly. Since we were occupying the same space, I made a few half-hearted efforts to be social and friendly with them and was rebuffed (almost to my relief). The problem was, I quickly learned, that I hadn’t logged enough hours or beers or something to be in the inner circle. I don’t think that this is because these guys felt they had a birthright to something. I think it’s because they wanted all of the beers they’d slammed in that bar over the course of decades to count toward something besides cirrhosis. If they excluded newbies, youngsters, and non-serious drinkers, it proved that the things they’d done to be included were worth doing.

So why would Star-Trek-loving geeks exclude people that don’t know about Star Trek? Well, because they can, with safety in numbers. Also because it makes the things that they enjoy and for which they had likely been razzed in their younger days an asset to them when all was said and done. Scott talked about being good at programming as synonymous with revenge–discovering a superpower a la Spider-Man and realizing that the tables had turned. I think it’s more a matter of finding a group that places a radically different value on the same old things that you’ve always liked doing and enjoying that fact. It used to be that your skills and interests were worthless while those of other people had value, but suddenly you have enough compatriots to reposition the velvet rope more favorably and to demonstrate that there was some meaning behind it all. Those games of Dungeons and Dragons all through high school may have been the reason you didn’t date until twenty, but they’re also the reason that you made millions at that startup you founded with a few like-minded people. That’s not a matter of birthright. It’s a matter of desperately assigning meaning to the story of your life.

Perhaps I’ve humanized mindless monoculture a bit here. I hope so, because it’s essentially human. We’re tribal and exclusionary creatures, left to our baser natures, and we’re trying to overcome that cerebrally. But while it may be a sympathetic position, it isn’t a favorable or helpful one. We can do better. I think that there are two basic kinds of monoculture: inclusive, weather-conversation-like inanity monoculture (“hey, everyone loves bacon, right?!?”) and toxic, exclusionary self-promotion. In the case of the former, people are just awkwardly trying to be friendly. I’d consider this relatively harmless, except for the fact that it may inadvertently make people uncomfortable here and there.

The latter kind of monoculture dovetails heavily into the kind of attitudes I’ve talked about in my Expert Beginner series of posts, where worth is entirely identity-based and artificial. I suppose I perked up so much at this podcast because the idea of putting your energy into justifying why you’ve done enough, rather than doing more, is fundamental to the yet-unpublished conclusion of that series of posts. If you find yourself excluding, deriding, hazing, or demanding dues-paying of the new guy, ask yourself why. I mean really, critically ask yourself. I bet it has everything to do with you (“I went through it too,” and, “it wouldn’t be fair for someone just to walk in and be on par with me”) and nothing to do with him. Be careful of this, as it’s the calling card of small-minded mediocrity.