DaedTech

Stories about Software


Bike Sheds, Ducks, and Nuclear Reactors

The other day, I learned a new term, and it was great. I enjoyed this so much because it was a term for a concept I was familiar with but for which I had never had a word. I haven’t been this satisfied with a new word since “taking pleasure in seeing bad things happen to your enemies” was replaced with “schadenfreude.” I was listening to an episode of the Herding Code podcast from this past summer, and Nik Molnar described something called “bike shedding.”

This colloquialism derives from an argument made by C. Northcote Parkinson, later dubbed “Parkinson’s Law of Triviality”: within organizations, more debate surrounds trivial issues than extremely important ones.

Parkinson’s Law, Explained

Here’s a quick recap of his argument. He discusses a fictional committee with three items on its agenda: approving an atomic reactor, approving a bike shed, and approving a year’s supply of refreshments for committee meetings. The reactor is approved with almost no discussion since it is so staggeringly complicated and expensive that no one can really wrap their heads around it. A lot more arguing will be done about the bike shed, since committee members are substantially more likely to understand the particulars of construction, the cost of materials, etc. The most arguing, however, will be reserved for the subject of which drinks to have at the next meeting, since everyone understands and has an opinion about that.

This is human nature as I’ve experienced it, almost without exception. I can’t tell you how many preposterous meeting discussions have shaved precious hours off of my life while people argue about the most ridiculous things. One of the most common that comes to mind is people who are about to drive somewhere 20 minutes away spending 5-10 minutes arguing about which streets to take en route.

Introducing a Duck

In the life of a programmer, this phenomenon often has special and specific significance. On the Wikipedia page I linked, I also noticed a reference to a fun Jeff Atwood post about awesome programming terms that had been coined by respondents to an old Stack Overflow question. Take a look at number five, a “duck”:

A feature added for no other reason than to draw management attention and be removed, thus avoiding unnecessary changes in other aspects of the product.

I don’t know if I actually invented this term or not, but I am certainly not the originator of the story that spawned it.

This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to PMs) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn’t, they weren’t adding value.

The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen’s animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the “actual” animation.

Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, “that looks great. Just one thing – get rid of the duck.”

Ducks, Applied

When I saw this, I actually guffawed at my desk. I didn’t do that just because I found this funny — I have actually employed this strategy in the past, and I see that I am not alone in my rank cynicism in a world governed by Parkinson’s Law of Triviality. It definitely seems that project management types, particularly ones that were never technical (or at least never any good at it), feel a great need to seize upon something that they can understand and offer an opinion on that thing in the form of an edict, basically to assert some kind of dominance.

So, early in my career, when proposing technical plans I developed a habit of dropping a few red herring mistakes into the mix to make sure that obligatory posturing was dispensed with and any remaining criticisms were legitimate — I’d purposely do something like throw in a “we’re not going to be doing any documentation because of a lack of time” to which I would receive the admonition, “documentation is important, so add it in.” “Okie-dokie. Moving on.”

[Image: a nuclear duck]

Bike Shedding in the Developer World

It isn’t just pointy-haired fire-hydrant peeing that exposes us to this phenomenon, however. We, as developers, are some of the worst offenders. The Wikipedia article also offers up something called “Wadler’s Law,” which appears to be a corollary to Parkinson’s in that it says developers are more likely to argue over language syntax than semantics.

In other words, you’ll get furious arguments over whether to use underscores between words in function names, but often hear crickets when you ask if the function is part of a consistent broader abstraction. My experience aligns with this as well. I can think of so, so many depressing code reviews that were all about “that method doesn’t have a doc comment” or “why are you using var” or “alphabetize your includes.” I’d offer up things like “let’s look at how cohesive the types are in this namespace,” and, again, crickets.

The great thing about opinions is that they’re an endlessly renewable, free resource. You can have as many as you like about anything you like, and no one can tell you not to (at least in societies that aren’t barbarically oppressive).

But what isn’t endless is people’s interest in your opinions. If you’re the person that’s offering one up intensely and about every subject as the conversation drifts from code to personal finances to football to craft beers, you’re devaluing your own currency. As you discuss and debate, be mindful of the stakes at play and be sparing when it comes to how many times you sit at the penny slots. After all, you don’t want to be the one remembered for furiously debating camel case and coffee flavors while rubber stamping the plans for Chernobyl.


Beware of The Magnetars in Your Codebase

Lately, I’ve been watching a lot of “How the Universe Works” and other similar shows about astronomy. I’ve been watching them a lot, as in, I think I have some kind of problem. I want to watch them and find them fascinating and engaging and yet I also seem suddenly to be unable to fall asleep without them on.

Last night, I was indulging this strange problem when I saw what has to be the single most intense thing in the universe: a magnetar. Occasionally, when a massive star runs out of fuel in its core, it explodes as a supernova and spews matter and radiation everywhere, sending concussive shock waves hurtling out into the universe. In the aftermath, the rest of the star that doesn’t escape collapses in on itself into an unimaginably dense thing called a “neutron star,” which is the size of Manhattan but weighs as much as the sun (for perspective, a sugar cube of neutron star would weigh as much as all of the people on earth combined).

One particularly exotic type of neutron star is called a magnetar. It’s a neutron star with a magnetic field of absolutely mind-boggling strength and a crust made out of solid iron (but up to 10 billion times stronger than steel, thanks to the near-black-hole-like gravity of the star crushing imperfections out of the crystals that form the crust). A magnetar is so intensely magnetized that if the moon were a magnetar (and forget the gravity for a moment), it would tear the watch off of your wrist and render your credit cards useless. This thing rotates many times per second, whipping its magnetic field into a frenzy and sloshing the ultra-dense neutron goo that makes up its core into a froth until the internal pressure causes something called a “starquake,” which, if it were measured on the Richter scale, would be a 32. When these starquakes happen, the result is that the magnetar spews a torrent of radiation so powerful that it has a profound effect on the earth’s magnetic field and atmosphere from halfway across the Milky Way.

So to recap, a magnetar is a tiny thing leftover from a huge event that’s not really visible or particularly noticeable from a distance. At least, it isn’t noticeable until the unimaginable destructive force roiling in its bowels is randomly unleashed, and then it pretty much annihilates anything in its close vicinity and has a profound effect universally.

[Image: a magnetar, courtesy of Wikipedia]

I was idly thinking about this concept today while looking at some code, and I realized something. How many projects do you work on where there’s some kind of scramble to get some new feature in ahead of schedule, to absorb scope creep and last-minute changes, or to slam some kind of customization into production for a big client with a minimum of testing? Whether this goes well or poorly, the result is generally spectacular.

And when the dust settles and everyone has taken their two or three weeks off, come down from the ledge and breathed a sigh of relief, the remnant of the effort is often some quiet, dense, unapproachable and dangerous bit of code pulsing in the middle of your code base. You don’t get too near it for fear that it will tear the watch off of your wrist or result in a starquake — okay, more accurately, that it will introduce some nasty regression bug — and you just kind of leave it there to rotate and pulse ominously.

Much later, when you’ve pretty well forgotten it, it erupts and unleashes a torrent of devastation into your team’s life. One day you suddenly recall (one day too late) that if you don’t log into that one SQL server box and restart that scheduled task on any March 1st not in a leap year, all 173,224 users of the Initrode account are suddenly unable to log into anything in their ERP system, and they’re planning a shipment of medical supplies to hurricane victims and abused puppies. You’ve had all of the atoms in your organization pulverized out of existence by the flare of a magnetar in your code base.

How do you avoid this fate? I’ll give you a list of two:

  1. Do the right thing now.
  2. Push back against creating the situation in the first place.

The first one is the more politically tenable one in organizations. The business is going to do what the business is going to do, and that’s to allow sales to promise clients a cure for cancer by June 15th if they promise to pitch in personally for steak dinners for the dev team, on their honor. It can be hard to push back against that, so what you can do is ride the storm out and then make sure you carve out time to repair the damage when the dust settles. Don’t let that rogue task threaten your very existence once a year (but not in leap years). And don’t cop out by documenting it on a wiki somewhere. Do the right thing and write some code that automates whatever it is that needs to happen, on the schedule it needs to happen. While you’re at it, automate some sort of reminder scheme for monitoring purposes and some fault tolerance, since this seems pretty important. You may have needed to hack something out to meet your deadline, but there’s nothing saying you have to live with that and let it spin and pulse its way to bursting anger.
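To make that concrete, here is a minimal sketch of what automating the rogue task from above might look like. Everything in it is invented for illustration: the class name, the restart and notification methods, all of it.

using System;

public static class ErpTaskGuard
{
    public static void Main()
    {
        var today = DateTime.Today;

        // The rule that used to live in someone's head: every March 1st in a
        // non-leap year, the scheduled task on the SQL server box needs a restart.
        bool isTriggerDay = today.Month == 3 && today.Day == 1 && !DateTime.IsLeapYear(today.Year);
        if (!isTriggerDay)
            return;

        try
        {
            RestartErpScheduledTask();
        }
        catch (Exception ex)
        {
            // Fault tolerance and monitoring: if the automation fails, a human
            // hears about it today, not when the Initrode users are locked out.
            NotifyTeam("ERP task restart failed: " + ex.Message);
            throw;
        }
    }

    // Stand-ins for whatever the real restart and notification mechanisms are.
    private static void RestartErpScheduledTask() { /* invoke the real restart here */ }
    private static void NotifyTeam(string message) { Console.Error.WriteLine(message); }
}

Run something like that daily from any scheduler, and the March 1st ritual stops depending on anybody’s memory.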

The better solution, though, is to push back on the business and not let supernovae into your development process in the first place. This is hard, but it’s the right path. Instead of disarming volatile things that you’ve introduced in a pinch, avoid introducing them altogether. Believe it or not, this is a skill that actually takes practice because it involves navigating office-political terrain beyond simply explaining things to someone in rational fashion and prevailing upon their good judgment.

But really, I could boil these two points down to one single thing that logically implies both: care about the fate of the project and the codebase. If you invest yourself in it and truly care about it, you’ll find that you naturally avoid letting people introduce explosive forces in the first place. You certainly don’t allow alien, stealth deathbombs to fester in it, waiting to spew radiation at you. Metaphorical radiation, that is. Unless you code for a nuclear power company. Then, real radiation.


The 7 Habits of Highly Overrated People

I remember having a discussion with a more tenured coworker, with the subject being the impending departure of another coworker. I said, “man, it’s going to be rough when he leaves, considering how much he’s done for us over the last several years.” The person I was talking to replied in a way that perplexed me. He said, “when you think about it, though, he really hasn’t done anything.” Ridiculous. I immediately objected and started my defense:

Well, in the last release, he worked on… that is, I think he was part of the team that did… or maybe it was… well, whatever, bad example. I know in the release before that, he was instrumental in… uh… that thing with the speed improvement stuff, I think. Wait, no, that was Bill. He did the… maybe that was two releases ago, when he… Holy crap, you’re right. He doesn’t do anything!

How did this happen? Meaning, how did I get this so wrong? Am I just an idiot? It could be, except that fails as an explanation for this particular case because the next day I talked to someone who said, “boy, we’re sure going to miss him.” It seemed I was not alone in just assuming that this guy had been an instrumental cog in the work of the group when he had really, well, not been.

In the time that has passed since that incident, I’ve paid attention to people in groups and collaborating on projects. I’ve had occasion to do this as a team member and a team lead, as a boss and a line employee, as a consultant and as a team member collaborating with consultants, and just about everything else you can think of. And what I’ve observed is that this phenomenon is not a function of the people who have been fooled but the person doing the fooling. When you look at people who wind up being highly overrated, they share certain common habits.

If you too want to be highly overrated, read on. Being overrated can mean that you’re mediocre but people think that you’re great, or it can mean that you’re completely incompetent but nestle in somewhere and go unnoticed, doing, as Peter Gibbons in Office Space puts it, “just enough not to get fired.” The common facet is that there’s a sizable deficit between your actual value and your perceived value — you appear useful while actually being relatively useless. Here’s how.

[Image: Tom Sawyer]

1. “Overcommunicate”

I’m putting this term in quotes because it was common enough at one place I worked to earn a spot on a corporate BS Bingo card, but I’ve never heard it anywhere else. I don’t know exactly what people there meant by it, and for all I know, neither do they, so I’m going to reappropriate it here. If you want to seem productive without doing anything useful, then a great way to do so is to make lots of phone calls, send lots of emails, create lots of memos, etc.

A lot of people mistake activity for productivity, and you can capitalize on that. If you send one or two emails a day, summarizing what’s going on with a project in excruciating detail, people will start to think of you as that vaguely annoying person who has his fingers on the pulse all of the time. This is an even better strategy if you make the rounds, calling and talking to people to get status updates as to what they’re doing before sending an email.

Now, I know what you’re thinking — that might actually be productive. And, well, it might be, nominally so. But do you notice that you’ve got a very tangible plan of action here and there’s been no mention of what the project actually involves? A great way to appear useful without being useful is to engage heavily in an activity completely orthogonal to the actual goal.

2. Be Bossy and Critical

Being an “overcommunicator” is a good start, but you can really drive your phantom value home by ordering people around and being hypercritical. If your daily (or hourly) status report is well received, just go ahead and start dropping instructions in for the team members. “We’re getting a little off schedule with our reporting, so Jim, it’d be great if you could coordinate with Barbara on some checks for report generation.” Having your finger on the pulse is one thing, but creating the pulse is a lot better. Now, you might wind up getting called out on this if you’re in no position of actual authority, but I bet you’d be surprised how infrequently this happens. Most people are conflict avoiders and reconcilers and you can use that to your advantage.

But if you do get called out (or even if you don’t), just get hypercritical. “Oh my God, Jim and Barbara, what is up with the reports! Am I going to have to take this on myself?!” Don’t worry about doing the actual work yourself — that’s not part of the plan. You’re just making it clear that you’re displeased and using a bit of shaming to get other people to do things. This shuts up people inclined to call you out on bossiness because they’re going to become sidetracked by getting defensive and demonstrating that they are, in fact, perfectly capable of doing the reports.

3. Shamelessly Self Promote

If a deluge of communication and orders and criticisms isn’t enough to convince people how instrumental you are, it never hurts just to tell them straight out. This is sort of like “fake it till you make it” but without the intention of getting to the part where you “make it.” Whenever you send out one of your frequent email digests, walk around and tell people what hard work it is putting together the digests, saying things like, “I’d rather be home with my family than staying until 10 PM putting those emails together, but you know how it is — we’ve all got to sacrifice.” Don’t worry, the 10:00 part is just a helpful “embellishment” — you don’t actually need to do things to take credit for them (more on that later).

Similarly, if you are ever subject to any criticisms, just launch a blitzkrieg of things that you’ve done at your opponent and suggest that everyone can agree how awesome those things are. List every digest email you’ve sent over the last month, and mention the time you sent each one. By the fifth or sixth email, your critic will just give up out of sheer exasperation and agree that your performance has been impeccable.

4. Distract with Arguments about Minutiae

If you’re having trouble making the mental leap to finding good things about your performance to mention, you can always completely derail the discussion. If someone mentions that you haven’t checked in code in the last month, just point out that in the source control system you’re using, technically, “check in” is not the preferred verbiage. Rather, you “promote code.” The distinction may not seem important, but the importance is subtle. It really goes to the deeper philosophy of programming or, as some might call it, “the art of software engineering.” Now, when you’ve been doing this as long as I have, you’ll understand that code promotions… ha! You no longer have any idea what we were talking about!

This technique is not only effective for deflecting criticism but also for putting the brakes on policy changes that you don’t like and on your peers getting credit for their accomplishments. Sure, Susan might have gotten a big feature in ahead of schedule, but a lot of her code is using a set of classes that some have argued should be deprecated, which means that it might not be as future-proof as it could be. Oh, and you’ve run some time trials and feel like you could definitely shave a few nanoseconds off of the code that executes between the database read and the export to a flat file.

5. Time It So You Look Good (Or Everyone Else Looks Bad)

If you ever wind up in the unfortunate position of having to write some code, you can generally get out of it fairly easily. The most tried and true way is for the project to be delayed or abandoned, and you can do your part to make that happen while making it appear to be someone else’s fault. One great way to do that is to create a huge communication gap that appears to be everyone’s fault but yours.

Here’s what I mean. Let’s say that you’re working with Bill and Bill goes home every night at 6:00 PM. At 6:01, send Bill an email saying that you’re all set to get to work, but you just need the class stub from him to get started. Sucker. Now 15 hours are going to pass where he’s the bottleneck before he gets in at 9:00 the next morning and responds. If you’re lucky and you’ve buried him in digest emails, you might even get an extra hour or two.

If Bill wises up to your game and stays a few extra minutes, start sending those emails at like 10:00 PM from home. After all, what’s it to you? It takes just as little effort not to work at 6:00 as it does at 10:00. Now, you’ve given up a few hours of response time, but you’re still sitting pretty at 11 hours or so, and you can now show people that you work pretty much around the clock and that if you’re going to be saddled with an idiot like Bill who waits 12 hours to get you critical information, you pretty much have to work around the clock.

6. Plan Excuses Ahead of Time

This is best explained with an example. Many years ago, I worked as lead on a project with an offshore consultant who was the Rembrandt of pre-planned excuses. This person’s job title was some variant of “Software Engineer” but I’m not sure I ever witnessed software or engineering even attempted. One morning I came in and messaged him to see if he’d made progress overnight on a task I’d set him to work on. He responded by asking if I’d seen his email from last night. I hadn’t, so I checked. It said, “the clock is wrong, and I can’t proceed — please advise.”

After a bit of back and forth, I came to realize that he was referring to the clock in the taskbar on his desktop. I asked him how this could possibly be relevant and what he told me was that he wasn’t sure how the clock being off might affect the long-running upload that was part of the task, and that since he wasn’t familiar with Slackware Linux, he didn’t know how to adjust the clock. I kid you not. A “software engineer” couldn’t figure out how to change the time on his computer and thought that this time being wrong would adversely affect an upload that in no way depended on any kind of timestamp or notion of time. That was his story, and he was sticking with it.

And it is actually perfect. It’s exasperating but unassailable. After all, he was a “complete expert in Windows and several different distributions of Linux,” but Slackware was something he hadn’t been trained in, so how could he possibly be expected to complete this impossible task without me giving him instructions? And, going back to number five, where had I been all night, anyway? Sleeping? Pff.

7. Take Credit in Non-Disprovable Ways

The flip side of pre-creating explanations for non-productivity so that you can sit back in a metaphorical hammock and be protected from accusations of laziness is to take credit inappropriately, but in ways that aren’t technically wrong. A good example of this might be to make sure to check in a few lines of code to a project that appears as though it will be successful so that your name automatically winds up on the roster of people at the release lunch. Once you’re at that lunch, no one can take that credit away from you.

But that’s a little on the nose and not overly subtle. After all, anyone looking can see that you added three lines of white space, and objective metrics are not your friends. Do subjective things. Offer a bunch of unsolicited advice to people and then later point out that you offered “leadership and mentoring.” When asked later at a post mortem (or deposition) whether you were a leader on the project, people will remember those moments and say, grudgingly and with annoyance, “yeah, I guess you could say that.” And, that’s all you’re after. If you’re making sure to self-promote as described in section three, all you really need here is a few people that won’t outright say that you’re lying when asked about your claims.

Is This Really For You?

Let me tell you something. If you’re thinking of doing these things, don’t. If you’re currently doing them, stop. I’m not saying this because you’ll be insufferable (though you will be) and I want to defend humanity from this sort of thing. I’m offering this as advice. Seriously. These things are a whole lot more transparent than the people who do them think they are, and acting like this is a guaranteed way to have a moment in life where you wonder why you’ve bounced around so much, having so much trouble with the people you work with.

A study I once read on the nature of generosity said that appearing generous conferred an evolutionary advantage: generous people were more likely to be the recipients of help during lean times. It also turned out that the best way to appear generous was actually to be generous, since false displays of generosity were usually discovered and resulted in ostracism and a substantially worse outcome than simply being miserly. It’s the same thing in the workplace with effort and competence. If you don’t like your work or find it overwhelming, then consider doing something else or finding an environment that’s more your speed rather than being manipulative or playing games. You and everyone around you will be better off in the end.



Wasted Talent: The Tragedy of the Expert Beginner

Back in September, I announced the Expert Beginner e-book. In that same post, I promised to publish the conclusion to the series around year-end, so I’m now going to make good on that promise. If you like these posts, you should definitely give the e-book a look, though. It’s more than just the posts strung together — it shuffles the order, changes the content a touch, and smooths them into one continuous story.

But, without further ado, the conclusion to the series:

The real, deeper sadness of the Expert Beginner’s story lurks beneath the surface. The sinking of the Titanic is sharply sad because hubris and carelessness led to a loss of life, but the sinking is also sad in a deeper, more dull and aching way because human nature will cause that same sort of tragedy over and over again. The sharp sadness in the Expert Beginner saga is that careers stagnate, culminating in miserable life events like dead-end jobs or terminations. The dull ache is the endlessly mounting deficit between potential and reality, aggregated over organizations, communities and even nations. We live in a world of “ehhh, that’s probably good enough,” or, perhaps more precisely, “if it ain’t broke, don’t fix it.”

There is no shortage of literature on the subject of “work-life balance,” nor of people seeking to split the difference between the stereotypical, ruthless executive with no time for family and the “aim low,” committed family type that pushes a mop instead of following his dream, making it so that his children can follow theirs. The juxtaposition of these archetypes is the stuff that awful romantic dramas starring Katherine Heigl or Jennifer Lopez are made of. But that isn’t what I’m talking about here. One can intellectually stagnate just as easily working eighty-hour weeks or intellectually flourish working twenty-five-hour ones.

I’m talking about the very fabric of Expert Beginnerism as I defined it earlier: a voluntary cessation of meaningful improvement. Call it coasting or plateauing if you like, but it’s the idea that the Expert Beginner opts out of improvement and into permanent resting on one’s (often questionable) laurels. And it’s ubiquitous in our society, in large part because it’s encouraged in subtle ways. To understand what I mean, consider institutions like fraternities and sororities, institutions granting tenure, multi-level marketing outfits, and often corporate politics with a bias toward rewarding loyalty. Besides some form of “newbie hazing,” what do these institutions have in common? Well, the idea that you put in some furious and serious effort up front (pay your dues) to reap the benefits later.

This isn’t such a crazy notion. In fact, it looks a lot like investment and saving the best for last. “Work hard now and relax later” sounds an awful lot like “save a dollar today and have two tomorrow,” or “eat all of your carrots and you can enjoy dessert.” For fear of getting too philosophical and prying into religion, this gets to the heart of the notion of Heaven and the Protestant Work Ethic: work hard and sacrifice in the here and now, and reap the benefits in the afterlife. If we aren’t wired for suffering now to earn pleasure later, we certainly embrace and inculcate it as a practice, culturally. Who is more a symbol of decadence than the procrastinator–the grasshopper who enjoys the pleasures of the here and now without preparing for the coming winter? Even as I’m citing this example, you probably summon some involuntary loathing for the grasshopper for his lack of vision and sobriety about possible dangers lurking ahead.

A lot of corporate culture creates a manufactured, distorted version of this with the so-called “corporate ladder.” Line employees get in at 8:30, leave at 5:00, dress in business-casual garb, and usually work pretty hard or else. Managers stroll in at 8:45 and sometimes cut out a little early for this reason or that. They have lunches with the corporate credit card and generally dress smartly, but if they have to rush into the office, they might be in jeans on a Thursday and that’s okay. C-level executives come and go as they please, wear what they want, and have you wear what they want. They play lots of golf.

There’s typically not a lot of illusion that those in the positions of power work harder than line employees in the sense that they’re down operating drill presses, banging out code, doing data entry, crunching numbers, etc. Instead, these types are generally believed to be the ones responsible for making the horrible decisions that no one else would want to make and never being able to sleep because they are responsible for the business 24/7. In reality, they probably whack line employees without a whole lot of worry and don’t really answer that call as often as you think. Life gets sweeter as you make your way up, and not just because you make more money or get to boss people around. The C-level executives…they put in their time working sixty-hour weeks and doing grunt work specifically to get the sweet life. They earned it through hard work and sacrifice. This is the defining narrative of corporate culture.

But there’s a bit of a catch here. When we culturally like the sound of a narrative, we tend to manufacture it even when it might not be totally realistic. For example, do we promote a programmer who pours sixty hours per week into his job for five years to manager because he would be good at managing people or because we like the “work hard, get rewarded” story? Chicken or egg? Do we reward hard work now because it creates value, or do we manufacture value by rewarding it? I’d say, in a lot of cases, it’s fairly ingrained in our culture to be the latter.

In this day and age, it’s easy to claim that my take here is paranoid. After all, the days of fat pensions and massive union graft have fallen by the wayside, and we’re in some market, meritocratic renaissance, right? Well, no, I’d argue. It’s just that the game has gotten more distributed and more subtle. You’ll bounce around between organizations, creating the illusion of general market merit, but in reality, there is a form of subconscious collusion. The main determining factor in your next role is your last role. Your next salary will probably be five to ten percent more than your last one. You’re on the dues-paying train, even as you bounce around and receive nominally different corporate valuations. Barring aberration, you’re working your way, year in and year out, toward an easier job with nicer perks.

But what does all of this have to do with the Expert Beginner? After all, Expert Beginners aren’t CTOs or even line managers. They’re, in a sense, just longer-tenured grunts that get to decide what source control or programming language to use. Well, Expert Beginners have the same approach, but they aim lower in the org chart and have a higher capacity for self-delusion. In a real sense, management and executive types are making an investment of hard work for future Easy Street, whereas Expert Beginners are making a more depressing and less grounded investment in initial learning and growth for future stagnation. They have a spasm of marginal competence early in their careers and coast on the basis of this indefinitely, with the reward of not having to think or learn and having people defer to them exclusively because of corporate politics. As far as rewards go, this is pretty Hotel California. They’ve put in their time, paid their dues, and now they get to reap only the meager rewards of intellectual indolence and ego-fanning.

In terms of money and notoriety, there isn’t much to speak of either. The reward they receive isn’t a Nobel Prize or a world championship in something. It’s not even a luxury yacht or a star on the Walk of Fame. We have to keep getting more modest. It’s not a six-bedroom house with a pool and a Lamborghini. It’s probably just a run-of-the-mill upper-middle-class life with one nice vacation per year and the prospect of retiring and taking that trip they’ve always wanted, a visit to Rome and Paris. They’ve sold their life’s work, their historical legacy, and their very existence for a Cadillac, a nice set of woods and irons, a tasteful ranch-style house somewhere warm, and a trans-Atlantic flight or two in retirement. And that–that willingness to have a low ceiling and that short-changing of one’s own potential–is the tragedy of the Expert Beginner.

Expert Beginners are not dumb people, particularly given that they tend to be knowledge workers. They are people who started out with a good bit of potential–sometimes a lot of it. They’re the bowlers who start at 100 and find themselves averaging 150 in a matter of weeks. The future looks pretty bright for them right up until they decide not to bother going any further. It’s as if Michael Jordan had decided that playing some pretty good basketball in high school was better than what most people did, or if Mozart had said, “I just wrote my first symphony, which is more symphonies than most people write, so I’ll call it a career.” Of course, most Expert Beginners don’t have such prodigious talent, but we’ll never hear about the accomplishment of the rare one that does. And we’ll never hear about the more modest potential accomplishments of the rest.

At the beginning of the saga of the Expert Beginner, I detailed how an Expert Beginner can sabotage a group and condemn it to a state of indefinite mediocrity. But writ large across a culture of “good enough,” the Tragedy of the Expert Beginner stifles accomplishments and produces dull tedium interrupted only by midlife crises. En masse in our society, they’ll instead be taking it easy and counting themselves lucky that their days of proving themselves are long past. And a shrinking tide lowers all boats.


Faking An Interface Mapping in Entity Framework

Entity Framework Backstory

In the last year or so, I’ve been using Entity Framework in the new web-based .NET solutions for which I am the architect. I’ve had some experience with different ORMs on different technology stacks, including some roll-your-own ones, but I have to say that Entity Framework and its integration with Linq in C# is pretty incredible. Configuration is minimal, and the choice of creating code, model or database first is certainly appealing.

One gripe that I’ve had, however, is that Entity Framework wants to bleed its way up past your infrastructure layer and permeate your application. The path of least resistance, by far, is to let Entity Framework generate entities and then use them throughout the application. This is often described, collectively, as “the model,” and it impedes your ability to layer the application. The most common form of this that I’ve seen is simply to have an Entity Framework context and allow that to be accessed directly in controllers, code-behind or view models. In a way, this is strangely reminiscent of Active Record, except that the in-memory joins and navigation operations are a lot more sophisticated. But the overarching point is still the same — not a lot of separation of concerns, and avoidance of domain modeling in favor of letting the database ooze its way into your code.

[Image: having your cake and eating it too]

I’m not a fan of this approach, and I very much prefer applications that are layered or otherwise separated in terms of concerns. I like there to be a presentation layer for taking care of presenting information to the user, a service layer to serve as the application’s API (for flexibility and acceptance testing), a domain layer to handle business rules, and a data access layer to manage persistence (Entity Framework actually takes care of this layer as-is). I also like the concept of “persistence ignorance,” where the rest of the application doesn’t have to concern itself with where persistent data is stored — it could be SQL Server, Oracle, a file, a web service… whatever. This renders the persistence model an ancillary implementation detail, which, in my opinion, is what it should be.

A way to accomplish this is to use the “Repository Pattern,” in which higher layers of the application are aware of a “repository” which is an abstraction that makes entities available. Where they come from to be available isn’t any concern of those layers — they’re just there when they’re needed. But in a lot of ways with Entity Framework, this is sort of pointless. After all, if you hide the EF-generated entities inside of a layer, you don’t get the nice query semantics. If you want the automation of Entity Framework and the goodness of converting Linq expressions to SQL queries, you’re stuck passing the EF entities around everywhere without abstraction. You’re stuck leaking EF throughout your code base… or are you?

Motivations

Here’s what I want. I want a service layer (and, of course, presentation layer components) that is in no way whatsoever aware of Entity Framework. In the project in question, we’re going to have infrastructure for serializing to files and calling out to web services, and we’re likely to do some database migration and make use of NoSQL technologies. It is a given that we need multiple persistence models. I also want at least two different versions of the DTOs: domain objects in the domain layer, hidden under the repositories, and model objects for binding in the presentation layer. In MVC, I’ll decorate the models with validation attributes and do other uniquely presentation layer things. In the WPF world, these things would implement INotifyPropertyChanged.
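To put that want in concrete terms, here is a minimal sketch, with entirely hypothetical names, of the kind of service layer code I’m after: no Entity Framework type anywhere in sight, but expression-based querying intact.

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Hypothetical shapes living in an interfaces-only assembly.
public interface ICustomer
{
    int Id { get; set; }
    string LastName { get; set; }
}

public interface IRepository<T> where T : class
{
    IEnumerable<T> Get(Expression<Func<T, bool>> predicate = null);
}

// A service layer class with no awareness of Entity Framework whatsoever.
// Whether ICustomers come from SQL Server, a file or a web service is
// somebody else's problem.
public class CustomerService
{
    private readonly IRepository<ICustomer> _customers;

    public CustomerService(IRepository<ICustomer> customers)
    {
        _customers = customers;
    }

    public IEnumerable<ICustomer> FindByLastName(string lastName)
    {
        return _customers.Get(c => c.LastName == lastName);
    }
}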

Now, it’s wildly inappropriate to decorate the things generated by Entity Framework this way or to have the domain layer (or any layer but the presentation layer) know about these presentation layer concepts: MVC validation, WPF GUI events, etc. So this means that some kind of mapping from EF to models and vice-versa is a given. I also want rich domain objects in the domain layer for modeling business logic. So that means that I have two different representations of any entity in two different places, which is a classic case for polymorphism. The question then becomes “interface implementation or inheritance?” And I choose interface.

My reasoning here is certainly subject to critique, but I’m a fan of creating an assembly that contains nothing but interfaces and having all of my layer assemblies take a dependency on that. So, the service layer, for instance, knows nothing about the presentation layer, but the presentation layer also knows nothing about the service layer. Neither one has an assembly reference to the other. A glaring exception to this is DTOs (and, well, custom exceptions). I can live with the exceptions, but if I can eliminate the passing around of vacuous property bags, then why not? Favoring interfaces also helps with some of the weirdness of having classes in various layers inherit from things generated by Entity Framework, which seems conspicuously fragile. If I want to decorate properties with attributes for validation and binding, I have to use EF to make sure that these things are virtual and then make sure to override them in the derived classes. Interfaces it is.
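For illustration, here’s a hedged sketch of how a presentation layer model might implement the same interface as the domain object while layering on its own concerns (again, all names hypothetical):

using System.ComponentModel.DataAnnotations;

// The shared interface from the types assembly (same hypothetical shape as in
// the earlier sketch).
public interface ICustomer
{
    int Id { get; set; }
    string LastName { get; set; }
}

// Presentation layer model: implements the interface and is free to decorate
// its properties with MVC validation attributes, with no EF inheritance
// weirdness and no virtual-property gymnastics.
public class CustomerModel : ICustomer
{
    public int Id { get; set; }

    [Required]
    [StringLength(50)]
    public string LastName { get; set; }
}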

Get Ur Dun!

So that’s great. I’ve decided that I’m going to use ICustomer instead of Customer throughout my application, auto-generating domain objects that implement an interface. That interface will be generated automatically and used by the rest of the application, including with full-blown query expression support that gets translated into SQL. The only issue with this plan is that every Google search that I did seemed to suggest this was impossible or at least implausible. EF doesn’t support that, Erik, so give it up. Well, I’m nothing if not inappropriately stubborn when it comes to bending projects to my will. So here’s what I did to make this happen.

I have three projects: Poc.Console, Poc.Domain, and Poc.Types. In Domain, I pointed an EDMX at my database and let ’er rip, generating the T4 for the entities and also the context. I then copied the Entity T4 template to the types assembly, where I modified it. In types, I added an “I” to the name of the class, changed it to be an interface instead of a class, removed all constructor logic, removed all complex properties and navigation properties, and removed all visibilities. In the domain, I modified the entities to get rid of complex/navigation properties and added an implementation of the interface of the same name. So at this point, all Foo entities now implement an identical IFoo interface. I made sure to leave Foo as a partial because these things will become my domain objects.
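The end result of all that T4 surgery boils down to something like the following for a hypothetical Customer table:

namespace Poc.Types
{
    // Generated from the copied entity template: same shape as the EF entity,
    // minus constructor logic and navigation properties, prefixed with an "I".
    public interface ICustomer
    {
        int Id { get; set; }
        string LastName { get; set; }
    }
}

namespace Poc.Domain
{
    // Generated part: the EF entity, modified to implement the interface.
    public partial class Customer : Poc.Types.ICustomer
    {
        public int Id { get; set; }
        public string LastName { get; set; }
    }

    // Hand-written part: left partial so that these entities can grow into
    // rich domain objects.
    public partial class Customer
    {
        public bool HasLastName()
        {
            return !string.IsNullOrEmpty(LastName);
        }
    }
}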

With this building, I wrote a quick repository POC. To do this, I installed the NuGet package for System.Linq.Dynamic, which is a really cool utility that lets you turn arbitrary strings into Linq query expressions. Here’s the repository implementation:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public class Repository<TEntity, TInterface> where TEntity : class, new() where TInterface : class
{
    private PlaypenDatabaseEntities _context;

    /// <summary>
    /// Initializes a new instance of the Repository class.
    /// </summary>
    public Repository(PlaypenDatabaseEntities context)
    {
        _context = context;
    }

    public IEnumerable<TInterface> Get(Expression<Func<TInterface, bool>> predicate = null)
    {
        IQueryable<TEntity> entities = _context.Set<TEntity>();
        if (predicate != null)
        {
            var predicateAsString = predicate.Body.ToString();
            var parameterName = predicate.Parameters.First().ToString();
            var parameter = Expression.Parameter(typeof(TInterface), predicate.Parameters.First().ToString());
            string stringForParseLambda = predicateAsString.Replace(parameterName + ".", string.Empty).Replace("AndAlso", "&&").Replace("OrElse", "||");
            var newExpression = System.Linq.Dynamic.DynamicExpression.ParseLambda<TEntity, bool>(stringForParseLambda, new[] { parameter });
            entities = entities.Where(newExpression);
        }

        foreach (var entity in entities)
            yield return entity as TInterface;
    }
}

Here’s the gist of what’s going on. I take an expression of IFoo and turn it into a string. I then figure out the parameter’s name so that I can strip it out of the string, since this is the form that will make ParseLambda happy. Along these same lines, I also need to replace “AndAlso” and “OrElse” with “&&” and “||” respectively. The former format is how expressions render themselves as strings when compiled, but ParseLambda looks for the more traditional expression combiners. Once it’s in a pleasing form, I parse it as a lambda, but with type Foo instead of IFoo. That becomes the expression that EF will use. I then query EF and cast the results back to IFoos.
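To see those intermediate forms, here’s a hypothetical predicate against the ICustomer interface sketched earlier, with approximations of the strings each step produces shown in comments:

Expression<Func<ICustomer, bool>> predicate = c => c.LastName == "Smith" && c.Id > 10;

// predicate.Body.ToString() produces something like:
//     ((c.LastName == "Smith") AndAlso (c.Id > 10))
// Stripping the "c." parameter prefix gives:
//     ((LastName == "Smith") AndAlso (Id > 10))
// Swapping the combiners then yields the form ParseLambda accepts:
//     ((LastName == "Smith") && (Id > 10))
// ParseLambda rebuilds that string as an expression typed against the concrete
// Customer entity rather than the ICustomer interface.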

Now, I’ve previously blogged that casting is a failure of polymorphism. And this is like casting on steroids and some hallucinogenic drugs for good measure. I’m not saying, “I have something the compiler thinks is an IFoo but I know is a Foo,” but rather, “I have what the compiler thinks of as a non-compiled code scheme for finding IFoos, but I’m going to mash that into a non-compiled scheme for finding Foos in a database, force it to happen and hope for the best.” I’d be pretty alarmed if not for the fact that I was generating interface and implementation at the same time, and that if I define some other implementation to pass in, it must have any and all properties that Entity Framework is going to want.
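For completeness, consuming the POC looks something like this, again assuming the hypothetical Customer/ICustomer pair from earlier:

using (var context = new PlaypenDatabaseEntities())
{
    var repository = new Repository<Customer, ICustomer>(context);

    // The caller works entirely in terms of the interface; the repository
    // quietly re-targets the expression at the EF entity behind the scenes.
    foreach (var customer in repository.Get(c => c.LastName == "Smith"))
        Console.WriteLine(customer.LastName);
}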

This is a proof of concept, and I haven’t lived with it yet. But I’m certainly going to try it out and possibly follow up with how it goes. If you’re like me and were searching for the Holy Grail of how to have DbSet<IFoo> or how to use interfaces instead of POCOs with EF, hopefully this helps you. If you want to see the T4 templates, drop a comment and I’ll put up a gist on GitHub.

One last thing to note is that I’ve only tried this with a handful of lambda expressions for querying, so it’s entirely possible that I’ll need to do more tweaking for some edge case scenarios. I’ve tried this with a handful of permutations of conjunction, disjunction, negation and numerical expressions, but what I’ve done is hardly exhaustive.

Happy abstracting!

Edit: Below are the gists:

  • Entities.Context.tt for the context. Modified to have DbSets of IWhatever instead of Whatever.
  • Entities.tt for the actual, rich domain objects that reside beneath the repositories. These guys have to implement IWhatever.
  • DTO.tt contains the actual interfaces. I modified this T4 template not to generate navigation properties at all because I don’t want that kind of rich relationship definition to be part of an interface between layers.