DaedTech

Stories about Software

Leading Software Teams of Humans

I’ve been working lately on a project where I handle the build and deployment processes, as well as supply code reviews and sort of serve as the steward of the code base and the person most responsible for it. Not too long ago, a junior developer delivered some changes to the code right before I had to roll out a deployment. I didn’t have time to review the changes or manually regression test with them in place (code-behind and GUI, so the unit tests were of no use), so I rolled them back and did the deployment.

That night, I logged back on and reviewed the code that I had deleted prior to the build. I read through it carefully and made notes, and then I went back to the version currently checked in and implemented a simpler version of the fix checked in by the junior developer. That done, I drafted an email explaining why I had reverted the changes. I also included in it some praise for getting the defect fix right, my own solution (which required less code), and a few pointers as to what made my solution, in my opinion, preferable.

By the time I finished this and saved a draft to send out the next day, it was pretty late at night and I was tired. As I got ready for bed, I contemplated what had motivated me along this course of action when none of the extra work I did was really required. Clearly I thought this course of action was important, but my motivations were mainly subconscious. I wanted to understand what guiding principles of leadership I might be abiding by here and hammer them out a bit. And I think I got it figured out.

  1. Justify and explain your decisions.
  2. Prove the merits of your approach–show, don’t tell.
  3. Let members of your team take risks, using you as a safety net.

Explain Yourself

As I discussed in a previous post, a leader/manager issuing orders “because I say so” is incrementally losing the respect of subordinates/team members. And that’s exactly what’s going on if you simply make decisions without getting input or explaining the reasoning after the fact–as would have been the case in my situation had I not sent the email. What you’re saying to people when you unilaterally take action isn’t really clear because it’s cryptic and terse. So they’re left to interpret for themselves. And interpret they will.

They’ll wonder if maybe you’re just a jerk or paranoid. They’ll assume that you think they’re not important enough to bother with an explanation. Perhaps they’ll surmise that you think they’re too stupid to understand the explanation. They’ll wonder if they did something wrong or if they could improve somehow.

And really, that last one is the biggest bummer of all. If you’re a team lead and you need to overturn a decision or go in a different direction for a valid reason, that’s a very teachable moment. If you sit with them and help them understand the rationale for your decision, you’re empowering them to mimic your decision-making next time around and improve. If you just undo all their work with no fanfare, they learn nothing except that fate (and the team lead) is cruel. Seriously–that’s the message. And there are few things more demoralizing to an engineer than feeling as though the feedback mechanism is capricious, unpredictable, and arbitrary.

Show, Don’t Tell

“Because I said so” costs you respect in another important way on a technical team, as well. Specifically, it threatens to undermine your techie cred. You may think you’re a guru, and you may even be one, but if you don’t occasionally offer demonstrations, the team members will start to think that you’re just a loud-mouthed armchair quarterback suited only for criticism, yelling, and second-guessing.

If you lead a team and you don’t know what you’re doing technically, you’ll obviously lose them. But if you’re leading a team with an ivory-tower, “do as I say” approach at all times, the outcome will differ only in that you’ll lose them more slowly. You should do something to demonstrate your technical chops to them. This will go a long way. Rands does an excellent job of explaining this (and I pay a subtle homage to his ideas with my post’s title). You have to build something. You have to show them that you’re the leader for a reason other than because you’ve hung around the company for a while or you have friends in high places. Developers usually crave meritocracy, and flashing some chops is very likely to make them feel better about following your lead. And you get bonus points if the flashing of said chops saves them work/teaches them something/shows them something cool.

You Are the Safety Net

I worked in a group once that was permeated by fear. It was a cultural thing born out of the fact that the loudest group members were the most paranoid and their skepticism flirted dangerously with cynicism. Nobody could be trusted with changing the code base, so code reviews were mandatory, often angry affairs that treated would-be committers as dastardly saboteurs out to commit acts of digital terrorism. The result of this Orwellian vibe was that progress and change happened inordinately slowly. Group members were much more terrified of doing the wrong thing than of missing deadlines or failing to implement features, so progress crawled.

I could also tell that there were intense feelings of stress for some people working this way. The feeling that you have to constantly avoid making mistakes is crippling. It is the province of a culture of learned helplessness–get yelled at enough for making code changes that aren’t good enough and you’ll just resort to asking the yellers to sit with you and show you exactly what to do every time. You’ll take no risks, run no experiments, and come to grips with no failures. You won’t improve–you’ll operate as a grunt and collect a paycheck.

As a team leader, this is a toxic environment if performance is important to you (and it might not be–some people just like being boss and having the luxury of not being judged on lack of performance). A stagnant, timid team isn’t going to have the autonomy necessary to handle the unexpected and to dream up new and innovative ways of doing things. In order for those things to happen, your team has to be comfortable and confident that mistakes and failed experiments will be viewed as valuable lessons learned rather than staging points for demerits, blame, and loss of political capital.

So if someone on your team checks in code that breaks the build or slips into QA with problems or what-have-you, resist at all costs the impulse to get worked up, fire off angry emails, publicly shame them, or anything else like that. If you’re angry or annoyed, take some deep breaths and settle down. Now, go fix what they did. Shield them from consequences of their good-faith actions that might make them gun-shy in the future. Oh, don’t get me wrong–you should certainly sit them down later and explain to them what they did, what the problem was, and what fixing it involved. But whatever you do, don’t do it in such a way that makes them scared of coding, changing things, and tinkering.

Will this approach mean some late nights for you? Yep. Does it suck to have to stay late cleaning up someone else’s mess? Sure it does. But you’re the lead. Good leaders work hard and make the people around them better. They offer enthusiasm and encouragement and a pleasant environment in which people can learn and grow. They don’t punch out at five like clockwork and throw anyone that threatens their departure time under the bus. Leadership isn’t a matter of entitlement–it’s a position of responsibility.

Don’t Take My Advice

A Brief History of My Teeth

When it comes to teeth, I hit the genetic jackpot. I’m pretty sure that, instead of enamel, my teeth are coated in some sort of thin layer of stone. I’ve never had a cavity and my teeth in general seem impervious to the variety of maladies that afflict other people’s teeth. So this morning, when I was at the dentist, the visit proceeded unremarkably. The hygienist scraped, prodded, and fluoridated my teeth with a quiet efficiency, leaving me free to do little but listen to the sounds of the dentist’s office.

Those sounds are about as far from interesting as sounds get: the occasional hum of the HVAC, people a few rooms over exchanging obligatory inanities about the weather, coughs, etc. But for a brief period of time, the hygienist in the next station over took to scolding her patient like a child (he wasn’t). She ominously assured him that his lack of flossing would catch up to him someday, though I quietly wondered which day that would be, since he was pushing retirement age. I empathized with the poor man because I too had been seen to by that hygienist during previous visits.

The first time I encountered her a couple of years ago, she asked me if I drank much soda, to which I replied that I obviously did since my employer stocked the fridge with it for free. I don’t drink coffee, so if I want to partake in the world’s most socially acceptable upper to combat grogginess, it’s Diet Pepsi or Mountain Dew for me. She tsked at me, told me a story of some teenager whose teeth fell out because he had a soda fountain in his basement, and then handed me a brochure with pictures of people whose mouths had apparently been ravaged by soda to such a state that I imagine the only possible remedy was a complete head transplant. My 31-year streak of complete imperviousness to cavities in spite of drinking soda was now suddenly going to end as my teeth fell inexorably from my head. That is, unless I repented my evil ways and started drinking water and occasionally tea.

I agreed to heed her advice in the way that one generally does when confronted with some sort of pushy nutjob on a mission–I pretended to humor her so that she would leave me alone. When I got home, I threw out the Pamphlet of Dental Doom and didn’t think about her for the next six months until I came back to the dentist. The first thing that she asked when I came in was whether I was still drinking a lot of soda or not. I was so dumbfounded that I didn’t have the presence of mind to lie my way out of this confrontation. I couldn’t decide whether it’d be creepier if she remembered me from six months ago and remembered that I drank a lot of soda or if she had made herself some kind of note in my file. Either way, I was in trouble. The previous time she had been looking to save the sinful, but now she was just pissed. Like the man I silently commiserated with this morning, she proceeded to spin spiteful tales of the world of pain coming my way.

Freud might have some interesting ideas about a person who likes to stick her fingers in other people’s mouths while trying to scare them, and I don’t doubt that there’s a whole host of ground for her to cover on the proverbial therapist’s couch, but I’ll speak to the low-hanging fruit. She didn’t like that I ignored her advice. She is an expert and I am an amateur. She used her expertise to make a recommendation to me, and I promptly blew it off, which is a subtle slap in the face should an expert choose to view it that way.

It Isn’t Personal

It’s pretty natural to feel slighted when you offer expert advice and the recipient ignores it. This is especially true when the advice was solicited. I recall going along once with my girlfriend to help her pick out a computer and feeling irrationally irritated when we got to the store and she saw one that she liked immediately, losing all interest in my interpretation of the finer points of processor architectures and motherboard wiring schemes. I can also recall it happening at times professionally when people solicit advice about programming, architecture, testing, tooling, etc. I’m happy–thrilled–to provide this advice. How dare anyone not take it.

But while it’s often hard to reason your way out of your feelings, it’s a matter of good discipline to reason past irrational reactions. As such, I strive not to take it personally when my advice, even when solicited, goes unheeded. It’s not personal, and I would argue it’s probably a good sign that the members of your team may be nurturing budding self-sufficiency.

Let’s consider three possible cases of what happens when a tech lead or person with more experience offers advice. In the first case, the recipient attempts to heed the advice but fails due to incompetence. In the second, the recipient takes the advice and succeeds with it. In the third and most interesting case, the recipient rejects the advice and does something else instead. Notice that I don’t subdivide this case into success and failure. I honestly don’t think it matters in the long term.

In the first case, you’re either dealing with someone temporarily out of their depth or generally incompetent, who might be considered an outlier on the lower end of the spectrum. The broad middle is populated with the second case: people who take marching orders well enough and are content to do just that. The third group also consists of outliers, but often high-achieving ones. Why do I say that? Well, because this group is seeking data points rather than instructions. They want to know how an expert would handle the situation, not so that they can copy the expert necessarily, but to get an idea. Members of this group generally want to blaze their own trails, though they may at times behave like the second group for expediency.

But this third group consists of tomorrow’s experts. It doesn’t matter if they succeed or fail in the moment because, hey, success is success, but you can be very sure they’ll learn from any failures and won’t repeat their mistakes. They’re learning lessons by fire and experimentation that the middle-of-the-roaders learn as cargo cult practice. And they’re not dismissing your advice to offend you but rather to learn from you–they earnestly want to understand and assimilate your expertise.

So when this happens to you as a senior team member/architect/lead/etc., try to fight the urge to be miffed or offended and exercise some patience. Give them some rope and see what they do–they can always be reined in later if they aren’t doing well, but it’s hard to let the rope out if you’ve extinguished their experimental and creative spirit. The last thing you want to be is some woman in a dentist’s office, getting irrationally angry that adults aren’t properly scared of the Cavity Creeps.

The Developer Incentive Snakepit

Getting Incentives Wrong

A while ago I was talking to a friend of mine, and he told me that developers in his shop get monetary bonuses for absorbing additional scope in a “waterfall” project. (I use quotes because I don’t think that “waterfall” is actually a thing — I think it’s just collective procrastination followed by iterative development.) That sounds reasonable and harmless at first. Developers get paid more money if they are able to adapt successfully to late-breaking changes without pushing back the delivery date. Fair enough… or is it?

Image credit to Renaud d’Avout d’Auerstaedt via wikimedia commons.

Let’s digress for a moment.  In colonial India, the ruling British got tired of all of the cobras hanging around and being angry and poisonous and whatnot, so they concocted a scheme to address this problem. Reasoning that more dead cobras meant fewer living cobras, they began to offer a reward/bounty for any cobra corpses brought in to them by locals. As some dead cobras started coming in, circumstances initially improved, but after a while the number of dead cobras being brought in exploded even though the number of living cobras actually seemed to increase a little as well.  How is this possible?  The British discovered that Indian locals were (cleverly) breeding cobras that they could kill and thus cash in on the reward. Incensed, the British immediately discontinued the program. The cobra breeders, who had no more use for the things, promptly discarded them, resulting in an enormous increase in the local, wild cobra population. This is an iconic example of what has come to be called “The Law of Unintended Consequences,” which is a catch-all for describing situations where an incentive-based plan has an effect other than the desired outcome, be it oblique or directly counter. The Cobra Effect is an example of the latter.

Going back to the “money for changes absorbed” incentive plan, let’s consider whether there might be the potential for “cobra breeders” to start gaming the system or, perhaps less cynically, whether subconscious games might go on. Here are some that I can think of:

  1. Change requests arise when the actual requirements differ from those defined initially, so developers have a vested interest in initial requirements being wrong or incomplete.
  2. Change requests will happen if the marketing/UX/PM/etc stakeholders don’t communicate requirements clearly to developers, so developers have a vested interest in avoiding these stakeholders.
  3. Things reported as bugs/defects/issues require changes to the software, which developers have to fix for free, but if those same items were, for some reason, re-classified as change requests, developers would make more money.
  4. The concept of a “change request” doesn’t really exist in agile development methodologies, where evolving requirements are just the normal course of business, and thus developers would lose money if a different development methodology were adopted.

So now, putting our more cynical hats on, let’s consider what kind of outcomes these rational interests will probably lead to:

  1. Developers are actively obstructionist during the “requirements phase”.
  2. Developers avoid or are outright hostile toward their stakeholders.
  3. Developers battle endlessly with QA and project management, refusing to accept any responsibility for problems and saying things like “there was nothing in the functional spec that said it shouldn’t crash when you click that button!”
  4. Developers actively resist changes that would benefit the business, favoring rigidity over flexibility.

Think of the irony of item (4) in this scenario.  The ostensible goal of this whole process is to reward the development group for being more flexible when it comes to meeting the business demands.  And yet when incentives are created with the intention of promoting that flexibility, they actually have the effect of reducing it.  The managers of this department are the colonial British and the cash for absorbed change requests is the cash for dead cobras.  By rewarding the devs for dead cobras instead of fewer cobras, you wind up with more cobras; the developers’ goal isn’t more flexibility but more satisfied change requests, and the best way to achieve this goal is to generate software that needs a lot of changes as far as the business is concerned.  It’s an example of the kind of gerrymandering and sandbagging that I alluded to in an earlier post and it makes me wonder how many apparently absurd developer behaviors might be caused by nonsensical or misguided incentive structures triggering developers to game the system.

So what would be a better approach?  I mean with these Cobra Effect situations, hindsight is going to be 20/20.  It’s easy for us to say that it’s stupid to pay people for dead cobras, and it was only after ruminating on the idea for a while that the relationship between certain difficulties of the group in question and the incentive structure occurred to me.  These ideas seem generally reasonable on their faces and people who second guess them might be accused of arm-chair quarterbacking.

Get Rid of Cobras with Better Incentives

I would say that the most important thing is to align incentives as closely as possible with goals and to avoid rewarding process adherence.  For example, with this change request absorbed incentive, what is the actual goal, and what is being rewarded?  The actual reward is easy, particularly if you consider it obtusely (and you should): developers are rewarded when actual requirements are satisfied on time in spite of not matching the original requirements.  There’s nothing in there about flexibility or business needs — just two simple criteria: bad original requirements and on-time shipping.  I think there’s value in stating the incentive simply (obtusely) because it tends to strip out our natural, mile-a-minute deductions.  Try it on the British Empire with its cobra problem:  more cobra corpses means more money.  There’s nothing that says the cobra corpses have to be local or that the number of cobras in the area has to decrease.  That’s the incentive provider being cute and clever and reasoning that one is the logical outcome of the other.

Flexibility image credit to “RDECOM” via wikimedia commons.

Going back to the bad software incentives, the important thing is to pull back and consider larger goals.  Why would a company want change requests absorbed?  Probably because absorbed change requests (while still meeting a schedule) are a symptom (not a logical outcome, mind you) of flexibility.  Aha, so flexibility is the goal, right?  Well, no, I would argue.  Flexibility is a means rather than an end.  I mean, outside of a circus contortionist exhibit, nobody gets paid simply to be flexible.  Flexibility is a symptom of another trend, and I suspect that’s where we’ll find our answer.

Why would a company want to be flexible when it comes to putting out software?  Maybe they have a requirement from the marketing department stating that they want to project a sort of hip, latest-and-greatest feel, so the GUI has to constantly be updated with whatever latest user experience/design trends are coming from Apple or whoever is currently the Giorgio Armani of application design.  Maybe they’re starting on a release now and they know their competition is coming out with something in 3 months that they’re going to want to mimic in the next release, but they don’t yet know what that thing is.  Maybe they just haven’t had any resources available to flesh out more than a few wireframes for a few screens but don’t want all progress halted, “waterfall” style, until every last detail of every last screen is hammered out in IRS Tax Code-like detail.

Aha!  Now we’re talking about actual business goals rather than slices of process that someone thinks will help the goals be achieved.  Why not reward the developers (and QA, marketing, UX, etc as well) for those business goals being met rather than for adherence to some smaller process?  Sadly, I think the answer is a command and control style of hierarchical management that often seeks to justify positions with fiefdoms of responsibility and opacity.  In other words, many organizations will have a CEO state these business goals and then hand down a mandate of “software should be flexible” to some VP, who in turn hands down a mandate of “bonuses for change requests absorbed” to the manager of the software group or some such thing.  It is vital to resist that structure as much as possible since providing an incentive structure divorced from broader goals practically ensures that the prospective recipients of the structure care nothing whatsoever for anything other than gaming the system in their own favor.  And in some cases, such as our case, this leads to open and predictable conflicts with other groups that have (possibly directly contradicting) incentive structures of their own.  As an extreme example, imagine a tech group where the QA team gets money for each bug they find and the developers lose money for the same.  A good way to ensure quality… or a good way to ensure fistfights in your halls?

Take-Away

Why keep things so simple and not fan out the deductive thinking about incentives, particularly in the software world?  Well, software people by nature are knowledge workers that are paid to use well-developed, creative problem-solving skills.  Incentives that work on assembly line workers such as “more money for more holes drilled per hour” are more likely to backfire when applied to people who make their living inventing ways to automate, enhance and optimize processes.  Software people will become quickly adept at gaming your system because it’s what you pay them to do.  If you reward them for process, they’ll automate to maximize the reward, whether or not it’s good for the business.  But if you reward them for goals met, they will apply their acute problem solving skills to devising the best process for solving the problem — quite possibly better than the one you advise them to follow.

If you’re in a position to introduce incentives, think carefully about the incentives that you introduce and how closely they mirror your actual goals.  Resist the impulse to reward people for following some process that you assume is the best one.  Resist the impulse to get clever and think “If A, then B, then C, then D, so I’ll reward people for D in order to get A.”  That’s the sort of thinking that leads to people on your team dropping cobras in your lunch bag.  Metaphorical cobras.  And sometimes real cobras.  If you work with murderers.

Because I Said So Costs You Respect

Do you remember being a kid, old enough to think somewhat deductively and logically, but not old enough really to understand how the world works? Do you remember putting together some kind of eloquent argument about why you should be able to sleep at a friend’s house or stay out later than normal, perfecting your reasoning, presentation and polish only to be rebuffed? Do you remember then having a long back and forth, asking why to everything, only to be smacked in the face with the ultimate in trump cards: “because I’m your parent and I say so?” Yeah… me too. Sucked, didn’t it?

There are few things in life more frustrating than pouring a good amount of effort into something only to have it shot down without any kind of satisfying explanation of the rationale. For children this tends to be unfortunate for self-interested reasons: “I want to do X in the immediate future and I can’t.”  But as you get older and are motivated by more complex and nuanced concerns, these rejections get more befuddling and harder to understand.  A child making a case for why he should own a Red Ryder BB gun will understand on some level the parental objection “it’s dangerous” and that the parental objection arises out of concern for his welfare. So when the “enough — I’m your parent and I say so” comes up, it has the backing of some kind of implied reasoning and concern.  An adult making a case that adopting accounting software instead of doing things by hand would be a money-saving investment will have a harder time understanding an answer of “no, we aren’t going to do that because I’m your boss and I say so.”

This is hard for adults to understand because we are sophisticated and interpersonally developed enough to understand when our goals align with those of others or of organizations. In other words, we reasonably expect an answer of “no” to the question “can I go on vacation for 8 months and get paid?” because this personal goal is completely out of whack with everyone else’s and with the organization’s. So even if the explanation or reasoning isn’t satisfying, we get it. The “because I said so” is actually unnecessary since the answer of “that makes no sense for anyone but you” is perfectly reasonable. But when goals align and it is the means, rather than the ends, that differ, we start to have more difficulty accepting when people pull rank.

I remember some time back being asked to participate in generating extra documentation for an already documentation-heavy process. The developers on the project were taking requirements documents as drawn up by analysts and creating “requirements analysis” documents, and the proposal was to add more flavors of requirements documents to this. So instead of the analysts and developers each creating their own single Word document filled with paragraph narratives of requirements, they were now being asked to collaborate on several documents each, with each subsequent version adding more detail to the document. So as a developer, I might write my first requirements document in very vague prose, to be followed by a series of additional documents, each more detailed than the last.

I objected to this proposal (and really even to the original proposal). What I wanted to do was capture the requirements as data items in a database rather than a series of documents filled with prose. And I didn’t like the idea of having several documents with each one being a "more fleshed out" version of the last document — there’s a solution for that and it’s called "version control." But when I raised these objections to the decision makers on the project, I was rebuffed. If you’re curious as to what the rationale was for favoring the approach I’ve described over the one I suggested, so am I, even to this day. You see, I never received any explanation other than a vague "well, it might not be the greatest, but we’re just going to go with it." This explanation neither explained the benefit of the proposed approach nor any downside to my approach. Instead, I was a kid again, hearing "because I’m your parent and I say so." But I wasn’t a little kid asking to stay out late — I was an adult with a different idea for how to achieve the same productivity and effectiveness goals as everyone else.
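
To make the data-capture idea a bit more concrete, here is a minimal sketch of what I mean by "requirements as data items in a database." It is purely illustrative: the table layout, field names, and the use of Python with SQLite are assumptions I am making for the example, not anything that was actually proposed on that project.

    # Hypothetical sketch: requirements as queryable data items instead of prose documents.
    # The schema and field names here are invented purely for illustration.
    import sqlite3

    conn = sqlite3.connect("requirements.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS requirement (
            id INTEGER PRIMARY KEY,
            title TEXT NOT NULL,
            detail TEXT,
            author TEXT,
            status TEXT DEFAULT 'draft',
            updated_at TEXT DEFAULT CURRENT_TIMESTAMP
        )
    """)

    # Fleshing out a requirement later is an update to an existing record,
    # not a brand-new, slightly-more-detailed document.
    conn.execute(
        "INSERT INTO requirement (title, detail, author) VALUES (?, ?, ?)",
        ("Export report to PDF", "User can export the monthly report as a PDF.", "analyst1"),
    )
    conn.commit()

    for row in conn.execute("SELECT id, title, status FROM requirement"):
        print(row)

The point isn’t the particular tooling; it’s that structured records can be queried, sorted, and revised in place, while the history of how they evolved is a job for version control (or the database itself), not for a parade of near-duplicate documents.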

In the end, I generated the documents I was required to generate. It wasn’t the end of the world, though I never did see it as anything other than a waste of time. As people collaborating toward a larger goal, it will often be the case that we have to do things we don’t like, agree with or approve of, and it might even be the case that we’re offered no explanation at all for these things. That is to be expected in adult life, but I would argue that it should be minimized because it chips away at morale and satisfaction with one’s situation.

Once, twice, or every now and then, most people will let a “just do it because I say so” roll off their back. Some people might let many more than that slip by, while others might get upset earlier in the process. But pretty much everyone is going to have a limit with this kind of thing. Pretty much everyone has a threshold of “because I say so” responses beyond which they will check out, tune out, leave, blow up, or generally lose it in some form. So I strongly recommend avoiding the tempting practice of “pulling rank” and justifying decisions with “because it’s my project” or “because I’ve decided that.” It’s sometimes nice not to have to justify your decisions — especially to someone with less experience than you and when you’re in a hurry — but the practice of defending your rationale keeps you sharp and on your toes, and it earns the respect of others. “Because I say so” is a well that you can only go to so many times before it dries up and leaves a desert of respect and loyalty. Don’t treat your coworkers like children when they want to know “why” and when they have legitimate questions — they deserve better than “because I’m in charge and I say so.”

You Aren’t God, So Don’t Talk Like Him

Queuing Up Some Rage

Imagine that you’re talking to an acquaintance, and you mention that blue is your favorite color. The response comes back:

Acquaintance: You’re completely wrong. Red is the best color.

How does the rest of this conversation go? If you’re sane…

Acquaintance: That’s wrong. Red is the best color.
You: Uh, okay…

This, in fact, is really the only response that avoids a terminally stupid and probably increasingly testy exchange. The only problem with this is that the sane approach is also perceived as something between an admission of defeat and appeasement. You not fighting back might be perceived as weakness. So, what do you do? Do you launch back with this?

Acquaintance: That’s wrong. Red is the best color.
You: No, it is you who is wrong. You’d have to be an idiot to like red!

If so, how do you think this is going to go?

Acquaintance: That’s wrong. Red is the best color.
You: No, it is you who is wrong. You’d have to be an idiot to like red!
Acquaintance: Ah, well played. I’ve changed my mind.

Yeah, I don’t think that’s how it will go either. Probably, it will turn out more like this:

Acquaintance: That’s wrong. Red is the best color.
You: No, it is you who is wrong. You’d have to be an idiot to like red!
Acquaintance: You’re the idiot, you stupid blue fanboy!
You: Well, at least my favorite color isn’t the favorite color of serial killers and Satan!
Acquaintance: Go shove an ice-pick up your nose. I hope you die!

Well, okay, maybe it will take a little longer to get to that point, perhaps with some pseudo-intellectual comparisons to Hitler and subtle ad hominems greasing the skids of escalation. If you really want to see this progression in the wild, check the comments section of any tech article about an Apple product. But the point is that it won’t end well.

Looking back, what is the actual root cause of the contention? The fact that you like blue and your acquaintance likes red? That doesn’t seem like the sort of thing that normally gets the adrenaline pumping. Is it the fact that he told you that you were wrong? I think this cuts closer to the heart of the matter, but this ultimately isn’t really the problem, either. So what is?

Presenting Opinions as Facts

The heart of the issue here, I believe, is the invention of some arbitrary but apparently universal truth. In other words, the subtext of what your acquaintance is saying is, “There is a right answer to the favorite color question, and that right answer is my answer because it’s mine.” The place where the conversation goes off the rails is the place at which one of the participants declares himself to be the ultimate Clearinghouse of color quality. So, while the “you’re wrong” part may be obnoxious, and it may even be what grinds the listener’s teeth in the moment, it’s just a symptom of the actual problem: an assumption of objective authority over a purely subjective matter.

To drive the point home, consider a conversation with a friend or family member instead of a mere acquaintance. Consider that in this scenario the “you’re wrong” would probably be good-natured and said in jest. “Dude, you’re totally wrong–everyone knows red is the best color!” That would roll off your back, I imagine. The first time, anyway. And probably the second time. And the third through 20th times. But, sooner or later, I’m pretty sure that would start to wear on you. You’d think to yourself, “Is there any matter of opinion about which I’m not ‘wrong,’ as he puts it?”

In the example of favorite color and other things friends might discuss, this seems pretty obvious. Who would seriously think that there was an actual right answer to “What’s your favorite color?” But what about the aforementioned Apple products versus, say, Microsoft or Google products? What about the broader spectrum of consumer products, including deep dish versus thin crust pizza or American vs Japanese cars? Budweiser or Miller? Maybe an import or a microbrew? What about sports teams? Designated hitter or not? Soccer or football?

And what about technologies and programming languages and frameworks? Java versus .NET? Linux versus Windows? WebForms versus ASP.NET MVC? What about finer granularity concerns? Are singletons a good idea or not? Do curly braces belong on the same line as a function definition or the next line? Layered or onion architecture? Butter side up or butter side down? (Okay, one of those might have been something from Dr. Seuss.)

It’s All in the Phrasing

With all of these things I’ve listed, particularly the ones about programming and others like them, do you find yourself lapsing into declarations of objective truth when what you’re really doing is expressing an opinion? I bet you do. I know I do, from time to time. I think it’s human nature, or at the very least it’s an easy way to try to add additional validity to your take on things. But it’s also a logical fallacy (appeal to authority, with you as the authority, or, as I’ve seen it called, confusing fact with opinion). It’s a fallacy wherein the speaker holds himself up as the arbiter of objective truth and his opinions up as facts. Whatever your religious beliefs may be, that is a role typically reserved for a deity. I’m pretty sure you’re not a deity, and I know that I’m not one, so perhaps we should all, together, make an effort to be clear about whether we’re stating facts ("two plus two is four") or expressing beliefs or opinions ("Three is the absolute maximum number of parameters I like to see for a method").

Think of how you would react to the following phrases:

  • I like giant methods.
  • I believe there’s no need to limit the number of control flow statements you use.
  • I would have used a service locator there where you used direct dependency injection.
  • I prefer to use static methods and especially static state.
  • I wish there were more coupling between these modules.
  • I am of the opinion that unit testing isn’t that important.

You’re probably thinking “I disagree with those opinions.” But your hackles likely aren’t raised. Your face isn’t flushed, and your adrenaline isn’t pumping in anticipation of an argument against someone who just indicted your opinions and your way of doing things. You aren’t on the defensive. Instead, you’re probably ready to argue the merits of your case in an attempt to come to some mutual understanding, or, barring that, to “agree to disagree.”

Now think of how you’d react to these statements.

  • Reducing the size of your methods is a waste of time.
  • Case statements are better than polymorphism.
  • If you use dependency injection, you’re just wrong.
  • Code without static methods is bad.
  • The lack of coupling between these modules was a terrible decision.
  • Unit testing is a dumb fad.

How do you feel now? Are your hackles raised a little bit, even though you know I don’t believe these things? Where the language in the first section opened the door for discussion with provocative statements, the language in this section attempts to slam that door shut, not caring if your fingers are in the way. The first section states the speaker’s opinions, where the language in the second indicts the listener’s. Anyone wanting to foster a cooperative and pleasant environment would be well served to favor things stated in the fashion of the first set of statements. It may be tempting to make your opinions seem more powerful by passing them off as facts, but it really just puts people off.

Caveats

I want to mention two things as a matter of perspective here. The first is that it would be fairly easy to point out that I write a lot of blog posts and give them titles like, “Testable Code Is Better Code,” and, “You’re Doin’ It Wrong,” to say nothing of what I might say inside the posts. And while that’s true, I would offer the rationale that pretty much everything I might post on a blog that isn’t a simple documentation of process is going to be a matter of my opinion, so the “I think” kind of goes without saying here. I can assure you that I do my best in actual discussions with people to qualify and make it clear when I’m offering opinions. (Though, as previously mentioned, I’m sure I can improve in this department, as just about anyone can.)

The second caveat is that what I’m saying is intended to apply to matters of complexity that are, by their nature, opinions. For instance, “It’s better to write unit tests” is necessarily a statement of opinion since qualifying words like “better” invite ambiguity. But if you were to study 100 projects and discover that the ones with unit tests averaged 20% fewer defects, this would simply be a matter of fact. I am not advocating downgrading facts to qualified, wishy-washy opinions. What I am advocating is that we all stop ‘upgrading’ our opinions to the level of fact.