DaedTech

Stories about Software


You Aren’t God, So Don’t Talk Like Him

Queuing Up Some Rage

Imagine that you’re talking to an acquaintance, and you mention that blue is your favorite color. The response comes back:

Acquaintance: You’re completely wrong. Red is the best color.

How does the rest of this conversation go? If you’re sane…

Acquaintance: That’s wrong. Red is the best color.
You: Uh, okay…

This, in fact, is really the only response that avoids a terminally stupid and probably increasingly testy exchange. The only problem is that the sane approach also reads as something between an admission of defeat and appeasement; not fighting back might be perceived as weakness. So, what do you do? Do you launch back with this?

Acquaintance: That’s wrong. Red is the best color.
You: No, it is you who is wrong. You’d have to be an idiot to like red!

If so, how do you think this is going to go?

Acquaintance: That’s wrong. Red is the best color.
You: No, it is you who is wrong. You’d have to be an idiot to like red!
Acquaintance: Ah, well played. I’ve changed my mind.

Yeah, I don’t think that’s how it will go either. Probably, it will turn out more like this:

Acquaintance: That’s wrong. Red is the best color.
You: No, it is you who is wrong. You’d have to be an idiot to like red!
Acquaintance: You’re the idiot, you stupid blue fanboy!
You: Well, at least my favorite color isn’t the favorite color of serial killers and Satan!
Acquaintance: Go shove an ice-pick up your nose. I hope you die!

Well, okay, maybe it will take a little longer to get to that point, perhaps with some pseudo-intellectual comparisons to Hitler and subtle ad hominems greasing the skids of escalation. If you really want to see this progression in the wild, check the comments section of any tech article about an Apple product. But the point is that it won’t end well.

Looking back, what is the actual root cause of the contention? The fact that you like blue and your acquaintance likes red? That doesn’t seem like the sort of thing that normally gets the adrenaline pumping. Is it the fact that he told you that you were wrong? I think this cuts closer to the heart of the matter, but this ultimately isn’t really the problem, either. So what is?

Presenting Opinions as Facts

The heart of the issue here, I believe, is the invention of some arbitrary but apparently universal truth. In other words, the subtext of what your acquaintance is saying is, “There is a right answer to the favorite color question, and that right answer is my answer because it’s mine.” The place where the conversation goes off the rails is the place at which one of the participants declares himself to be the ultimate Clearinghouse of color quality. So, while the “you’re wrong” part may be obnoxious, and it may even be what grinds the listener’s teeth in the moment, it’s just a symptom of the actual problem: an assumption of objective authority over a purely subjective matter.

To drive the point home, consider a conversation with a friend or family member instead of a mere acquaintance. Consider that in this scenario the “you’re wrong” would probably be good-natured and said in jest. “Dude, you’re totally wrong–everyone knows red is the best color!” That would roll off your back, I imagine. The first time, anyway. And probably the second time. And the third through 20th times. But, sooner or later, I’m pretty sure that would start to wear on you. You’d think to yourself, “Is there any matter of opinion about which I’m not ‘wrong,’ as he puts it?”

In the example of favorite color and other things friends might discuss, this seems pretty obvious. Who would seriously think that there was an actual right answer to “What’s your favorite color?” But what about the aforementioned Apple products versus, say, Microsoft or Google products? What about the broader spectrum of consumer products, including deep-dish versus thin-crust pizza or American versus Japanese cars? Budweiser or Miller? Maybe an import or a microbrew? What about sports teams? Designated hitter or not? Soccer or football?

And what about technologies and programming languages and frameworks? Java versus .NET? Linux versus Windows? Web Forms versus ASP.NET MVC? What about finer-granularity concerns? Are singletons a good idea or not? Do curly braces belong on the same line as a function definition or the next line? Layered or onion architecture? Butter side up or butter side down? (Okay, one of those might have been something from Dr. Seuss.)

It’s All in the Phrasing

With all of these things I’ve listed, particularly the ones about programming and others like them, do you find yourself lapsing into declarations of objective truth when what you’re really doing is expressing an opinion? I bet you do. I know I do, from time to time. I think it’s human nature, or at the very least it’s an easy way to try to lend additional validity to your take on things. But it’s also a logical fallacy (appeal to authority, with you as the authority, or, as I’ve seen it called, confusing fact with opinion). It’s a fallacy wherein the speaker holds himself up as the arbiter of objective truth and his opinions up as facts. Whatever your religious beliefs may be, that is a role typically reserved for a deity. I’m pretty sure you’re not a deity, and I know that I’m not one, so perhaps we should all, together, make an effort to be clear about whether we’re stating facts (“two plus two is four”) or expressing beliefs or opinions (“Three is the absolute maximum number of parameters I like to see for a method”).

Think of how you would react to the following phrases:

  • I like giant methods.
  • I believe there’s no need to limit the number of control flow statements you use.
  • I would have used a service locator there where you used direct dependency injection.
  • I prefer to use static methods and especially static state.
  • I wish there were more coupling between these modules.
  • I am of the opinion that unit testing isn’t that important.

You’re probably thinking “I disagree with those opinions.” But your hackles likely aren’t raised. Your face isn’t flushed, and your adrenaline isn’t pumping in anticipation of an argument against someone who just indicted your opinions and your way of doing things. You aren’t on the defensive. Instead, you’re probably ready to argue the merits of your case in an attempt to come to some mutual understanding, or, barring that, to “agree to disagree.”

Now think of how you’d react to these statements.

  • Reducing the size of your methods is a waste of time.
  • Case statements are better than polymorphism.
  • If you use dependency injection, you’re just wrong.
  • Code without static methods is bad.
  • The lack of coupling between these modules was a terrible decision.
  • Unit testing is a dumb fad.

How do you feel now? Are your hackles raised a little bit, even though you know I don’t believe these things? Where the language in the first section opened the door for discussion with provocative statements, the language in this section attempts to slam that door shut, not caring if your fingers are in the way. The first section states the speaker’s opinions, where the language in the second indicts the listener’s. Anyone wanting to foster a cooperative and pleasant environment would be well served to favor things stated in the fashion of the first set of statements. It may be tempting to make your opinions seem more powerful by passing them off as facts, but it really just puts people off.

Caveats

I want to mention two things as a matter of perspective here. The first is that it would be fairly easy to point out that I write a lot of blog posts and give them titles like, “Testable Code Is Better Code,” and, “You’re Doin’ It Wrong,” to say nothing of what I might say inside the posts. And while that’s true, I would offer the rationale that pretty much everything I might post on a blog that isn’t a simple documentation of process is going to be a matter of my opinion, so the “I think” kind of goes without saying here. I can assure you that I do my best in actual discussions with people to qualify and make it clear when I’m offering opinions. (Though, as previously mentioned, I’m sure I can improve in this department, as just about anyone can.)

The second caveat is that what I’m saying is intended to apply to matters of complexity that are, by their nature, opinions. For instance, “It’s better to write unit tests” is necessarily a statement of opinion, since qualifying words like “better” invite ambiguity. But if you were to study 100 projects and discover that the ones with unit tests averaged 20% fewer defects, this would simply be a matter of fact. I am not advocating downgrading facts to qualified, wishy-washy opinions. What I am advocating is that we all stop ‘upgrading’ our opinions to the level of fact.


Favor Outcomes Over Rules

Cargo Cults and Angry Monkeys

If you have never heard the term “Cargo Cult Programming,” it’s an excellent way to describe blindly following processes without understanding the theory behind them. The term originates from a story about aboriginal islanders during World War II. During the war, cargo planes regularly (and, from their perspective, magically) arrived with supplies and food that benefited these islanders, and then, after the war, the planes stopped coming. This apparently didn’t stop the islanders from building ad-hoc landing strips and the like in hopes of ‘summoning’ more planes with supplies. If that story isn’t to your liking, there is another one of questionable accuracy about monkeys and bananas.

I’ve made a fair number of posts lately that address subjects near and dear to programmers while still having broader reach, and this is certainly another such subject. Doing things without understanding why is generally the province of the incurious or the busy, but I think it’s worth forcing yourself to ask why you’re doing pretty much anything, particularly if you’re in a fairly cerebral line of work. As professionals, it behooves us to realize that we want supplies, or that we want not to get sprayed with cold water, rather than thinking that building landing strips and beating up monkeys are just things that we do and such is life.

In fact, I’d venture to say that we pride ourselves on this sort of thing as we advance in our careers. As programmers, we even help our users understand this concept. “I understand that you normally push the green button three times and hit the backspace key while pressing the mute button, but what are you actually trying to do when you do all that?” How often have you asked a user something like this? In essence, you’re saying, “forget the process for a minute — what’s the larger goal here?”

Do Unto Others

It then bears asking, if we pride ourselves on critical thinking and we seek to help users toward that end, why do we seem to encourage each other to dance for supplies and spray our team members with cold water? Why do we issue cryptic cargo cult orders to other programmers when we understand our own rationale? Let me give an example.

A few years back I was setting up a new machine for work on a project to which I was new, and the person guiding me through the setup told me that I needed to install KDiff, a spectacularly average comparison tool that even then hadn’t been revved or maintained in a few years. Now, I have nothing, per se, against KDiff, and the price is right, but this struck me as a very… specific… order. What difference did it make what I used for diff? I had used a paid version of Beyond Compare prior to that and always liked it, but when it comes to tooling I do enjoy poking around and so I didn’t mind trying a different tool.

I’m pickier about some things than others, so I just said “Bob’s Your Uncle,” installed KDiff, and didn’t think about it again for a long time. It was actually many months later, while sitting in on a code review for a developer who had just started on the project, that I found out the reasoning behind this cargo cult practice. The developer whose code was being reviewed casually opened Beyond Compare, and one of the reviewers, more tenured than I, promptly freaked out that he wasn’t using KDiff. Bemused, I tuned in to hear the answer to a question that I’d completely forgotten about. The issue was that the project had a handful of source-controlled XML files containing application metadata, and some unnamed developer had, at some point, done something, presumably with some sort of diff tool, that had allegedly hosed one of these files by screwing up the text encoding. Nobody in the room, including the freaker-outer, knew who this was or exactly how or when it had happened, but nonetheless it had been written in stone that, from that day forward, Thou Shalt Use KDiff if you want to avoid punishment in the form of Biblical plagues.

This certainly wasn’t my only experience with such a thing. More recently, I overheard that the procedure for editing a particular file in source control was to grab the latest, edit it really fast and then check it in immediately. I was a bit perplexed by this until I learned that the goal was to avoid merge conflicts as this was a commonly edited file. I thought, why not just say in the first place that a lot of people edit this file and that merge conflicts are bad, in the same way that I had previously thought “why not just tell people that you’re worried about messed up encodings?”

And that line of questioning really drives to the heart of the matter here. Developers are generally quite skilled and pretty intelligent people and, more importantly, they tend to be people who like to solve problems. So why not give them the benefit of the doubt when you’re working with them? Instead of giving your peers rote procedures to follow because they happened to work for you once, why not explain the problem and say, “this is how I solved it, but if you have a better idea, I’m all ears!”

And you know what? They might. What if instead of forcing people to use KDiff, someone had written a validation script for the offending file that prevented checkins of a badly formed edit? What if instead of having flash edit-checkins of a file, the design of the application were altered to eliminate the contention and potential for error around that file? I suggest these things because I’m a fan of making the bad impossible. Are they the greatest ideas? Are they even practical in those situations? I honestly don’t know, but they might be approaches that have the advantage of placing fewer restrictions on developers or meaning one less thing to remember.
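To make the first of those ideas concrete, here is a minimal sketch of what such a guard might look like as a Git pre-commit hook. The file path and expected encoding are made-up placeholders, and the project in question may not even have used Git; treat this as an illustration of “making the bad impossible,” not a description of what anyone actually built.

```python
#!/usr/bin/env python
# Hypothetical pre-commit hook: refuse commits that leave a frequently
# edited metadata file with a mangled text encoding. The path and the
# expected encoding below are illustrative placeholders.
import subprocess
import sys

GUARDED_FILE = "config/app-metadata.xml"   # assumed path
EXPECTED_ENCODING = "utf-8"                # assumed encoding

def staged_files():
    out = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only"], text=True)
    return out.splitlines()

def main():
    if GUARDED_FILE not in staged_files():
        return 0
    # Read the staged (index) version of the file, not the working copy.
    staged_bytes = subprocess.check_output(["git", "show", f":{GUARDED_FILE}"])
    try:
        staged_bytes.decode(EXPECTED_ENCODING)
    except UnicodeDecodeError as err:
        print(f"Refusing commit: {GUARDED_FILE} is not valid "
              f"{EXPECTED_ENCODING}: {err}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Dropped into .git/hooks/pre-commit and made executable, something like this would complain at commit time no matter which diff or merge tool anyone used, which is the outcome the KDiff rule was really after.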

I would encourage you to trust your peers to grasp the bigger picture whenever possible (unless perhaps they’ve demonstrated that this trust isn’t warranted). As I say in the title, it seems like a good idea to favor describing desired outcomes over inventing rules to be memorized. With the former approach you free people you work with to be creative. With the latter approach, you constrain them and possibly even stress them out or frustrate them. And who knows, they may even surprise you with ideas and solutions that make your life easier.


How Software Groups Rot: Legacy of the Expert Beginner

Expert Beginner Recap

In my last post I introduced the term “Expert Beginner” to describe someone who has capped out in their learning at some sort of local maximum, convinced that the local is global. Expert Beginners are developers who do not understand enough of the big picture to understand that they aren’t actually experts. What I mean by this is that they have a narrow enough perspective to think that whatever they have been exposed to is the best and only way to do things; examples would include a C# developer who pooh-poohs Java without ever having tried it, or a MySQL DBA who dismisses the NoSQL movement as a passing fad. This isn’t to say that not liking a technology or not having worked with one makes someone an Expert Beginner, but rather the vaguely solipsistic mindset that “if it’s not in my toolchest or something I’ve had experience with, it’s not worth doing.”

Another characteristic of the Expert Beginner, however, is that they have some position of relative authority or influence within a software group. In my last post, I proposed the term Expert Beginner without explaining the rationale, planning to discuss that here. The Advanced Beginner is someone who is in the advanced stage of being a beginner, whereas the Expert Beginner descriptor is both literal and intentionally ironic; it’s literal in that someone with that much practice at being a beginner could be said to be an expert and ironic in that the “expert” title is generally self-applied in earnest or else applied by managers or peers who don’t know any better.

The iconic example might be the “tech guy” at a small, non-technical firm. He “knows computers” so as the company grew and evolved to have some mild IT needs, he became a programming dilettante out of necessity. In going from being a power user to being a developer, his skills exceeded his expectations, so he became confident in his limited, untrained abilities. Absent any other peers in the field, the only people there to evaluate his skills are himself and non-technical users who offer such lofty praises as “it seems to work, kind of, I think.” He is the one-eyed man in the valley of the blind and, in a very real and unfortunate sense, a local expert. This is the iconic example because it has the fewest barriers to Expert Beginnerism–success is simple, standards are low, actual experts are absent, competition is non-existent, and external interaction is not a given.

A Single Point of Rot…

So far I’ve talked a lot about the Expert Beginner–his emergence, his makeup, his mindset, and a relatively sympathetic explanation of the pseudo-rationality, or at least understandability, of his outlook. But how does this translate into support of my original thesis that Expert Beginners cause professional toxicity and degeneration of software groups? To explain that, I’m going to return to my bowling analogy, and please bear with me if the metaphor is a touch strained. If you didn’t read the first post, you might want to give its second section a read, because it’s too much to rehash here.

Let’s say that bowling alleys make their money by how well their bowlers bowl and that I live in a small town with a little, startup bowling alley. Not being able to find any work as a software developer, I try my hand at bowling at the local alley. I don’t really know what I’m doing, and neither do they, but we both see me improving rapidly as I start bowling there, in spite of my goofy style. My average goes up, the bowling alley makes money, and life is good–there’s no arguing with profit and success!

Around about the time my score is topping 150 and the sky seems the limit, the bowling alley decides that it’s time to expand and to hire a couple of entry-level bowlers to work under my tutelage. On the day they arrive, I show them how to hold the ball just like I hold it and how to walk just like I do. When they ask what the thumb and finger holes are for, I respond by saying, “don’t worry about those–we don’t use them here.” Eager to please, they listen to me and see their averages increase the way mine did, even as I start to top out at around a 160 average.

As time goes by, most of them are content to do things my way. But a couple are ambitious and start to practice during their spare time. They read books and watch shows on bowling technique. One day, these ambitious bowlers come in and say, “the guys on TV put their fingers in the ball, and they get some really high averages–over 200!” They expect me to be as interested as they are in the prospect of improvement and are crestfallen when I respond with, “No, that’s just not how we do things here. I’ve been bowling for longer than you’ve been alive and I know what I’m doing… besides, you can’t believe everything you see on TV.”

And thus, quickly and decisively, I squash innovation for the group by reminding them that I’m in charge by virtue of having been at the bowling alley for longer than they have. This is a broadly accepted yet totally inadequate non sequitur that stops discussion without satisfying. At this point, half of the ambitious developers abandon their “fingers in the ball” approach while the other half meet after bowling at another alley and practice it together in semi-secret. After a while, their averages reach and surpass mine, and they assume that this development–this objective demonstration of the superiority of their approach–will result in a change in the way things are done. When it instead results in anger and lectures and claims that the scores were a fluke and I, too, once bowled a 205 that one time, they evaporate and leave the residue behind. They leave my dead-end, backward bowling alley for a place where people don’t favor demonstrably inferior approaches out of stubbornness.

The bowling alley loses its highest average bowlers not to another alley, but to an Expert Beginner.

…That Poisons the Whole

The bowlers who don’t leave learn two interesting lessons from this. The first lesson they learn is that if they wait their turn, they can wield unquestioned authority regardless of merit. The second lesson they learn is that it’s okay and even preferred to be mediocre at this alley. So when new bowlers are hired, in the interests of toeing the company line and waiting their turn, they participate in inculcating bad practices in the newbies the same way it was done to them. The Expert Beginner has, through his actions and example, created additional Expert Beginners and, in fact, created a culture of Expert Beginnerism.

The other interesting development that results comes in the acquisition process. As the Expert-Beginner-in-Chief, I’ve learned a pointed lesson. Since I don’t like being shown up by ambitious young upstarts, I begin to alter my recruitment process to look for mediocre “team players” that won’t threaten my position with their pie-in-the-sky “fingers in the ball” ideas. Now, I know what you’re thinking–doesn’t this level of awareness belie the premise of the Expert Beginner being unaware of the big picture? The answer is no. This hiring decision is more subconscious and rationalized than overt. It isn’t, “I won’t hire people that are better than me,” but, “those people just aren’t a good fit here with my ‘outside the box’ and ‘expert’ way of doing things.” And it may even be that I’m so ensconced in Expert Beginnerism that I confuse Competent/Expert level work with incompetent work because I don’t know any better. (The bowling analogy breaks down a bit here, but it might be on par with a “bowling interview” where I just watched the form of the interviewee’s throw and not the results, and concluded that the form of a 220 bowler was bad because it was different than my own.) And, in doing all this, I’m reinforcing the culture for everyone including my new Expert Beginner lieutenants.

And now the bowling alley is losing all of its potentially high average bowlers to a cabal of Expert Beginners. Also notice that Bruce Webster’s “Dead Sea Effect” is fully mature and realized at this point.

Back in the Real World

That’s all well and good for bowling and bowling alleys, but how is this comparable to real software development practices? Well, it’s relatively simple. Perhaps it’s a lack of automated testing. Giant methods/classes. Lots of copy and paste coding. Use of outdated or poor tooling. Process. It can be any number of things, but the common thread is that you have a person or people in positions of authority that have the culturally lethal combination of not knowing much; not knowing what they don’t know; and assuming that, due to their own expertise, anything they don’t know isn’t worth knowing. This is a toxic professional culture in that it will force talented or ambitious people either to leave or to conform to mediocrity.

You may think that this is largely a function of individual personalities, that departments become this way by having arrogant or pushy incompetents in charge, but I think it’s more subtle than that. These Expert Beginners may not have such personality defects at all. I think it’s a natural conclusion of insular environments, low expectations, and ongoing rewards for mediocre and/or unquantifiable performances. And think about the nature of our industry. How many outfits have you worked at where there is some sort of release party, even (or especially) when the release is over budget, buggy and behind schedule? How many outfits have you worked at that gave up on maintaining some unruly beast of an application in favor of a complete rewrite, only to repeat that cycle later? And the people involved in this receive accolades and promotions, which would be like promoting rocket makers for making rockets that looked functional but simply stopped and fell back to Earth after a few hundred feet. “Well, that didn’t work, Jones, but you learned a lot from it, so we’re promoting you to Principal Rocket Builder and having you lead version two, you rock star, you!” Is it any wonder that Jones starts to think of himself as King Midas?

As an industry, we get away with this because people have a substantially lower expectation of software than they do of rockets. I’m not saying this to complain or to suggest sweeping change but rather to explain how it’s easy for us to think that we’re further along in our skills acquisition than we actually are, based on both our own perception and external feedback.

Create a Culture of Acquisition instead of Stagnation

Having identified a group-(de)forming attitude that could most effectively be described as a form of hubris, I would like to propose some relatively simple steps to limit or prevent this sort of blight.

First of all, to prevent yourself from falling into the Expert Beginner trap, the most important thing to do is not to believe your own hype. Take pride in your accomplishments as appropriate, but never consider your education complete or your opinion above questioning, regardless of your title, your years of experience, your awards and accomplishments, or anything else that isn’t rational argumentation or evidence. Retaining a healthy degree of humility, constantly striving for improvement, and valuing objective metrics above subjective considerations will go a long way toward keeping you from becoming an Expert Beginner.

In terms of preventing this phenomenon from corrupting a software group, here is a list of things that can help:

  1. Give team members as much creative freedom as possible to let them showcase their approaches (and remember that you learn more from failures than successes).
  2. Provide incentives or rewards for learning a new language, approach, framework, pattern, style, etc.
  3. Avoid ever using number of years in the field or with the company as a justification for favoring or accepting anyone’s argument as superior.
  4. Put policies in place that force external perspectives into the company (lunch-and-learns, monthly training, independent audits, etc).
  5. Whenever possible, resolve disputes/disagreements with objective measures rather than subjective considerations like seniority or democratic vote.
  6. Create a “culture of proof”–opinions don’t matter unless they’re supported with independent accounts, statistics, facts, etc.
  7. Do a periodic poll of employees, junior and senior, and ask them to list a few of their strengths and an equal number of things they know nothing about or would like to know more about. This is to deflate ahead of time an air of “know-it-all-ism” around anyone–especially tenured team members.

This list is aimed more at managers and leaders of teams, but it’s also possible to effect these changes as a simple team member. The only difference is that you may have to solicit help from management or persuade rather than enforce. Lead by example if possible. If none of that is possible, and it seems like a lost cause, I’d say head off for greener pastures. In general, if you want to avoid Expert Beginner-fueled group rot, it’s important to create or to have a culture in which “I don’t know” is an acceptable answer, even for the most senior, longest-tenured leader in the group. After all, an earnest “I don’t know” is something Expert Beginners never say, and it is the fundamental difference between a person who is acquiring skill and a person who has decided that they already know enough. If your group isn’t improving, it’s rotting.

Series is continued here: “How Stagnation is Justified: Language of the Expert Beginner”



How Developers Stop Learning: Rise of the Expert Beginner

Beyond the Dead Sea: When Good Software Groups Go Bad

I recently posted what turned out to be a pretty popular post called “How to Keep Your Best Programmers,” in which I described what most skilled programmers tend to want in a job and why they leave if they don’t get it.

Today, I’d like to shift the focus to the software group at an organization rather than the individual journeys of developers as they move within or among organizations. This post became long enough as I was writing it that I felt I had to break it into at least two pieces. This is part one.

In the previous post I mentioned, I linked to Bruce Webster’s “Dead Sea Effect” post, which describes a trend whereby the most talented developers tend to be the most marketable and thus the ones most likely to leave for greener pastures when things go a little sour. On the other hand, the least talented developers are more likely to stay put since they’ll have a hard time convincing other companies to hire them.

This serves as important perspective for understanding why it’s common to find people with titles like “super-duper-senior-principal-fellow-architect-awesome-dude,” who make a lot of money and perhaps even wield a lot of authority but aren’t very good at what they do. But that perspective still focuses on the individual. It explains the group only if one assumes that a bad group is the result of a number of these individuals happening to work in the same place (or possibly that conditions are so bad that they drive everyone except these people away).


I believe that there is a unique group dynamic that forms and causes the rot of software groups in a way that can’t be explained by bad external decisions causing the talented developers to evaporate. Make no mistake–I believe that Bruce’s Dead Sea Effect is both the catalyst for and the logical outcome of this dynamic, but I believe that some magic has to happen within the group to transmute external stupidities into internal and pervasive software group incompetence.

In the next post in this series, I’m going to describe the mechanism by which some software groups trend toward dysfunction and professional toxicity. In this post, I’m going to set the stage by describing how individuals opt into permanent mediocrity and reap rewards for doing so.

Learning to Bowl

Before I get to any of that, I’d like to treat you to the history of my bowling game. Yes, I’m serious.

I am a fairly athletic person. Growing up, I was always picked in at least the top third or so for any sport or game that was being played, no matter what it was. I was a jack of all trades and master of none. This inspired in me a sort of mildly inappropriate feeling of entitlement to skill without a lot of effort, and so it went when I became a bowler.

Most people who bowl put a thumb and two fingers in the ball and carefully cultivate tossing the bowling ball in a pattern that causes the ball to start wide and hook into the middle. With no patience for learning that, I discovered I could do a pretty good job faking it by putting no fingers and thumbs in the ball and kind of twisting my elbow and chucking the ball down the lane.

It wasn’t pretty, but it worked.

It actually worked pretty well the more I bowled, and, when I started to play in an after work league for fun, my average really started to shoot up. I wasn’t the best in the league by any stretch–there were several bowlers, including a former manager of mine, who averaged between 170 and 200, but I rocketed up past 130, 140, and all the way into the 160 range within a few months of playing in the league.

Not too shabby.

But then a strange thing happened. I stopped improving. Right at about 160, I topped out.

I asked my old manager what I could do to get back on track with improvement, and he said something very interesting to me. Paraphrased, he said something like this:

There’s nothing you can do to improve as long as you keep bowling like that. You’ve maxed out. If you want to get better, you’re going to have to learn to bowl properly.

You need a different ball, a different style of throwing it, and you need to put your fingers in it like a big boy. And the worst part is that you’re going to get way worse before you get better, and it will be a good bit of time before you get back to and surpass your current average.

I resisted this for a while but got bored with my lack of improvement and stagnation (a personal trait of mine–I absolutely need to be working toward mastery or I go insane) and resigned myself to the harder course. I bought a bowling ball, had it custom drilled, and started bowling properly.

Ironically, I left that job almost immediately after doing that and have bowled probably eight times in the years since, but c’est la vie, I suppose. When I do go, I never have to rent bowling shoes or sift through the alley balls for ones that fit my fingers.

Dreyfus, Rapid Returns and Arrested Development

In 1980, a couple of brothers with the last name Dreyfus proposed a model of skill acquisition that has gone on to have a fair bit of influence on discussions about learning, process, and practice. They later published a book based on this paper and, in that book, refined the model a bit to its current form, as shown on Wikipedia.

The model lists five phases of skill acquisition:

  1. Novice
  2. Advanced Beginner
  3. Competent
  4. Proficient
  5. Expert

There’s obviously a lot to it, since it takes an entire book to describe it, but the gist of it is that skill acquirers move from “dogmatic following of rules and lack of big picture” to “intuitive transcending of rules and complete understanding of big picture.”

All things being equal, one might assume that there is some sort of natural, linear advancement through these phases, like earning belts in karate or money in the corporate world. But in reality, it doesn’t shake out that way, due to both perception and attitude.

At the moment one starts acquiring a skill, one is completely incompetent, which triggers an initial period of frustration and being stymied while waiting for someone, like an instructor, to spoon-feed process steps to the acquirer (or else, as Dreyfus and Dreyfus put it, they “like a baby, pick it up by imitation and floundering”). After a relatively short phase of being a complete initiate, however, one reaches a point where the skill acquisition becomes possible as a solo activity via practice, and the renewed and invigorated acquirer begins to improve quite rapidly as he or she picks “low hanging fruit.”

Once all that fruit is picked, however, the unsustainably rapid pace of improvement levels off somewhat, and further proficiency becomes relatively difficult from there forward. I’ve created a graph depicting this (which actually took me an embarrassingly long time because I messed around with plotting a variant of the logistic 1/(1 + e^-x) function instead of drawing a line in Paint like a normal human being).
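If you’re curious, a throwaway sketch of that sort of logistic-style curve might look like the following; the shift, scaling, and labels are purely illustrative and not the actual figure from the post.

```python
# Rough sketch of the "rapid gains, then leveling off" skill curve using
# a shifted logistic function. All numbers here are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 200)          # practice time (arbitrary units)
skill = 1 / (1 + np.exp(-(t - 3)))   # slow start, fast middle, plateau

plt.plot(t, skill)
plt.xlabel("Practice time")
plt.ylabel("Relative skill")
plt.title("Low-hanging fruit, then a plateau")
plt.show()
```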

This is actually the exact path that my bowling game followed in my path from bowling incompetence to some degree of bowling competence. I rapidly improved to the point of competence and then completely leveled off. In my case, improvement hit a local maximum and then stopped altogether, as I was too busy to continue on my path as-is or to follow through with my retooling.

This is an example of what, for the purposes of this post, I will call “arrested development.” (I understand the overlap with a loaded psychology term, but forget that definition for our purposes here, please.) In the sense of skills acquisition, one generally realizes arrested development and remains at a static skill level for one of two reasons: maxing out on aptitude or some kind of willingness to cease meaningful improvement.

For the remainder of this post and this series, let’s discard the first possibility (since most professional programmers wouldn’t max out at or before bare-minimum competence) and consider an interesting, specific instance of the second: voluntarily ceasing to improve because of a belief that expert status has been reached and thus further improvement is not possible.

This opting into indefinite mediocrity is the entry into an oblique phase in skills acquisition that I will call “Expert Beginner.”

The Expert Beginner

(Figure: The Road to Expert... and Expert Beginner)

When you consider the Dreyfus model, you’ll notice that there is a trend over time from being heavily rules-oriented and having no understanding of the big picture to being extremely intuitive and fully grasping the big picture. The Advanced Beginner stage is the last one in which the skill acquirer has no understanding of the big picture.

As such, it’s the last phase in which the acquirer might confuse himself with an Expert. A Competent has too much of a handle on the big picture to confuse himself with an Expert: he knows what he doesn’t know. This isn’t true during the Advanced Beginner phase, since Advanced Beginners are on the “unskilled” end of the Dunning-Kruger effect and tend to epitomize the notion that, “if I don’t understand it, it must be easy.”

As such, Advanced Beginners can break one of two ways: they can move to Competent and start to grasp the big picture and their place in it, or they can ‘graduate’ to Expert Beginner by assuming that they’ve graduated to Expert.

This actually isn’t as immediately ridiculous as it sounds. Let’s go back to my erstwhile bowling career and consider what might have happened had I been the only or best bowler in the alley. I would have started out doing poorly and then quickly picked the low hanging fruit of skill acquisition to rapidly advance.

Dunning-Kruger notwithstanding, I might have rationally concluded that I had a pretty good aptitude for bowling as my skill level grew quickly. And I might also have concluded somewhat rationally (if rather arrogantly) that my leveling off indicated that I had reached the pinnacle of bowling skill. After all, I don’t see anyone around me who’s better than me, and there must be some point of mastery, so I guess I’m there.

The real shame of this is that a couple of inferences that aren’t entirely irrational lead me to a false feeling of achievement and then spur me on to opt out of further improvement. I go from my optimistic self-assessment to a logical fallacy as my bowling career continues: “I know that I’m doing it right because, as an expert, I’m pretty much doing everything right by definition.” (For you logical fallacy buffs, this is circular reasoning/begging the question).

Looking at the graphic above, you’ll notice that it depicts a state machine of the Dreyfus model as you would expect it. At each stage, one might either progress to the next one or remain in the current one (with the exception of Novice and Advanced Beginner, who I feel can’t really remain at those phases without abandoning the activity). The difference is that I’ve added the Expert Beginner to the chart as well.

The Expert Beginner has nowhere to go because progression requires an understanding that he has a lot of work to do, and that is not a readily available conclusion. You’ll notice that the Expert Beginner is positioned slightly above Advanced Beginner but not on the level of Competence. This is because he is not competent enough to grasp the big picture and recognize the irony of his situation, but he is slightly more competent than the Advanced Beginner due mainly to, well, extensive practice being a Beginner.
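If it helps to see that modified chart as data instead of a picture, here is a minimal sketch of the transitions just described. The stage names come from the Dreyfus model; the structure itself is only an illustration of the idea, not a formal model.

```python
# Sketch of the modified Dreyfus "state machine" described above. Each
# stage maps to the stages reachable from it; staying put is a
# self-transition (Novice and Advanced Beginner get none, since lingering
# there usually means abandoning the activity). Expert Beginner is a
# dead end: its only transition is back to itself.
TRANSITIONS = {
    "Novice":            ["Advanced Beginner"],
    "Advanced Beginner": ["Competent", "Expert Beginner"],
    "Competent":         ["Competent", "Proficient"],
    "Proficient":        ["Proficient", "Expert"],
    "Expert":            ["Expert"],
    "Expert Beginner":   ["Expert Beginner"],  # nowhere left to go
}

def can_reach_expert(stage, seen=None):
    """Return True if 'Expert' is reachable from the given stage."""
    seen = seen or set()
    if stage == "Expert":
        return True
    if stage in seen:
        return False
    seen.add(stage)
    return any(can_reach_expert(nxt, seen) for nxt in TRANSITIONS[stage])

print(can_reach_expert("Advanced Beginner"))  # True
print(can_reach_expert("Expert Beginner"))    # False
```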

If you’ve ever heard the aphorism about “ten years of experience or the same year of experience ten times,” the Expert Beginner is the epitome of the latter. The Expert Beginner has perfected the craft of bowling a 160 out of 300 possible points by doing exactly the same thing week in and week out with no significant deviations from routine or desire to experiment. This is because he believes that 160 is the best possible score by virtue of the fact that he scored it.

Expert Beginners in Software

Software is, unsurprisingly, not like bowling. In bowling, feedback cycles are on the order of minutes, whereas in software, feedback cycles tend to be on the order of months, if not years. And what I’m talking about with software is not the feedback cycle of compile or run or unit tests, which is minutes or seconds, but rather the project.

It’s during the full lifetime of a project that a developer gains experience writing code, source controlling it, modifying it, testing it, and living with previous design and architecture decisions during maintenance phases. With everything I’ve just described, a developer is lucky to have a first try of less than six months, which means that, after five years in the industry, maybe they have ten cracks at application development. (This is on average–some will be stuck on a single one this whole time while others will have dozens.)

What this means is that the rapid acquisition phase of a software developer–Advanced Beginnerism–will last for years rather than weeks. And during these years, the software developers are job-hopping and earning promotions, especially these days. As they breeze through rapid acquisition, so too do they breeze through titles like Software Engineer I and II and then maybe “Associate” and “Senior,” and perhaps eventually on up to “Lead” and “Architect” and “Principal.”

So while in the throes of Dunning-Kruger and Advanced Beginnerism, they’re being given expert-sounding titles and told that they’re “rock stars” and “ninjas” and whatever by recruiters–especially in today’s economy. The only thing stopping them from taking the natural step into the Expert Beginner stage is a combination of peer review and interaction with the development community at large.

But what happens when the Advanced Beginner doesn’t care enough to interact with the broader community and for whatever reason doesn’t have much interaction with peers? The Daily WTF is filled with such examples.

They fail even while convinced that the failure is everyone else’s fault, and the nature of the game is such that blaming others is easy and handy to relieve any cognitive dissonance. They come to the conclusion that they’ve quickly reached Expert status and there’s nowhere left to go. They’ve officially become Expert Beginners, and they’re ready to entrench themselves into some niche in an organization and collect a huge paycheck because no one around them, including them, realizes that they can do a lot better.

Until Next Time

And so we have chronicled the rise of the Expert Beginner: where they come from and why they stop progressing. In the next post in this series, I will explore the mechanics by which one or more Expert Beginners create a degenerative situation in which they actively cause festering and rot in the dynamics of groups that have talented members or could otherwise be healthy.

Next up: How Software Groups Rot: Legacy of the Expert Beginner



There is No Such Thing as Waterfall

Don’t try to follow the waterfall model. That’s impossible. Instead, only try to realize the truth: there is no waterfall.

Waterfall In Practice Today

I often see people discuss, argue, and debate the best approach or type of approach to software development. Most commonly, this involves a discussion of the merits of iterative and/or agile development versus the “more traditional waterfall approach.” What you don’t see nearly as commonly, but do see every now and then, is how the whole “waterfall” approach is based on a pretty fundamental misunderstanding, wherein the man (Royce) who coined the term and created the iconic diagram of the model was holding it up as a straw man to say (paraphrased), “this is how to fail at writing software — what you should do instead is iterate.” Any number of agile proponents may point out things like this, and it isn’t too hard to make the case that the waterfall development methodology is flawed and can be problematic. But I want to make the case that it doesn’t even actually exist.

I saw a fantastic tweet earlier from Energized Work that said “Waterfall is typically 6 months of ‘fun up front’, followed by ‘steady as she goes’ eventually ramping up to ‘ramming speed’.” This is perfect because it indicates the fundamental boom and bust, masochistic cycle of this approach. There is a “requirements phase” and a “design phase”, which both amount basically to “do nothing for a while.” This is actually pretty relaxing (although frustrating for ambitious and conscientious developers, as this is usually unstructured-unstructured time). After a few weeks or months or whatever of thumb-twiddling, development starts, and life is normal for a few weeks or months while the “chuck it over the wall” deadline is too far off for there to be any sense of how late or bad the software will turn out to be. Eventually, though, everyone starts to figure out that the answers to those questions are “very” and “very”, respectively, and the project kicks into a death march state and “rams” through the deadline over budget, under-featured, and behind schedule, eventually wheezing to some completion point that miraculously staves off lawyers and lawsuits for the time being.

This is so psychically exhausting to the team that the only possible option is three months of doing nothing, er, excuse me, a requirements and design phase for the next project, to rest. After working 60-hour weeks and weekends for a few weeks or months, the developers on the team look forward to these “phases” where they come in at 10 AM, leave at 4 PM, and sit around writing “shall” a lot, drawing on whiteboards, and googling to see if UML has reached code-generation Shangri-La while they were imprisoned in their cubicles for the last few months. Only after this semi-vacation are they ready to start the whole chilling saga again (at least the ones that haven’t moved on to greener pastures).

Diving into the Waterfall

So, what actually happens during these phases, in a more detailed sense, and what right have I to be so dismissive of requirements and design phases as non-work? Well, I have experienced what I’m describing firsthand on any number of occasions and found that most of my time is spent waiting and trying to invent useful things to do (if not supporting the previous release), but I realize that anecdotal evidence is not universally compelling. What I do consider compelling is that after these weeks or months of “work” you have exactly nothing that will ever be delivered to your end users. Oh, you’ve spent several months planning, but when was the last time planning something took anywhere near as much work as actually doing it? When you were a high school or college kid and given class time to make “idea webs” and “outlines” for essays, how often was that time spent diligently working, and how often was it spent planning what to do next weekend? It wasn’t until actual essay-writing time that you stopped screwing around. And, while we like to think that we’ve grown up a lot, there is a very natural tendency to regress developmentally when confronted with weeks of time after which no real deliverable is expected. For more on this, see that ambitious side project you’ve really been meaning to get back into.

But the interesting part of this isn’t that people will tend to relax instead of “plan for months” but what happens when development actually starts. Development starts when the team “exits” the “design phase” on that magical day when the system is declared “fully designed” and coding can begin. In a way, it’s like Christmas. And the way it’s like Christmas is that the effect is completely ruined in the first minute that the children tear into the presents and start making a mess. The beautiful design and requirements become obsolete the second the first developer’s first finger touches the first key to add the first character to the first line of code. It’s inevitable.

During the “coding phase”, the developers constantly go to the architect/project manager/lead and say “what about when X happens — there’s nothing in here about that.” They are then given an answer and, if anyone has time, the various SDLC documents are dutifully updated accordingly. So, developers write code and expose issues, at which time requirements and design are revisited. These small cycles, iterations, if you will, continue throughout the development phase, and on into the testing phase, at which time they become more expensive, but still happen routinely. Now, those are just small things that were omitted in spite of months of designing under the assumption of prescience — for the big ones, something called a “change request” is required. This is the same thing, but with more emails and word documents and anger because it’s a bigger alteration and thus iteration. But, in either case, things happen, requirements change, design is revisited, code is altered.

Whoa. Let’s think about that. Once the coding starts, the requirements and design artifacts are routinely changed and updated, the code changes to reflect that, and then incremental testing is (hopefully) done. That doesn’t sound like a “waterfall” at all. That sounds pretty iterative. The only thing missing is involving the stakeholders. So, when you get right down to it, “Waterfall” is just dysfunctional iterative development where the first two months (or whatever) are spent screwing around before getting to work, where iterations are undertaken and approved internally without stakeholder feedback, and where delivery is generally late and over budget, probably by an amount in the neighborhood of the amount of time spent screwing around in the beginning.

The Take-Away

My point here isn’t to try to persuade anyone to alter their software development approach, but rather to try to clarify the discussion somewhat. What we call “waterfall” is really just a particularly awkward and inefficient iterative approach (the closest thing to an exception I can think of is big government projects where it actually is possible to freeze requirements for months or years, but then again, these fail at an absolutely incredible rate and will still be subject to the “oh yeah, we never considered that situation” changes, if not the external change requests). So there isn’t an “iterative approach” and a “waterfall approach,” but rather an “iterative approach” and an “iterative approach where you procrastinate and scramble at the end.” And, I don’t know about you, but I was on the wrong side of that fence enough times as a college kid that I have no taste left for it.