DaedTech

Stories about Software

What Drives Waterfall Projects?

To start off the week, I have a satirical post about projects developed using the waterfall ‘methodology.’ (To understand the quotes, please see my post on why I don’t think waterfall is actually a methodology at all). I figured that since groups that use agile approaches and industry best practices have a whole set of xDD acronyms, such as TDD, BDD, and DDD, waterfall deserved a few of its own. So keep in mind that while this post is intended to be funny, I think there is a bit of relevant commentary to it.

Steinbeck-Driven Development (SDD)

For those of you who’ve never had the pleasure to read John Steinbeck’s “Of Mice and Men,” any SDD practitioner will tell you that it’s a heartwarming tale of two friends who overcome all odds during the Great Depression, making it cross-country to California to start a rabbit petting zoo. And it’s that outlook on life that they bring to the team when it comes to setting deadlines, tracking milestones, and general planning.

Relentlessly optimistic, the SDD project manager reacts to a missed milestone by reporting to his superiors that everything is a-OK because the team will just make it up by the time they hit the next one. When the next milestone is missed by an even wider margin, the same logic applies. Like a shopping addict or degenerate gambler blithely saying, “you gotta spend money to make money,” this project manager will continue to assume on-time delivery right up until the final deadline passes with no end in sight. When that happens, it’s no big deal–they just need a week to tie up a few loose ends. When that week is up, it’ll just be one more week to tie up a few loose ends. When that week expires, they face reality. No, just kidding. It’ll just be one more week to tie up a few loose ends. After enough time goes by, members of the team humor him with indulgent baby talk when he says this: “sure it will, man, sure it will. In a week, everything will be great, this will all be behind us, and we’ll celebrate with steaks and lobster at the finest restaurant in town.”

Spoiler alert. At the end of Steinbeck’s novel, the idyllic rabbit farm exists only in the mind of one of the friends, shortly before he’s shot in the back of the head by the other, in an act that is part merciful euthanasia and part self-preservation. The corporate equivalent of this is what eventually happens to our project manager. Every week he insists that everything will be fine and that they’re pretty close to the promised land until someone puts the project out of its misery.

Shooting-Star-Driven Development (SSDD)

Steinbeck-Driven Development is not for everyone. It requires a healthy ability to live in deluded fantasy land (or, in the case of the novel, to be a half-wit). SSDD project managers are not the relentless optimists that their SDD counterparts are. In fact, they’re often pretty maudlin, having arrived at a PM post on a project that everyone knows is headed for failure and basically running out the clock until company bankruptcy or retirement or termination or something. These are the equivalents of gamblers that have exhausted their money and credit and are playing at the penny tables in the hopes that their last few bucks will take them on an unprecedented win streak. Or, perhaps more aptly, they’re like a lonely old toy-maker, sitting in his woodshop, hoping for a toy to come to life and keep him company.

This PM and his project are doomed to failure, so he rarely bothers with status meetings, creates a bare minimum of PowerPoint decks, and rarely ever talks about milestones. Even his Gantt charts have a maximum of three nested dependencies. It’s clear to all that he’s phoning it in. He knows it’s unlikely, but he pins his slim hope to a shooting star: maybe one of his developers will turn out to be the mythical 100x developer that single-handedly writes the customer information portal in the amount of time that someone, while struggling to keep a straight face, estimated it would take.

As the project falls further and further behind schedule and the odds of a shooting-star developer become more and more remote, the SSDD project manager increasingly withdraws. Eventually, he just kind of fades away. If Geppetto were a real-life guy, carving puppets and asking stars to make them real children, he’d likely have punched out in a 19th-century sanitarium. There are no happy endings on SSDD projects–just lifeless, wooden developers and missed deadlines.

Fear-Driven Development (FDD)

There is no great mystery to FDD projects. The fate of the business is in your hands, developers. Sorry if that’s a lot of pressure, but really, it’s in your hands.

The most important part of an FDD project is to make it clear that there will be consequences–dire consequences–to the business if the software isn’t delivered by such and such date. And, of course, dire consequences for the business are pretty darned likely to affect the software group. So, now that everyone knows what’s at stake, it’s time to go out and get that complex, multi-tiered, poorly-defined application built in the next month. Or else.

Unlike most waterfall projects, FDD enters the death march phase pretty much right from the start of coding. (Other waterfall projects typically only start the death march phase once the testing phase is cancelled and the inevitability of missing the deadline is clear.) The developers immediately enter a primal state of working fourteen hours per day because their very livelihoods hang in the balance. And, of course, fear definitely has the effect of getting them to work faster and harder than they otherwise would, but it also has the side effect of making the quality lower. Depending on the nature of the FDD project and the tolerance level of the customers for shoddy or non-functional software, this may be acceptable. But if it isn’t, time for more fear. Consequences become more dire, days become longer, and weekends are dedicated to the cause.

The weak have nervous breakdowns and quit, so only the strong survive to quit after the project ends.

Passive-Aggressive-Driven Development (PADD)

One of the most fun parts of waterfall development is the estimation from ignorance that takes place during the requirements or design phase. This is where someone looks at a series of Visio diagrams and says, “I think this project will take 17,388.12 man-hours in the low-risk scenario and 18,221.48 in the high-risk scenario.” The reason I describe this as fun is because it’s sort of like that game you play where everyone guesses the number of gumballs in a giant jar of gumballs and whoever is closest without going over wins a prize. For anything that’s liable to take longer than a week, estimation in a waterfall context is a ludicrous activity that basically amounts to making things up and trying to keep a straight face as you convince yourself and others that you did something besides picking a random number.

Well, I broke this up into 3,422 tasks and estimated each of those, so if they each take four hours, and everything goes smoothly when we try to put them all together with an estimate for integration of… ha! Just kidding! My guess is 10,528 hours–ten because I was thinking that it’d have to be five digits, five because it’s been that many days that we’ve been looking at these Gantt charts and sequence diagrams, and twenty-eight because that was my number in junior high football. And you can’t bid one hour over me because I’m last to guess!

But PADD PMs suck all of the fun out of this style of estimation by pressuring the hour-guessers (software developers) into retracting their estimates and claiming less time. But they don’t do it by showing anger–the aggression is indirect. When the developer says that task 1,024, writing the batch file import routine, will take approximately five hours, the PADD PM says, “Oh, wow. Must be pretty complicated. Jeez, I just assumed that a senior level developer could bang that out in no more than two. My bad.” Shamed, the developer retracts: “No, no–you’re right. I figured the EDI would be more complicated than it was, so I just realized that my estimate is actually two hours.”

Repeat this in aggregate, and the PADD PM is some kind of spectacular black belt/level 20/guru/whatever metric is used to measure PM productivity, because he just reduced the time to market by 60% before a single line of code was ever written. Amazing! Of course, talk at the beginning of the project is cheap. The real measure of waterfall project success is figuring out who to blame and getting others to absorb the cost when the project gets way behind schedule. And this is where the PADD master really shines.

To his bosses, he says, “man, I guess I just had too much faith in our guys–I mean, I know you hire the best.” To the developers, he says, “boy, your estimates seemed pretty reasonable to me, so I would have assumed that everything would be going on time if you were just putting in the hours and elbow grease… weird.” To the end-users/stakeholders, he says, “it’s strange, all of our other stakeholders who get us all of the requirements clearly and on time get their software on time–I wonder what happened here.”

There’s plenty of blame to go around, and PADD PMs make sure everyone partakes equally and is equally dissatisfied with the project.

Exception Handling Basics

The other day, I was reviewing some code, and I saw a series of methods conforming to the following (anti) ‘pattern’:

public class CustomerProcessor
{
    public void ProcessCustomer(Customer customer)
    {
        try
        {
            if (customer.IsActive)
                ProcessActiveCustomer(customer);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    private void ProcessActiveCustomer(Customer customer)
    {
        try
        {
            CheckCustomerName(customer);
            WriteCustomerToFile(customer);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    public void CheckCustomerName(Customer customer)
    {
        try
        {
            if (customer.Name == null)
                customer.Name = string.Empty;
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }

    private void WriteCustomerToFile(Customer customer)
    {
        try
        {
            using (StreamWriter writer = new StreamWriter(@"C:\temp\customer.txt"))
            {
                writer.WriteLine(customer.Name);
            }
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}

Every method consisted of a try block with the actual code of interest inside of it, followed by a catch of the general Exception type that was simply re-thrown. As I looked more through the code base, it became apparent to me that this was some kind of ‘standard’ (and thus perhaps exhibit A of how we get standards wrong). Every method in the project did this.

If you’re reading this and don’t know why it’s a facepalm, please read on. If you’re well-versed in C# exceptions, this will probably be review for you.

Preserve the Stack Trace

First things (problems) first. When you throw exceptions in C# by using the keyword “throw” with some exception type, you rip a hole in the fabric of your application’s space-time–essentially declaring that if none of the code that called you knows how to handle the singularity you’re belching out, the application will crash. I use hyperbolic metaphor to prove a point. Throwing an exception is an action that jolts you out of the normal operation of your program by using a glorified “GOTO,” except that you don’t actually know where it’s going because that’s up to the code that called you.

When you do this, the .NET framework is helpful enough to package up a bunch of runtime information for troubleshooting purposes, including something called the “stack trace.” If you’ve ever seen a .NET (or Java) site really bomb out, you’ve probably seen one of these–it’s a bunch of method names and line numbers that basically tells you, “A called B, which called C, which called D … which called Y, which called Z, which threw up and crashed your program.” When you throw an exception in C#, the framework saves the stack trace that got you to the method in question. This is true whether the exception happens in your code or deep, deep within some piece of code that you rely on.

So, in the code above, let’s consider what happens when the code is executed on a machine with no C:\temp folder. The StreamWriter constructor is going to throw an exception indicating that the path in question is not found. When it does, you will have a nice exception that tells you ProcessCustomer called ProcessActiveCustomer, which called WriteCustomerToFile, which called new StreamWriter(), which threw an exception because you gave it an invalid path. Pretty neat, huh? You just have to drill into the exception object in the debugger to see all of this (or have your application configured to display this information in a log file, on a web page, etc.).

But what happens next is kind of a bummer. Instead of letting this exception percolate up somewhere, we trap it right there in the method in our catch block. At that point, we throw an exception. Now remember, when you throw an exception object, the stack trace is recorded at the point that you throw, and any previous stack trace is blown away. Instead of it being obvious that the exception originated in the StreamWriter constructor, it appears to have originated in WriteCustomerToFile. But wait, it gets worse. From there, the exception is trapped in ProcessActiveCustomer and then again in ProcessCustomer. Since every method in the code base has this boilerplate, every exception generated will percolate back up to main and appear to have been generated there.

To put this in perspective, you will never be able to see or record the stack trace for the point at which the exception was thrown. Now, in development that’s not the end of the world since you can set the debugger to break where thrown instead of handled, but for production logging, this is awful. You’ll never have the foggiest idea where anything is coming from.
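
To make the problem concrete, here is a minimal, self-contained console sketch (my own illustration; the class and method names are made up and not taken from the code base above) that you can run to see the difference for yourself:

using System;

public class StackTraceDemo
{
    public static void Main()
    {
        try
        {
            Rethrower();
        }
        catch (Exception ex)
        {
            // With "throw ex;" in Rethrower, this trace begins at Rethrower.
            // With "throw;" instead, it would still point into Thrower, where the problem actually occurred.
            Console.WriteLine(ex.StackTrace);
        }
    }

    private static void Rethrower()
    {
        try
        {
            Thrower();
        }
        catch (Exception ex)
        {
            throw ex; // resets the stack trace to this line; "throw;" would preserve it
        }
    }

    private static void Thrower()
    {
        throw new InvalidOperationException("Something went wrong down here.");
    }
}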

How to fix this? It’s as simple as getting rid of the “throw ex;” in favor of just “throw;”. This preserves the stack trace while passing the exception on to the next handler. Another alternative, should you wish to add more information when you throw, would be to throw a new exception with the caught one passed in as a constructor argument, e.g., “throw new Exception("Extra context here", ex);”. The caught exception will be preserved, intact, and can be accessed in debugging via the “InnerException” property of the one you’re now throwing.

public class CustomerProcessor
{
    public void ProcessCustomer(Customer customer)
    {
        try
        {
            if (customer.IsActive)
                ProcessActiveCustomer(customer);
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    private void ProcessActiveCustomer(Customer customer)
    {
        try
        {
            CheckCustomerName(customer);
            WriteCustomerToFile(customer);
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    public void CheckCustomerName(Customer customer)
    {
        try
        {
            if (customer.Name == null)
                customer.Name = string.Empty;
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    private void WriteCustomerToFile(Customer customer)
    {
        try
        {
            using (StreamWriter writer = new StreamWriter(@"C:\temp\customer.txt"))
            {
                writer.WriteLine(customer.Name);
            }
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}

(It would actually be better here to remove the “Exception ex” altogether in favor of a bare “catch,” but I’m leaving it in for illustration purposes.)

Minimize Exception-Aware Code

Now that the stack trace is going to be preserved, the pattern here isn’t actively hurting anything in terms of program flow or output. But that doesn’t mean we’re done cleaning up. There’s still a lot of code here that doesn’t need to be there.

In this example, consider that there are only two methods that can generate exceptions: ProcessCustomer (if passed a null reference) and WriteCustomerToFile (various things that can go wrong with file I/O). And yet, we have exception handling in every method, even methods that are literally incapable of generating them on their own. Exception throwing and handling is extremely disruptive and it makes your code very hard to reason about. This is because exceptions, as mentioned earlier, are like GOTO statements that whip the context of your program from wherever the exception is generated to whatever place ultimately handles exceptions. Oh, and the boilerplate for handling them makes methods hard to read.

The approach shown above is a kind of needlessly defensive approach that makes the code incredibly dense and confusing. Rather than a strafing, shock-and-awe show of force for dealing with exceptions, the best approach is to reason carefully about where they might be generated and how one might handle them. Consider the following rewrite:

public class CustomerProcessor
{
    public void ProcessCustomer(Customer customer)
    {
        if (customer == null)
        {
            Console.WriteLine("You can't give me a null customer!");
            return; // guard clause: bail out rather than dereferencing a null customer below
        }
        try
        {
            ProcessActiveCustomer(customer);
        }
        catch (SomethingWentWrongWritingCustomerFileException)
        {
            Console.WriteLine("There was a problem writing the customer to disk.");
        }
    }

    private void ProcessActiveCustomer(Customer customer)
    {
        CheckCustomerName(customer);
        WriteCustomerToFile(customer);
    }

    public void CheckCustomerName(Customer customer)
    {
        if (customer.Name == null)
            customer.Name = string.Empty;
    }

    private void WriteCustomerToFile(Customer customer)
    {
        try
        {
            using (var writer = new StreamWriter(@"C:\temp\customer.txt"))
            {
                writer.WriteLine(customer.Name);
            }
        }
        catch (Exception ex)
        {
            // wrap whatever went wrong; the original exception remains available via InnerException
            throw new SomethingWentWrongWritingCustomerFileException("Ruh-roh", ex);
        }
    }
}

Notice that we only think about exceptions at the ‘endpoints’ of the little application. At the entry point, we guard against a null argument instead of handling it with an exception. As a rule of thumb, it’s better to handle validation via querying objects than by trying things and catching exceptions, both from a performance and from a readability standpoint. The other point of external interaction where we think about exceptions is where we’re calling out to the filesystem. For this example, I handle any exception generated by stuffing it into a custom exception type and throwing that back to my caller. This is a practice that I’ve adopted so that I know at a glance when debugging if it’s an exception I’ve previously reasoned about and am trapping or if some new problem is leaking through that I didn’t anticipate. YMMV on this approach, but the thing to take away is that I deal with exceptions as soon as they come to me from something beyond my control, and then not again until I’m somewhere in the program that I want to report things to the user. (In an actual application, I would handle things more granularly than simply catching Exception, opting instead to go as fine-grained as I needed to in order to provide meaningful reporting on the problem)
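
As an aside, the custom exception type referenced above isn’t defined in the snippet. A minimal sketch of what it might look like (the name and constructor here are simply assumed from the example) would be:

using System;

// Hypothetical definition of the custom exception type used in the example above.
// In a real code base you'd likely add the other standard exception constructors as well.
public class SomethingWentWrongWritingCustomerFileException : Exception
{
    public SomethingWentWrongWritingCustomerFileException(string message, Exception innerException)
        : base(message, innerException)
    {
    }
}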

Here it doesn’t seem to make a ton of difference, but in a large application it will–believe me. You’ll be much happier if your exception handling logic is relegated to the places in the app where you provide feedback to the user and where you call external stuff. In the guts of your program, this logic isn’t necessary if you simply take care to write code that doesn’t contain mistakes like null dereferences.

What about things like out of memory exceptions? Don’t you want to trap those when they happen? Nope. Those are catastrophic exceptions beyond your control, and all of the logging and granular worrying about exceptions in the world isn’t going to un-ring that bell. When these happen, you don’t want your process to limp along unpredictably in some weird state–you want it to die.

On the Lookout for Code Smells

One other meta-consideration worth mentioning here is that if you find it painful to code because you’re putting the same few lines of code in every class or every method, stop and smell what your code is trying to tell you. Having the same thing over and over is very much not DRY and not advised. You can spray deodorant on it with something like a code snippet, but I’d liken this to addressing a body odor problem by spraying yourself with cologne and then putting on a full body sweatsuit–code snippets for redundant code make things worse while hiding the symptoms.

If you really feel that you must have exception handling in every method, there are IL Weaving tools such as PostSharp that free you from the boilerplate while letting you retain the functionality and granularity you want. As a general rule of thumb, if you’re cranking out a lot of code and thinking, “there’s got to be a better way to do this,” stop and do some googling because there almost certainly is.

Up or Not: Ambition of the Expert Beginner

In the last post, I talked about the language employed by Expert Beginners to retain their status at the top of a software development group. That post was a dive into the language mechanics of how Expert Beginners justify decisions that essentially stem from ignorance–and often laziness, to boot. They generally have titles like “Principal Engineer” or “Architect” and thus are in a position to argue policy decisions based on their titles rather than on any kind of knowledge or facts supporting the merits of their approach.

In the series in general, I’ve talked about how Expert Beginners get started, become established, and, most recently, about how they fend off new ideas (read: threats) in order to retain their status with minimal effort. But what I haven’t yet covered and will now talk about is the motivations and goals of the Expert Beginner. Obviously, motivation is a complex subject, and motivations will be as varied as individuals. But I believe that Expert Beginner ambition can be roughly categorized into groups and that these groups are a function of their tolerance for cognitive dissonance.

Wikipedia (and other places) defines cognitive dissonance as mental discomfort that arises from simultaneously holding conflicting beliefs. For instance, someone who really likes the taste of steak but believes that it’s unethical to eat meat will experience this form of unsettling stress as he tries to reconcile these ultimately irreconcilable beliefs. Different people have different levels of discomfort that arise from this state of affairs, and this applies to Expert Beginners as much as anyone else. What makes Expert Beginners unique, however, is how inescapable cognitive dissonance is for them.

An Expert Beginner’s entire career is built on a foundation of cognitive dissonance. Specifically, they believe that they are experts while outside observers (or empirical evidence) demonstrate that they are not. So an Expert Beginner is sentenced to a life of believing himself to be an expert while all evidence points to the contrary, punctuated by frequent and extremely unwelcome intrusions of that reality.

So let’s consider three classes of Expert Beginner, distinguished by their tolerance for cognitive dissonance and their paths through an organization.

Xenophobes (Low Tolerance)

An Expert Beginner with a low tolerance for cognitive dissonance is basically in a state of existential crisis, given that he has a low tolerance for the thing that characterizes his career. To put this more concretely, a marginally competent person, inaccurately dubbed “Expert” by his organization, is going to be generally unhappy if he has little ability to reconcile or accept conflicting beliefs. A more robust Expert Beginner has the ability to dismiss evidence against his ‘Expert’ status as wrong or can simply shrug it off, but not Xenophobe. Xenophobe becomes angry, distressed, or otherwise moody when this sort of thing happens.

But Xenophobe’s long term strategy isn’t simply to get worked up whenever something exposes his knowledge gap. Instead, he minimizes his exposure to such situations. This process of minimizing is where the name Xenophobe originates; he shelters himself from cognitive dissonance by sheltering himself from outsiders and interlopers that expose him to it.

If you’ve been to enough rodeos in the field of software development, you’ve encountered Xenophobe. He generally presides over a small group with an iron fist. He’ll have endless reams of coding standards, procedures, policies, rules, and quirky ways of doing things that are non-negotiable and soul-sucking. This is accompanied by an intense dose of micromanagement and insistence on absolute conformity in all matters. Nothing escapes his watchful eye, and his management generally views this as dedication or even, perversely, mentoring.

This practice of micromanagement serves double duty for Xenophobe. Most immediately, it allows him largely to prevent the group from being infected by any foreign ideas. On the occasion that one does sneak in, it allows him to eliminate it swiftly and ruthlessly to prevent the same perpetrator from doing it again. But on a longer timeline, the oppressive micromanagement systematically drives out talented subordinates in favor of malleable, disinterested ones that are fine with brainlessly banging out code from nine to five, asking no questions, and listening to the radio. Xenophobe’s group is the epitome of what Bruce Webster describes in his Dead Sea Effect post.

All that Xenophobe wants out of life is to preserve this state of affairs. Any meaningful change to the status quo is a threat to his iron-fisted rule over his little kingdom. He doesn’t want anyone to leave because that probably means new hires, which are potential sources of contamination. He will similarly resist external pushes to change the group and its mission. New business ventures will be labeled “unfeasible” or “not what we do.”

Most people working in corporate structures want to move up at some point. This is generally because doing so means higher pay, but it’s also because it comes with additional status perks like offices, parking spaces, and the mandate to boss people around. Xenophobe is not interested in any of this (beyond whatever he already has). He simply wants to come in every day and be regarded as the alpha technical expert. Moving up to management would result in whatever goofy architecture and infrastructure he’s set up being systematically dismantled, and his ego couldn’t handle that. So he demurs in the face of any promotion to project management or real management because even these apparently beneficial changes would poke holes in the Expert delusion. You’ll hear Xenophobe say things like, “I’d never want to take my hands off the keyboard, man,” or, “this company would never survive me moving to management.”

Company Men (Moderate Tolerance)

Company Man does not share Xenophobe’s reluctance to move into a line or middle management role. His comfort with this move results from being somewhat more at peace with cognitive dissonance. He isn’t so consumed with preserving the illusion of expertise at all costs that he’ll pass up potential benefits–he’s a more rational and less pathological kind of Expert Beginner.

Generally speaking, the line to a mid-level management position requires some comfort with cognitive dissonance whether or not the manager came into power from the ranks of technical Expert Beginners. Organizations are generally shaped like pyramids, with executives at the top, a larger layer of management in the middle, and line employees at the bottom. It shares more than just shape with a pyramid scheme–it sells to the rank and file the idea that ascension to the top is inevitable, provided they work hard and serve those above them well.

The source of cognitive dissonance in the middle, however, isn’t simply the numerical impossibility that everyone can work their way up. Rather, the dissonance lies in the belief that working your way up has much to do with merit or talent. In other words, only the most completely daft would believe that everyone will inevitably wind up in the CEO’s office (or even in middle management), so the idea bought into by most is this: each step of the pyramid selects its members from the most worthy of the step below it. The ‘best’ line employees become line managers, the ‘best’ line managers become mid-level managers, and so on up the pyramid. This is a pleasant fiction for members of the company that, when believed, inspires company loyalty and often hard work beyond what makes rational sense for a salaried employee.

But the reality is that mid-level positions tend to be occupied not necessarily by the talented but rather by people who have stuck around the company for a long time, people who are friends with or related to movers and shakers in the company, people who put in long hours, people who simply and randomly got lucky, and people who legitimately get work done effectively. So while there’s a myth perpetuated in corporate America that ascending the corporate ‘ladder’ (pyramid) is a matter of achievement, it’s really more a matter of age and inevitability, at least until you get high enough into the C-level where there simply aren’t enough positions for token promotions. If you don’t believe me, go look at LinkedIn and tell me that there isn’t a direct and intensely strong correlation between age and impressiveness of title.

So, to occupy a middle management position is almost invariably to drastically overestimate how much talent and achievement it took to get to where you are. That may sound harsh, but “I worked hard and put in long hours and eventually worked my way up to an office next to the corner office” is a much more pleasant narrative than “I stuck with this company, shoveled crap, and got older until enough people left to make this office more or less inevitable.” But what does all of this have to do with Expert Beginners?

Well, Expert Beginners that are moderately tolerant of cognitive dissonance have approximately the same level of tolerance for it as middle management, which is to say, a fair amount. Both sets manage to believe that their positions were earned through merit while empirical evidence points to them getting there by default and managing not to fumble it away. Thus it’s a relatively smooth transition, from a cognitive dissonance perspective, for a technical Expert Beginner to become a manager. They simply trade technical mediocrity for managerial mediocrity and the narrative writes itself: “I was so good at being a software architect that I’ve earned a shot and will be good at being a manager.”

The Xenophobe would never get to that point because asking him to mimic competence at a new skill-set is going to draw him way outside of his comfort zone. He views moving into management as a tacit admission that he was in over his head and needed to be promoted out of danger. Company Man has no such compunction. He’s not comfortable or happy when people in his group bring in outside information or threaten to expose his relative incompetence, but he’s not nearly as vicious and reactionary as Xenophobe, as he can tolerate the odd creeping doubt of his total expertise.

In fact, he’ll often alleviate this doubt by crafting an “up after a while” story for himself vis-a-vis management. You’ll hear him say things like, “I’m getting older and can’t keep slinging code forever–sooner or later, I’ll probably just have to go into management.” It seems affable enough, but he’s really planning a face-saving exit strategy. When you start out not quite competent and insulate yourself from actual competence in a fast-changing field like software, failure is inevitable. Company Man knows this on some subconscious level, so he plans and aspires to a victorious retreat. This continues as high as Company Man is able to rise in the organization (though non-strategic thinkers are unlikely to rise much above line manager, generally). He’s comfortable with enough cognitive dissonance at every level that he doesn’t let not being competent stop him from assuming that he is competent.

Master Beginners (High Tolerance)

If Xenophobes want to stay put and Company Men want to advance, you would think that the class of people who have high tolerance for and thus no problem with cognitive dissonance, Master Beginners, would chomp at the bit to advance. But from an organizational perspective, they really don’t. Their desired trajectory from an org chart perspective is somewhere between Xenophobe and Company Man. Specifically, they prefer to stay put in a technical role but to expand their sphere of influence, breadth-wise, to grow the technical group under their tutelage. Perhaps at some point they’d be satisfied to be CTO or VP of Engineering or something, but only as long as they didn’t get too far away from their domain of ‘expertise.’

Master Beginners are utterly fascinating. I’ve only ever encountered a few of these in my career, but it’s truly a memorable experience. Xenophobes are very much Expert Beginners by nurture rather than nature. They’re normal people who backed their way into a position for which they aren’t fit and thus have to either admit defeat (and, worse, that their main accomplishment in life is being in the right place at the right time) or neurotically preserve their delusion by force. Company Men are also Expert Beginners by nurture over nature, though for them it’s less localized than Xenophobes. Company Men buy into the broader lie that advancement in command-and-control bureaucratic organizations is a function of merit. If a hole is poked in that delusion, they may fall, but a lot of others come with them. It’s a more stable fiction.

But Master Beginners are somehow Expert Beginners by nature. They are the meritocratic equivalent of sociopaths in that their incredible tolerance for cognitive dissonance allows them glibly and with an astonishing lack of shame to feign expertise when doing so is preposterous. It appears on the surface to be completely stunning arrogance. A Master Beginner would stand up in front of a room full of Java programmers, never having written a line of Java code in his life, and proceed to explain to them the finer points of Java, literally making things up as he went. But it’s so brazen–so utterly beyond reason–that arrogance is not a sufficient explanation. It’s like the Master Beginner is a pathological liar of some kind (though he’s certainly also arrogant.) He most likely actually believes that he knows more about subjects he has no understanding of than experts in those fields because he’s just that brilliant.

This makes him an excellent candidate for Expert Beginnerism both from an external, non-technical perspective and from a rookie perspective. To put it bluntly, both rookies and outside managers listen to him and think, “wow, that must be true because nobody would have the balls to talk like that unless they were absolutely certain.” This actually tends to make him better at Expert Beginnerism than his cohorts who are more sensitive to cognitive dissonance, roughly following the psychological phenomenon coined by Walter Langer:

People will believe a big lie sooner than a little one. And if you repeat it frequently enough, people will sooner than later believe it.

So the Master Beginner’s ambition isn’t to slither his way out of situations where he might be called out on his lack–he actually embraces them. The Master Beginner is utterly unflappable in his status as not just an expert, but the expert, completely confident that things he just makes up are more right than things others have studied for years. Thus the Master Beginner seeks to expand aggressively. He wants to grow the department and bring more people under his authority. He’ll back down from no challenge to his authority from any angle, glibly inventing things on the spot to win any argument, pivoting, spinning, shouting, threatening–whatever the situation calls for. And he won’t stop until everyone hails him as the resident expert and does everything his way.

Success?

I’ve talked about the ambitions of different kinds of Expert Beginners and what drives them to aspire to these ends. But a worthwhile question to ask is whether or not they tend to succeed and why or why not. I’m going to tackle the fate of Expert Beginners in greater detail in my next post on the subject, but the answer is, of course, that it varies. What tends not to vary, however, is that Expert Beginner success is generally high in the short term and drops to nearly zero on a long enough time line, at least in terms of their ambitions. In other words, success as measured by Expert Beginners themselves tends to be somewhat ephemeral.

It stands to reason that being deluded about one’s own competence isn’t a viable, long-term success strategy. There is a lesson to be learned from the fate of Expert Beginners in general, which is that better outcomes are more likely if you have an honest valuation of your own talents and skills. You can generally have success on your own terms through the right combination of strategy, dedication, and earnest self-improvement, but to improve oneself requires a frank and honest inventory of one’s shortcomings. Anything short of that, and you’re simply advancing via coincidence and living on borrowed time.

Edit: The E-Book is now available. Here is the publisher website which contains links to the different media for which the book is available.

How Stagnation is Justified: Language of the Expert Beginner

So far in the “Expert Beginner” series of posts, I’ve chronicled how Expert Beginners emerge and how they wind up infecting an entire software development group. Today I’d like to turn my attention to the rhetoric of this archetype in a software group already suffering from Expert Beginner-induced rot. In other words, I’m going to discuss how Expert Beginners deeply entrenched in their lairs interact with newbies to the department.

It’s no accident that this post specifically mentions the language, rather than interpersonal interactions, of the Expert Beginner. The reason here is that the actions aren’t particularly remarkable. They resemble the actions of any tenured employee, line manager or person in a position of company trust. They delegate, call the shots, set policy, and probably engage in status posturing where they play chicken with meeting lateness or sit with their feet on the table when talking to those of lesser organizational status. Experts and Expert Beginners are pretty hard to tell apart based exclusively on how they behave. It’s the language that provides a fascinating tell.

Most people, when arguing a position, will cite some combination of facts and deductive or inductive reasoning, perhaps with the occasional logical fallacy sprinkled in by mistake. For instance, “I left the windows open because I wanted to air out the house and I didn’t realize it was supposed to rain,” describes a choice and the rationale for it with an implied mea culpa. The Expert Beginner takes a fundamentally different approach, and that’s what I’ll be exploring here.

False Tradeoffs and Empty Valuations

If you’re cynical or intelligent with a touch of arrogance, there’s an expression you’re likely to find funny. It’s a little too ubiquitous for me to be sure who originally coined the phrase, but if anyone knows, I’m happy to amend and offer an original source attribution. The phrase is, “Whenever someone says ‘I’m not book smart, but I’m street smart,’ all I hear is, ‘I’m not real smart, but I’m imaginary smart.'” I had a bit of a chuckle the first time I read that, but it’s not actually what I, personally, think when I hear someone describe himself as “street smart” rather than “book smart.” What I think is being communicated is “I’m not book smart, and I’m sort of sensitive about that, so I’d like that particular valuation of people not to be emphasized by society.” Or, more succinctly, “I’m not book smart, and I want that not to be held against me.”

“Street smart” is, at its core, a counterfeit status currency proffered in lieu of a legitimate one. It has meaning only in the context of it being accepted as a stand-in for the real McCoy. If I get the sense that you’re considering accepting me into your club based on the quantity of “smarts” that I have, and I’m not particularly confident that I can come up with the ante, I offer you some worthless thing called “street smarts” and claim that it’s of equal replacement value. If you decide to accept this currency, then I win. And, interestingly, if enough other people decide to accept it, then it becomes a real form of currency (which I think it’d be pretty easy to argue that “street smart” has).

Whatever you may think of the “book smart vs. street smart” dichotomy, you’d be hard-pressed to argue that the transaction doesn’t follow the pattern of “I want X,” “I don’t have that, but I have Y (and I’m claiming Y is just as good).” And understanding this attempted substitution is key to understanding one of the core planks of the language of Expert Beginners. They are extremely adept at creating empty valuations as stand-ins for meaningful ones. To see this in action, consider the following:

  1. Version control isn’t really that important if you have a good architecture where two people never have to touch the same file.
  2. We don’t write unit tests because our developers spend extra time inspecting the code after they’ve written it.
  3. Yeah, we don’t do a lot of Java here, but you can do anything with Perl that you can with Java.
  4. Our build may not be automated, but it’s very scientific and there’s a lot of complicated stuff that requires an expert to do manually.
  5. We don’t need to be agile or iterative because we write requirements really well.
  6. We save a lot of money by not splurging on productivity add-ins and fancy development environments, and it makes our programmers more independent.

In all cases here, the pattern is the same. The Expert Beginner takes something that’s considered an industry standard or best practice, admits to not practicing it, and offers instead something completely unacceptable (or even nonsensical/made up) as a stand-in, implying that you should accept the switch because they say so.

Condescension and Devaluations

This language tactic is worth only a brief mention because it’s pretty obvious as a ploy, and it squanders a lot of realpolitik capital in the office if anyone is paying attention. It’s basically the domain-specific equivalent of some idiot being interviewed on the local news, just before dying in a hurricane, saying something like “I’m not gonna let a buncha fancy Harvard science-guys tell me about storms–I’ve lived here for forty years and I can feel ’em comin’ in my bones. If I need to evacuate, I’ll know it!”

In his fiefdom, an Expert Beginner is obligated to have some explanation for ignoring best practices, one that at least rises to the level of sophistry, however improbable. This is where the last section’s false valuations shine. Simply scoffing at best practices and new ideas has to be done sparingly or upper management will start to notice and create uncomfortable situations. And besides, this reaction is frankly beneath the average Expert Beginner–it’s how a frustrated and petulant Novice would react. Still, it will occasionally be trotted out in a pinch and can be effective in that usage scenario since it requires no brain cells and will just be interpreted as passion rather than intellectual laziness.

The Angry Driver Effect

If you ever watch a truly surly driver on the highway, you’ll notice an interesting bit of irritable cognitive bias against literally everyone else on the road. The driver will shake her fist at motorists passing her, calling them “maniacs,” while shaking the same fist at those going more slowly, calling them “putzes.” There’s simply no pleasing her.

An Expert Beginner employs this tactic with all members of the group as well, although without the anger. For example, if she has a Master’s degree, she will characterize solutions put forth by those with Bachelor’s degrees as lacking formal polish, while simultaneously characterizing those put forth by people with PhDs as overly academic or out of touch. If a solution different from hers is presented by someone who also has a Master’s, she will pivot to another subject.

Is your solution one that she understands immediately? Too simplistic. Does she not understand it? Over-engineered and convoluted. Are you younger than her? It’s full of rookie mistakes. Older? Out of touch and hackneyed. Did you take longer than it would have taken her? You’re inefficient. Did it take you less time? You’re careless. She will keep pivoting, as needed, ad infinitum.

Taken individually, any one of these characterizations makes sense and impresses. In a way, it’s like the cold-reading game that psychics play. Here the trick is to identify a personal difference and use it to characterize anything produced by its owner as negative. The Expert Beginner controls the location of the goalposts via framing in the same way that the psychic rattles off a series of ‘predictions’ until one is right, as evidenced by micro-expressions. The actual subtext is, “I’m in charge and I get to define good and bad, so good is me, and some amount less good is you.”

Interestingly, the Expert Beginner’s definition of good versus bad is completely orthogonal to any external characterizations of the same. For instance, if the Expert Beginner had been a C student, then, in her group, D students would be superior to A students because of their relative proximity to the ideal C student. The D students might be “humble, but a little slow,” while A students would be “blinded by their own arrogance,” or some such thing. It’s completely irrelevant that society at large considers A students to be of the most value.

Experts are Wrong

Given that Expert Beginners are of mediocre ability by definition, the subject of expertise is a touchy one. Within the small group, this isn’t really a problem since the Expert Beginner is the designated Expert there by definition. But within a larger scope, actual Experts exist, and they do present a problem–particularly when group members are exposed to them and realize that discrepancies exist.

For instance, let’s say that an Expert Beginner in a small group has never bothered with source control for code, due to laziness and a simple lack of exposure. This decision is likely settled case-law within the group, having been justified with something like the “good architecture” canard from the Empty Valuations section. But if any group member watches a Pluralsight video or attends a conference which exposes them to industry experts and best practices, the conflict becomes immediately apparent and will be brought to the attention of the reigning Expert Beginner. In the last post, I made a brief example of an Expert Beginner reaction to such a situation: “you can’t believe everything you see on TV.”

This is the simplest and most straightforward reaction to such a situation. The Expert Beginner believes that he and his ‘fellow’ Expert have a simple difference of opinion among ‘peers.’ While it may be true that one Expert speaks at conferences about source control best practices and the other one runs the IT for Bill’s Bait Shop and has never used source control, either opinion is just as valid. But on a long enough timeline, this false relativism falls apart due to mounting disagreement between the Expert Beginner and real Experts.

When this happens, the natural bit of nuance that Expert Beginners introduce is exceptionalism. Rather than saying, “well, source control or not, either one is fine,” and risk looking like the oddball, the Expert Beginner invents a mitigating circumstance that would not apply to other Experts, effectively creating an argument that he can win by forfeit. (None of his opponents are aware of his existence and thus offer no counter-argument.) For instance, the Bait Shop’s Expert Beginner might say, “sure, those Experts are right that source control is a good idea in most cases, but they don’t understand the bait industry.”

This is a pretty effective master-stroke. The actual Experts have been dressed down for their lack of knowledge of the bait industry while the Expert Beginner is sitting pretty as the most informed one of the bunch. And, best of all, none of the actual Experts are aware of this argument, so none of them will bother to poke holes in it. Crisis averted.

All Qualitative Comparisons Lead Back to Seniority

A final arrow in the Expert Beginner debate quiver is the simple tactic of non sequitur about seniority, tenure, or company experience. On the surface this would seem like the most contrived and least credible ploy possible, but it’s surprisingly effective in corporate culture, where seniority is the default currency in the economy of developmental promotions. Most denizens of the corporate world automatically assign value and respect to “years with the company.”

Since there is no bigger beneficiary of this phenomenon than an Expert Beginner, he plows investment into it in an attempt to drive the market price as high as possible. If you ask the Expert Beginner why there is no automated build process, he might respond with something like, “you’ll understand after you’ve worked here for a while.” If you ask him this potentially embarrassing question in front of others, he’ll up the ante to “I asked that once too when I was new and naive–you have a lot to learn,” at which time anyone present is required by corporate etiquette to laugh at the newbie and nervously reaffirm that value is directly proportional to months employed by Acme Inc.

The form and delivery of this particular tactic will vary a good bit, but the pattern is the same at a meta-level. State your conclusion, invent a segue, and subtly remind everyone present that you’ve been there the longest. “We tried the whole TDD thing way back in 2005, and I think all of the senior developers and project managers know how poorly that went.” “Migrating from VB6 to something more modern definitely sounds like a good idea at first, but there are some VPs you haven’t met that aren’t going to buy that one.”

It goes beyond simple non sequitur. This tactic serves as a thinly veiled reminder as to who calls the shots. It’s a message that says, “here’s a gentle reminder that I’ve been here a long time and I don’t need to justify things to the likes of you.” Most people receive this Expert Beginner message loudly and clearly and start to join in, hopeful for the time they can point the business end at someone else as part of the “Greater Newbie Theory.”

Ab Hominem

In the beginning of this post, I talked about the standard means for making and/or defending arguments (deductive or inductive reasoning) and how Expert Beginners do something else altogether. I’ve provided a lot of examples of it, but I haven’t actually defined it. The central feature of the Expert Beginner’s influence-consolidation language is an inextricable fusing of arguer and argument, which is the polar opposite of standard argument form. For instance, it doesn’t matter who says, “if all humans have hearts, and Jim is a human, then Jim has a heart.” The argument stands on its own. But it does matter who says, “Those of us who’ve been around for a while would know why not bothering to define requirements is actually better than SCRUM.” That argument is preposterous from an outsider or a newbie but acceptable from an Expert Beginner.

A well-formed argument says, “if you think about this, you’ll find it persuasive.” The language of the Expert Beginner says, “it’s better if you don’t think about this–just remember who I am, and that’s all you need to know.” This can be overt, such as with the seniority dropping, or it can be more subtle, such as with empty valuations. It can also be stacked so that a gentle non sequitur can be followed with a nastier “get off of my lawn” type of dressing down if the first message is not received.

In the end, it all makes perfect sense. Expert Beginners arrive at their stations through default, rather than merit. As such, they have basically no practice at persuading anyone to see the value of their ideas or at demonstrating the superiority of their approach. Instead, the only thing they can offer is the evidence that they have of their qualifications–their relative position of authority. And so, during any arguments or explanations, all roads lead back to them, their position titles, their time with the company, and the fact that their opinions are inviolate.

If you find yourself frequently making arguments along the lines of the ones that I’ve described here, I’d suggest putting a little more thought and effort into them from now on. No matter who you are or how advanced you may be, having to defend your opinions and approaches is an invaluable skill that should be kept as sharp as possible. You’ll often learn just as much from justifying your approach as formulating it in the first place. If you’re reading this article, it’s pretty unlikely that you’re an Expert Beginner. And, assuming that you’re not, you probably want to make sure people don’t confuse you with one.

Next: “Up or Not: Ambition of the Expert Beginner”

Edit: The E-Book is now available. Here is the publisher website which contains links to the different media for which the book is available.

How We Get Coding Standards Wrong

The other day, I sat in on a meeting where a large-ish group was discussing “standards” for their particular area of software development. I have the word standards in quotes because, by design, there wasn’t a clear definition as to what sorts of standards they would be; it was an open-ended exercise. The standard could cover anything from variable casing to development practices and principles to holistic approaches. When asked for my input, I was sort of bemused by the process, and I said that I didn’t really have much in the way of an answer. I declined to elaborate much more on that since I wasn’t interested in derailing the meeting in any way, but it did get me to thinking about why the exercise seemed sort of futile to me.

I generally have a natural leeriness when it comes to coding and development standards and especially activities designed to flesh those out, and in this post I’d like to explore why. It isn’t that I don’t believe standards should exist or that I believe they aren’t important. It’s just that I think we frequently miss the point and create standards out of some sense that it’s The Right Thing, and thus create standards that are pointless or even detrimental.

Standards by Committee Anti-Pattern

One problem with defining standards in a group setting is that any group containing some socially savvy people is going to gravitate toward diplomacy. Contentious and arbitrary subjects (so-called “religious wars”) like camel case versus Pascal case or where the bracket after a function goes will be avoided in favor of things upon which a consensus may be reached. But think about what’s actually happening–everyone’s agreeing that the things that everyone already does should be standardized. This is a fairly vacuous exercise in bureaucracy, useful only in the hypothetical realm where a new person comes on board and happens to disagree with something upon which twenty others agree.

People doing this are solving a problem that doesn’t exist: “how do we make sure everyone does this the same way when everyone’s currently doing it the same way?” It also tends to favor documenting current process rather than thinking critically about ideal process.

Let’s capture all of the stuff that we all do and write it down. Okay, so, coding standards. When working on a .NET project, first drive to the office. Then, have your keycard ready to get in the building. Next, enter the building…

Obviously this is silly, but hopefully the point hits home. The simple fact that you do something or that everyone in the group does something doesn’t mean that it’s worth capturing as trainable knowledge and enforcing on the group. And yet this is a direction I frequently see groups take as they get into a groove of “yes, and” when discussing standards. It can just turn into “let’s make a list of everything we do.”

Pointless Homogeneity

The concept of capturing the intersection of everyone’s approach and coding style dovetails into another problem with groups hashing out standards: a group-think bias. Slightly different from the notion that everything common should be documented, this is the notion that everything should be common. For instance, I once worked in a shop where developers were all mandated to use the same diff tool. I’m not kidding. If anyone bothered with a justification for this, I don’t recall what it was, other than some nod to pointless standards.

You can take this pretty far. Imagine demands that you use the same syntax highlighting colors as your peers or that you keep your file system organized in the same way as everyone else. What does this have to do with the code you’re producing? Who knows…

It might seem like the kind of thing where you should just indulge the harmless control freak driving it or the group that dreams it up as a unit, but this runs the risk of birthing a toxic culture. With everything, however inconsequential, homogenized, there is no room for creative thinkers to innovate with new approaches.

Make-Work Tasks

Another risk you run when coming up with standards is to create so-called standards that amount to codifying and mandating busy-work. I’ve made my evolving opinion of comments in code quite clear on a few occasions, and I consider them to be an excellent example. When you make “comment every method” a standard, you’re standardizing the procedure (mindlessly adding comments) and not the goal (clarity and communication).
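
To make that distinction concrete, here’s a small, invented C# sketch (the class, methods, and business rule are hypothetical, not from any real codebase) contrasting a comment that exists only to satisfy the mandate with one that actually communicates:

```csharp
using System.Collections.Generic;

// Hypothetical example (names invented): the procedure that a "comment every
// method" mandate produces versus the clarity it was presumably after.
public class CustomerLookup
{
    private readonly Dictionary<int, Customer> _customersById =
        new Dictionary<int, Customer>();

    // Satisfies the mandate but tells the reader nothing the signature doesn't:
    /// <summary>Gets the customer by id.</summary>
    /// <param name="id">The id.</param>
    /// <returns>The customer.</returns>
    public Customer GetCustomerById(int id)
    {
        return _customersById[id];
    }

    // Communicates something the code can't say on its own: customers imported
    // before the (hypothetical) 2012 migration have no email on file, so
    // callers should fall back to postal notification for them.
    public Customer GetCustomerForNotification(int id)
    {
        return _customersById[id];
    }
}

public class Customer
{
    public string Email { get; set; }
}
```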

There are plenty of other examples one might dream up. The silly mandate of “sort and organize usings” that I blogged about some time back comes to mind. This is another example of standardizing pointless make-work tasks that provide no substantive benefit. In all cases, the problem is that you’re not only asking developers to engage in brainless busy-work–you’re codifying it as an official mandate.

Getting Too Specific

Another source of issues that I’ve seen in the establishment of standards is a tendency to get too specific. “What sort of convention should we use when we declare a generic interface below an enumeration inside of a nested class?” Really? Does that come up often enough that it’s important for everyone to get on the same page about how to approach it?
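
If it helps to picture it, here’s roughly the construct that rhetorical question describes (invented for illustration); it’s the kind of thing most codebases will contain once, if ever:

```csharp
// A contrived nesting of declarations, made up to match the question above.
// How often does a team really need a ratified convention for this?
public class ReportingModule
{
    public class Internals
    {
        public enum ExportFormat { Csv, Pdf, Xlsx }

        // The generic interface declared below an enumeration inside a nested class
        public interface IExporter<TReport> where TReport : class
        {
            void Export(TReport report, ExportFormat format);
        }
    }
}
```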

I recognize the human desire for set closure; we don’t like it when a dresser is missing a drawer or when we haven’t collected the whole set, but sometimes you’ve just got to let it go. We’re not the IRS–it’s going to be alright if there are contingencies that we haven’t covered and oddball loopholes that we haven’t addressed.

Missing the Point

For me, this is the biggest one. Usually standards discussions are about superficial programming concerns rather than substantive ones, and that’s unfortunate. It’s the aforementioned camel-versus-Pascal-case wars, or where to put brackets and which kinds to use. To var or not to var? Should constants be all caps? If an interface is in a forest and doesn’t have an “I” in front of its name, is it still an interface?

I understand the benefit of consistency in naming, casing, and other syntactic considerations. I really do, in spite of my tendency to be dismissive and iconoclastic on this front when discussing them. But, first off, let’s not pretend that there really is a right way with these things–there’s just the way that you’re used to doing them. And, more importantly, let’s not pretend that this is really all that important in the grand scheme of things.

We use consistent casing and naming so that a reader of the code can tell at a glance whether something is a field or a local variable or whether something is a method or a property or a constant. It’s really about promoting readability, which, in turn, is about maximizing maintainability. But you know what’s much harder on maintainability than Jones’s great constant casing blunder of 2010 where he forgot to use ALL CAPS? Writing bad code.
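
For the sake of illustration, here’s what that at-a-glance readability looks like with one common set of .NET conventions (the class is invented, and plenty of shops vary the details):

```csharp
using System;

// Common (though hardly universal) .NET conventions; the point is just that
// casing and prefixes let a reader tell constants, fields, properties, and
// locals apart at a glance. The class itself is invented for illustration.
public class OrderProcessor
{
    private const decimal MaxDiscount = 0.25m;        // constant (some shops insist on MAX_DISCOUNT)
    private readonly string _regionCode;              // field: underscore prefix
    public int OrdersProcessed { get; private set; }  // property: Pascal case

    public OrderProcessor(string regionCode)
    {
        _regionCode = regionCode;
    }

    public decimal ApplyDiscount(decimal orderTotal, decimal requestedDiscount)
    {
        // locals and parameters: camel case
        var effectiveDiscount = Math.Min(requestedDiscount, MaxDiscount);
        OrdersProcessed++;
        return orderTotal * (1 - effectiveDiscount);
    }
}
```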

If you’re banging out behemoth methods with control statements eight deep, all of the camel case in the world isn’t going to make your code readable. A standard mandating that all such methods be prepended with “yuck” might help, but the real thing that you need is some standards about writing clean code. Keeping methods and classes small and focused, principles like DRY and SOLID, and other good design principles are much more important standards to which to aspire, but they’re often less concrete and harder to enforce. It’s much easier and more rote for a code reviewer to look for casing issues or missing comments than to analyze code for good software practice and object-oriented design. The latter is often less cut-and-dried and more a matter of degrees, so it’s frequently glossed over in favor of more tangible, simple things. Problem is, those tangible, simple things really aren’t all that important to the health of your applications and projects over the long haul.
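
As a rough, invented sketch of the difference (the domain types here are hypothetical), compare a miniature version of the nested behemoth with the same logic restructured; notice that no casing or commenting standard is involved in the improvement:

```csharp
using System.Collections.Generic;
using System.Linq;

// Miniature illustration (types and rules invented): the nesting, not the
// casing, is what hurts readability in a "behemoth" method.
public static class ShippingRules
{
    public static bool CanShip(Order order)
    {
        if (order != null)
        {
            if (order.Items.Count > 0)
            {
                if (order.CustomerInGoodStanding)
                {
                    if (order.Items.All(item => item.InStock))
                    {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    // The same logic with guard clauses instead of nesting. The readability
    // came from structure, not from any naming or bracket convention.
    public static bool CanShipFlattened(Order order)
    {
        if (order == null) return false;
        if (order.Items.Count == 0) return false;
        if (!order.CustomerInGoodStanding) return false;
        return order.Items.All(item => item.InStock);
    }
}

public class Order
{
    public List<OrderItem> Items { get; } = new List<OrderItem>();
    public bool CustomerInGoodStanding { get; set; }
}

public class OrderItem
{
    public bool InStock { get; set; }
}
```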

It’s All Just Premature Optimization

The common thread here is that all of these standards anti-patterns result from solving non-existent problems. If you have some collection of half-baked standards at your company that go on for some pages and then say, “after that, follow the Microsoft standards,” imagine how they came about. I bet a few of the group’s original engineers or most senior people had a conversation that went something like, “We should probably have some standards.” “Yeah, I guess… but why now?” “I dunno… I think it’s, like, what you’re supposed to do.”

I suspect that if you did a survey, a lot more standards documents have started with conversations like that than with conversations about hours lost to maintenance and difficulty reading code. They are born out of cargo-cult practice rather than a necessity to solve some problem. Philosophically, they start as solutions in search of a problem rather than solutions to actual problems.

The situation is complicated by the fact that adoption of certain standards may have solved real problems in the past for developers on the team, and they’re simply doing the smart thing and carrying their knowledge forward. The trouble is that not all projects face the same problems. When discussing approaches, start with abstract and general abiding principles like SOLID and DRY and take it from there. If half of your team uses camel case and the other half Pascal and it’s causing communication and maintenance difficulties, flip a coin and set a standard. Repeat as necessary with other standards to keep the project moving and humming. But don’t make them up just for the sake of doing so. You wouldn’t start writing random code that may never solve any actual problem, so why create a standard that way?