DaedTech

Stories about Software


URL Scheme Changed

Just a quick note here. I decided over the weekend to change the URL scheme to remove the query strings from the URLs. I do lose the +1/Like/Tweet/etc. counts in the social media plugin, but that should be the only thing affected. The old URLs with query strings and post IDs will redirect to the new URL scheme, so any links here and any trackbacks that appear should still be valid.

If anyone reading has an issue along these lines, please leave a comment here or drop me an email, and I’ll see about getting it straightened out.


How to Lose Friends and Alienate Programmers

The Road to Madness is Paved with Lying APIs

Today, I’d like to discuss the little things that add up to make your code a chore for others to work with. I’m not talking about code that’s incomplete or non-functional, or other obvious problems, but rather subtle, insidious ones. The obvious problems result in you going to the person in question and demanding a fix, or else simply rolling your own version instead. The insidious ones result in hidden wasted time, undue frustration, and the need to head down to the local pub to take the edge off after a long, annoying day. Unlike broken or missing code, these problems are not immediately obvious, and they can drive you nuts by making it unclear where the problem really lies.

A few posts ago, I mentioned a piece of code that contained an API with a quirky method call:

public void DoSomething(bool shouldPerform)
{
   if(shouldPerform)
   {
       //A whole bunch of processing
   }
}

I changed the name of the parameter there for the sake of illustration in that post. Below, I’ll show a closer approximation of the actual API at play, leaving out the implementation details. The code in question is designed to implement undo and redo functionality. As a client of the service, you log an object and the name of a property that you want to record, and the service encapsulates storing those values, along with the ability to do conceptual undo and redo operations.

Here is more or less what the API involved (modified somewhat, both to distill the issues I’ll be discussing and to make it clear what the parameters are actually supposed to do; their actual names don’t make this clear, which is itself part of the point of this post):


public bool StartTransaction(string transactionName);
public bool StartTransaction(string transactionName, object record, string recordProperty);
public void EndTransaction(bool isTransactionStarted);
public void Record(object record, string recordProperty);
public void Undo();
public void Redo();

(There are a bunch more overloads, but those aren’t really relevant to what I’m going to post here about this particular API).

When I first interact with a piece of code as a client, my inclination is to trust the author of the code. So, if I’m using a service interface that says GetFooById(int id), I don’t go inspecting the internals of the implementations of that interface — I assume that it’s going to get me a Foo, given the Foo Id. I only start inspecting the internals if there is unexpected behavior or a problem in general.

I don’t think that’s an unreasonable policy. Perhaps it’s a bit generous at times, but I think that if more people made this assumption, more people would code to that standard, and more would get done.

With this code, what I wanted to do was write a facade for it, since it was implemented not as an actual interface, but as a hybrid of static/singleton/monostate, where some of the methods were static and some were instance methods, but all operated on the same static, global state. I didn’t want my service and presentation tier classes interacting with this directly, so I wrapped the functionality in a facade class and had it implement an interface covering the Start/End transaction and Record() functionality.

So, I set about coding my facade and letting that inform the particulars of the interface, and then I coded the rest of the classes against my interface so that I could proceed in TDD fashion. That way, the only untested thing when integration time came was the actual facade implementation. So, when it came time to integrate, I fired up the application and ran it, and… everything went wrong.

Thinking back on this a few days later with some perspective, I realized that my interaction with and troubleshooting of my facade created a “teachable moment” of sorts, for myself, and hopefully in general. The problem lay in my misunderstanding of the implementation (which does work in production, if not in a way that is obvious at all through the API), but the problem also lay, subtly, in what I had to do in order to gain the proper understanding. The first part is on me, the second part is on the service author. As to whether I should have taken the API at face value or not, I’d say that’s a matter of individual opinion.

Parameters that Don’t Matter

After getting all of my tests to pass, firing up the application to test the integration, and experiencing disaster, the first thing I did was spend a good bit of time going over my assumptions for my new code. After all, my code was new, and the service code was pre-existing and functional. After wasting a good bit of time eliminating this as a possible issue, I turned next to my facade and its interaction with the service API. Everything there seemed reasonable as well, so I resigned myself to having to examine the implementation details of the service in order to understand what it was actually doing.

The first thing that I discovered was that there were lots of overloads, both public and private, and the result of this was a deep stack between the API and the actual execution code. It took me a while, but I wrote down the various parameters and traced them all the way through to their eventual execution locations. At this point, I had to double and triple check my work, because I discovered that, when calling StartTransaction(string transactionName), it made absolutely no difference what I passed in. This was my first moment of sympathy for Charlie Brown, lined up to kick a football that was about to be yanked away.

That’s not to say that the parameter transactionName wasn’t used. It sure was. It was passed all over the place, and it was set in a field in one of the classes of the service. But, that field was never operated on. Not once. The method was demanding a parameter of me that was used, but irrelevant. This is substantially worse than unused, which CodeRush would have shown me immediately. Instead, I had to dig through the bowels of an existing service to discover that the parameter was used in the sense of being read and stored, but not used in the sense of being useful.
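In other words, something like this hypothetical reconstruction (all names here are invented for illustration, not the real service’s):

public class UndoRedoService
{
    private string _transactionName; //Written below, read nowhere

    public bool StartTransaction(string transactionName)
    {
        //Dutifully passed around and stored...
        _transactionName = transactionName;

        //...but no code path ever reads _transactionName again. The parameter
        //is "used" in the sense of being assigned, not in the sense of mattering.
        return true; //(actual transaction bookkeeping elided)
    }
}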

This is one of the little things I mentioned that sows the seeds of mistrust and creates frustration. I based my implementation on the notion that the service was associating individual transactions with their names, since this parameter was non-optional. I thus created a class level constant for registering my particular transactions, which turned out to be a useless activity. Not a big deal, but some time wasted, which creates frustration.

Unnecessary Behaviors Forced On Clients

Another thing that I noticed in my closer examination of the service implementation’s internals was the method I blogged about earlier, EndTransaction(bool). When I first noticed the API, I chuckled a bit, but after working with it and then digging into it, I came to better understand what was actually going on. It wasn’t simply that I had to pass the method a “true” if I wanted it to execute. The service expected me, the client, to keep track of whether it had started a transaction or not, and to pass that state flag back to it.

This is similar in concept to an API that encodes a temporal dependency by forcing you to use the first method’s return value as a parameter to the second method, but with a subtle and important difference. In an API like that, the return value of the first method tends to be a by-product of the operation going on in that method. Say, for instance, that I have an API with a method “int SpawnProcess()” and another method “void KillProcess(int)”. This informs me that SpawnProcess spawns a process and returns a means of identifying that process, and the kill method forces me to pass the int back so that it knows which process to kill. Getting it right isn’t a fifty-fifty shot as it is with a boolean; the API tells me that KillProcess(int) expects a specific value that I should somehow know about.

The subtlety lies in the nuance of what the API is saying. The API I just described says, “hey, I start a lot of processes, so I’ll start one and tell you about it, but you need to keep track of the id of your process.” The API with the boolean says, “hey, I’ll tell you whether or not starting this transaction succeeded, but I’m too lazy to keep track of it myself, so I need you to tell me later.” That is, the first one has a good reason (practical infeasibility without creating a context dependency) for not keeping track of that id, whereas the second one doesn’t. EndTransaction(bool) implies that there are only two possible states. Why is it my responsibility as a client to keep track of the service’s encapsulated state for it? Why not expose a property or, since passing in false is a no-op anyway, just encapsulate that state flag and no-op internally?
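To put the contrast in code, here is a sketch with both interfaces invented for illustration:

//An API with a good reason to hand state back to the client: many processes
//can exist at once, so the client must say which one to kill.
public interface IProcessManager
{
    int SpawnProcess();
    void KillProcess(int processId);
}

//An API with no such reason: there is only one global transaction state, which
//the service could track itself, but it delegates that bookkeeping to the client.
public interface ITransactionService
{
    bool StartTransaction(string transactionName);
    void EndTransaction(bool isTransactionStarted);
}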

The reasons I dislike this are layered. First of all, it’s counter-intuitive. Secondly, it leaks the implementation details of the service. As a client, I not only know that I start and end transactions, which I should know, but also that transaction start and end are global, static flags. Thirdly, it foists its implementation on me and tries to force me to partner with it in leaking implementation details. That is, it doesn’t just leak the details — it forces me as a client to do its internal dirty work of state management.

Getting the Metaphor Slightly Wrong

The two previous items made me pause and mildly annoyed me, but they were relatively easy to figure out and correct within my facade. I had (understandably, I think) misinterpreted the API, and I was relatively quickly able to correct this: by passing in string.Empty for the transaction name, since it didn’t matter, and by keeping track of the state of the service in the facade to prevent exceptions. The third problem was the one that cost me a few additional hours of debugging and squinting at service code.
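Before getting to that third problem, here is roughly what those first two corrections looked like in my facade. This is a heavily simplified sketch with invented names (UndoRedoFacade, UndoRedoService, BeginUndoBatch, EndUndoBatch); the real code differed:

public class UndoRedoFacade
{
    //Keep track of the service's state for it, since it demands that of clients
    private bool _isTransactionStarted;

    public void BeginUndoBatch()
    {
        //The transaction name turned out to be irrelevant, so pass anything
        _isTransactionStarted = UndoRedoService.StartTransaction(string.Empty);
    }

    public void EndUndoBatch()
    {
        //Guard so that mismatched calls don't provoke exceptions
        if (_isTransactionStarted)
        {
            UndoRedoService.EndTransaction(_isTransactionStarted);
            _isTransactionStarted = false;
        }
    }
}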

If I look at the API, I’m sure I’m not alone in thinking of a metaphor of a database. The “transaction” nomenclature practically screams it. And, in the database world, I can perform operations one at a time or I can start a transaction, perform a batch of them, and then commit or rollback my batch atomically. In spite of the fact that I saw “End” instead of “Commit” or “Rollback” (and, in retrospect, this should have been a red flag), I assumed that behavior here was the same.

So, in my facade, I implemented a scheme where normal operations were recorded sans transaction and I started/ended transactions in situations where one value changing automatically changed another. To me, this was an undo “transaction”, since I wanted those to be rolled back with a single “Undo/Ctrl-Z”. Well, it turns out I was partially right in my assumption. Transactions were, in fact, atomic in nature. But, after a lot of debugging and squinting, I found that transactions were not actually optional. Every Record() operation required a transaction to be started and stopped. The metaphor was just enough like a database to create the expectation that the abstraction would mirror its behavior, but not enough like a database API where that actually worked. It would sometimes throw exceptions about a transaction and sometimes simply do nothing.

Okay, fair enough. I bracketed every Record() operation with a transaction, and things were actually kind of working in the application at this point. Operations would undo and redo, except that I had to hit undo at least twice to undo each thing I did. For this one, I couldn’t rely on exceptions or anything like that to point me in the right direction, so I was really immersed in the debugger and the code to figure it out (and it’s a doozy). If you look at the two public StartTransaction() overloads, one appears to start a transaction and the other appears to start a transaction and record the first entry of the transaction. This struck me as very odd until I realized that all Record() calls had to occur inside of a transaction, and then I figured it was probably tacked-on shorthand to prevent endless sequences of Start(); Record(); End(); for a single record.

What was really going on under the hood was that both Start() overloads invoked a common third, private overload, and passed in all of their parameters plus some. The first invocation passed in nulls for the recording target object and the property value, while the second overload passed in what you gave it. In both cases, the values were recorded in a transaction, but in the case of the Start(string) one, it recorded a pair of nulls in the stack. That’s right. The second overload wasn’t a convenience — it was the only way not to be wrong. The first method call was always wrong, by definition. So, StartTransaction(string) should really have been called StartTransactionAndAddSpuriousRecord(string) and the second overload should have been called StartTransactionAndRecordActualRecord(string, object, string). Or, better yet, the first one shouldn’t have existed at all.
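Reconstructed in code, the delegation looked something like this (simplified, and the private overload’s name is my invention):

public bool StartTransaction(string transactionName)
{
    //Passing nulls here means a spurious (null, null) record lands on the
    //undo stack at the start of every transaction
    return StartTransactionInternal(transactionName, null, null);
}

public bool StartTransaction(string transactionName, object record, string recordProperty)
{
    return StartTransactionInternal(transactionName, record, recordProperty);
}

private bool StartTransactionInternal(string transactionName, object record, string recordProperty)
{
    //Both public overloads land here, and the (record, recordProperty) pair
    //is recorded unconditionally, nulls and all
    return true; //(actual recording logic elided)
}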

In the book Code Complete, Steve McConnell discusses the importance of forming consistent abstractions:

Provide services in pairs with their opposites. Most operations have corresponding, equal and opposite operations. If you have an operation that turns a light on, you’ll probably need one to turn it off. If you have an operation to add items to a list, you’ll probably need one to delete an item from the list. … When you design a class, check each public routine to determine whether you need its complement.


In the case here, we don’t have Start() and End(), but StartAndDoOtherStuff(), DoMoreOfThatOtherStuff() and End(). These are not complementary abstractions — they’re confusing, overlapping abstractions. Imagine working in C# with a list with this API:

  • List(T firstItem)
  • Add(T item)
  • Clear()

If you want an empty list, what do you think you do? Create a new list? Pshaw. Create a list, and then clear it, of course, since creating a list with no elements is impossible for some reason. Now, imagine if Microsoft pulled a switcheroo on you and foisted this new list API on you after years of working with the list you’re familiar with. My bet is that you’d feel a little like poor Charlie Brown.
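To make the absurdity concrete, here is what getting an empty list would look like under that hypothetical API (this is the made-up List above, not the real .NET List&lt;T&gt;):

//The hypothetical list can only be born with an element already in it
var myList = new List<int>(42); //Forced to supply an item you don't want...
myList.Clear();                 //...just so you can immediately throw it away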

Am I Overreacting?

Yes and no. The fact that I’m blogging about it probably implies that I’m more frustrated than I actually am. I don’t blame anyone for writing this code, per se, and I’m sure I’ve written code at times that’s made people pull their hair out as well — we’re all guilty of it at some time or another. An interface that seems intuitive to the author may make no sense to someone else.

Now, don’t get me wrong. I was frustrated at the time. But, I filed that frustration away, and took the opportunity instead to analyze exactly what went wrong. What we had there was a failure to communicate…

I tried to consider objectively my role in it versus the role of the service. I think that my role in it was to make assumptions without verifying them, based on the advertised API, as I mentioned in the lead-in. But, I think that it’s incumbent on green field developers and maintenance programmers alike to pull back and consider the abstractions. Are you forcing clients to pass in parameters that you don’t need? Do your functions do what they say they’re going to do? Do your functions force unnecessary behavior on clients? Do your functions misrepresent a common abstraction? If the answer to these questions is, or even may be, “yes”, it’s probably time to rework the public API for clarity.

Whoever is to blame in the matter, the fact is that I spent hours debugging something that would have required no debugging at all had the interface worked the way it appeared to work on inspection (once I had ferreted out my incorrect assumptions, everything worked flawlessly, and my TDD tests and other unit tests proved correct against my own facade API). That is frustrating and wasteful. I feel somewhat vindicated by the fact that I would have wasted no time if the API had been consistent with itself, whereas in the state it was in, I would have had to read or debug the internals whether I assumed anything or not, since the API did not match the actual behavior of the implementation.

So, the point of all this is that an API that does not faithfully represent the implementation is going to cause frustration. If you’re a developer working on an API that somebody else will use, do both of you a favor and look for these types of issues and any others you can think of. Don’t code up a class where some other class you’re writing at the same time is the only client. That’s a recipe for clunky interfaces and seams. And, please, write some tests, TDD tests, unit tests, whatever. These force you to use your API in another context. If the code in question here had been unit tested, I find it unlikely that it would have had its public API left in this state. Bearing these things in mind will keep your stock high with fellow programmers and have them trusting code that you write, instead of knowing that every time they try to kick an extra point, you’re going to rip the ball away and cause them to fall flat on their backs.


Chain of Responsibility

Quick Information/Overview

Pattern Type: Behavioral
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy

Up Front Definitions

N/A

The Problem

Let’s say that you’re doing something reasonably standard. You have some desktop application that deals with a persistence structure (file, web service, remote database, whatever) and displays things for the user. Since it’s not 1996 anymore, you don’t want to treat the user to an endless series of dialog boxes for feedback, so you decide to create some kind of status message scheme, whereby there is a piece of screen real estate that is updated with messages as processing occurs, to give the user feedback. Think of the status text in an install wizard.

You get some requirements that you’re to have three types of status messages: informational, warning, and error, to be displayed in black, yellow and red text, respectively. From your extensive experience as a developer, you know that business analysts and users love nothing more than to tinker with the GUI, so you decide that you need a better scheme than simply displaying text and changing its color — you need to encapsulate the message generation logic so that when it is decided that the error messages should really be more of a brick red, you don’t have to do a giant find and replace in your presentation layer. So, you create the following object to which the GUI will bind:

    public class Message
    {
        public virtual string Text { get; set; }
        public Color Color { get; set; }
        public FontWeight Font { get; set; }
    }

Further, you create some classes that understand how to populate these bindable objects:

    public abstract class MessageGenerator
    {
        public const int LowPriority = 0;
        public const int Normal = 1;
        public const int Warning = 2;
        public const int Error = 3;
        public const int Critical = 4;

        public abstract Message GenerateMessage(string messageText);
    }

    public class InformationalMessageGenerator : MessageGenerator
    {
        public override Message GenerateMessage(string messageText)
        {
            return new Message() { Color = Colors.Black, Font = FontWeights.Normal, Text = messageText };
        }
    }

    public class WarningMessageGenerator : MessageGenerator
    {
        public override Message GenerateMessage(string messageText)
        {
            return new Message() { Color = Colors.Yellow, Font = FontWeights.SemiBold, Text = messageText };
        }
    }

    public class ErrorMessageGenerator : MessageGenerator
    {
        public override Message GenerateMessage(string messageText)
        {
            return new Message() { Color = Colors.Red, Font = FontWeights.Bold, Text = messageText };
        }
    }

Now, when the error message needs to be a little lighter than brick red, but a little darker than normal red, and maybe with a hint of purple, you can go to one place, the ErrorMessageGenerator, and you’re done. This is good, except that generating these things will be a little tedious. You’re going to have to instantiate the specific generator that you want and use it to generate your message. So, all over the code, you’re going to have things like this:

    public class CustomerViewModel : INotifyPropertyChanged
    {
        private CustomerService _customerService;

        private Message _statusMessage;
        public Message StatusMessage { get { return _statusMessage; } set { _statusMessage = value; RaisePropertyChanged("StatusMessage"); } }

        public void DoSomething()
        {
            var myCustomer = _customerService.GetCustomerById(12);
            if (myCustomer == null) //Oops, let's tell the GUI about this
            {
                var myGenerator = new ErrorMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Customer not found!");
            }
            else
            {
                var myGenerator = new InformationalMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Located customer 12.");
            }
        }
....

Generating those objects is sort of annoying, and it also defeats the purpose of having an abstract base class. You could always factor the generation into some view model base, but that only works for view models that inherit from it, and it still doesn’t get around the problem of having to tell the method what kind of message generation you want. Another option would be to create a factory method or instance that generated the appropriate generator… but generating generators seems like overkill here. It would also feel redundant and wrong to do that, as the information about how to generate messages is all stored inside of the MessageGenerator hierarchy, but you’d be introducing some external class to figure out how these things are wired. So, you decide to leave well enough alone and plow ahead with this not quite elegant, but good enough, implementation.

Now, let’s say some time goes by, and your message coloring scheme is such a hit that the users/BAs want every color in the rainbow and even some ultra-violet and infra-red messages. I mean, there are messages that are critical, and then messages that are CRITICAL. The latter should be in red, but with hints of white, to evoke images of a hospital. And, there are really several different kinds of warnings – those that you can ignore, and those that are almost errors, which should be an angry orange. There should also be messages that are green to say “Good job, user!”

You’re glad that your code is such a hit, but dismayed to see your “DoSomething()” method and all those like it ballooning in size like there’s no tomorrow:

        public void DoSomething()
        {
            var myCustomer = _customerService.GetCustomerById(12);
            if (myCustomer == null) //Oops, let's tell the GUI about this
            {
                var myGenerator = new ErrorMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Customer not found!");
            }
            else if (myCustomer.IsValid == false)
            {
                var myGenerator = new WarningMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Customer not in valid state!");
            }
            else if (PositiveFeedback)
            {
                var myGenerator = new FeelGoodMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Way to go!");
            }
            //...
            else
            {
                var myGenerator = new InformationalMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Located customer 12.");
            }
        }

Now, since this is GUI code, some of this ballooning is unavoidable – if there are 20 different things that can be displayed for customer retrieval, you’re going to need at least 20 lines for 20 strings. You might rethink the factory method at this point, for generating the generators or even the messages, thus saving yourself some lines of code, but it won’t be a huge gain in terms of the complexity of these methods. So, you forget the factory method and resign yourself to this complexity.

But, let’s say that a new wrinkle comes along. Some of your users think that the BAs got a little carried away with all of the colors, and they just want the old red/yellow/black scheme back. Others love the colors. So, you’re tasked with coming up with a way to please both sets of users. There is a “rich colors” setting in a config file that you’ll read in and, if that’s enabled, you’ll display all of the additional colors, but if not, then the first three.

Now, things get away from the realm of “unavoidable complexity for displaying lots of information to the user” and into “downright hideous” territory:

        public void DoSomething()
        {
            var myCustomer = _customerService.GetCustomerById(12);
            if (myCustomer == null) //Oops, let's tell the GUI about this
            {
                var myGenerator = new ErrorMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Customer not found!");
            }
            else if (myCustomer.IsValid == false)
            {
                var myGenerator = new WarningMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Customer not in valid state!");
            }
            else if (myCustomer.IsSemiValid)
            {
                if (RichColors)
                {
                    var myGenerator = new WishyWashyMessageGenerator();
                    StatusMessage = myGenerator.GenerateMessage("Well, you're almost there!");
                }
                else
                {
                    var myGenerator = new InformationalMessageGenerator();
                    StatusMessage = myGenerator.GenerateMessage("Well, you're almost there!");
                }
            }
            else if (PositiveFeedback)
            {
                if (RichColors)
                {
                    var myGenerator = new FeelGoodMessageGenerator();
                    StatusMessage = myGenerator.GenerateMessage("Way to go!");
                }
                else
                {
                    var myGenerator = new InformationalMessageGenerator();
                    StatusMessage = myGenerator.GenerateMessage("Way to go!");
                }
            }
            //...
            else
            {
                var myGenerator = new InformationalMessageGenerator();
                StatusMessage = myGenerator.GenerateMessage("Located customer 12.");
            }
        }

Now, you’re duplicating code in a big way. Using a factory method here could stop the bleeding somewhat, but not completely. You’ll see why when you learn what the BAs have in store for you next. When you were able to shoehorn in the selective color scheme from the last release, they got the taste of your blood in their mouths and they liked it.

You see, customers in Miami like teal messages because it’s one of the Dolphins’ colors, while males over 50 seem to be partial to muted shades of gray, and we also need a universal beige for when the application is in safe mode. So, the application now needs to be able to read in all of those settings and selectively enable or disable those message generators. Now, even your factory method solution is convoluted and nasty. It needs to be in a class with a bunch of booleans or take those booleans as parameters. It may buy you a little reprieve in lines of code, but it won’t buy you any help with your ominously growing complexity. Abandon all hope, ye who enter the presentation tier.

… well, okay, before you start dusting off the resume or pitching version 2.0 to management, there is another way, if you’re up for a bit of refactoring.

So, What to Do?

This is where the Chain of Responsibility pattern enters the picture. If you pull back for a moment and think of what really needs to happen here, it isn’t simply a matter of picking a message generator implementation to use. It’s a matter of prioritizing the message creators. What you’ve really got is a set of rules like “If X and Y, use creator A, otherwise, if Z, use creator B, etc”. You’ve been successful at not duplicating the actual construction of the message by using the generator polymorphs, but you are duplicating the prioritization logic.

So, first things first. Let’s introduce a concept of message priority to the creation method, since it’s already sort of defined in the base class anyway:

    public abstract class MessageGenerator
    {
        public const int LowPriority = 0;
        public const int Normal = 1;
        public const int Warning = 2;
        public const int Error = 3;
        public const int Critical = 4;

        public abstract Message GenerateMessage(string messageText, int severity);
    }

Now, each implementer has additional information that it can use to decide whether or not to bother processing a message:


    public class InformationalMessageGenerator : MessageGenerator
    {
        public override Message GenerateMessage(string messageText, int severity)
        {
            if (severity == Normal)
            {
                return new Message() { Color = Colors.Black, Font = FontWeights.Normal, Text = messageText };
            }
            return null;
        }
    }

Now, the scheme is a bit different. Each generator may or may not return a message. This doesn’t exactly help you, but it’s interesting. You’ve introduced the notion that a generator can examine a passed in parameter and then choose or decline to generate a message. To fully implement Chain of Responsibility, and solve the problem, it is now necessary to teach your generators how to pass the buck.

The idea is to give your generators a sense of “I’ll handle this message if it’s appropriate, and if it’s not, the next guy will handle it.” This, combined with some heuristics for which generators handle which priorities will move a lot of those unsightly if conditions out of your code. So, let’s start with the base class:

    public abstract class MessageGenerator
    {
        public const int LowPriority = 0;
        public const int Normal = 1;
        public const int Warning = 2;
        public const int Error = 3;
        public const int Critical = 4;

        public MessageGenerator Next { get; set; }

        public virtual Message GenerateMessage(string messageText, int severity)
        {
            return Next != null ? Next.GenerateMessage(messageText, severity) : null;
        }
    }

Here, you’ve defined a concept of the next generator in the chain, settable by whoever wires things up. The formerly abstract method is now virtual, encapsulating the buck-passing so that subclasses don’t have to do it. By default, you get buck-passing, and you now have to opt in to handling the message yourself. The subclasses will now look as follows:


    public class InformationalMessageGenerator : MessageGenerator
    {
        public override Message GenerateMessage(string messageText, int severity)
        {
            if (severity == Normal)
            {
                return new Message() { Color = Colors.Black, Font = FontWeights.Normal, Text = messageText };
            }
            return base.GenerateMessage(messageText, severity);
        }
    }

Now your code is looking promising as a way to save a lot of bloat. The new message generator examines a message, handles the generation if the severity parameter is appropriate, and passes the buck if not. The really slick part is that the burden of message generation is still removed from the client, and so too is the burden of prioritizing message generation handling. Here’s what the new client code looks like:


    public class CustomerViewModel : INotifyPropertyChanged
    {
        private CustomerService _customerService;

        private MessageGenerator _messageGenerator;

        public CustomerViewModel(CustomerService customerService, MessageGenerator messageGenerator)
        {
            _customerService = customerService;
            _messageGenerator = messageGenerator;
        }

        private Message _statusMessage = null;
        public Message StatusMessage { get { return _statusMessage; } set { _statusMessage = value; RaisePropertyChanged("StatusMessage"); } }

        public void DoSomething()
        {
            var myCustomer = _customerService.GetCustomerById(12);
            if (myCustomer == null) //Oops, let's tell the GUI about this
            {
                StatusMessage = _messageGenerator.GenerateMessage("Customer not found!", MessageGenerator.Error);
            }
            else if (myCustomer.IsValid == false)
            {
                StatusMessage = _messageGenerator.GenerateMessage("Customer is not in valid state!", MessageGenerator.Warning);
            }
            else if (myCustomer.IsSemiValid)
            {
                StatusMessage = _messageGenerator.GenerateMessage("Well, you're almost there!", MessageGenerator.WishyWashy);
            }
            else if (PositiveFeedback)
            {
                StatusMessage = _messageGenerator.GenerateMessage("Way to go!", MessageGenerator.FeelGood);
            }
            //...
            else
            {
                StatusMessage = _messageGenerator.GenerateMessage("Located customer 12.", MessageGenerator.Normal);
            }
        }
    ...

Wow, that looks much better. One condition, one message — the cyclomatic complexity is now linear in the actual number of conditions that need to be reported, instead of condition complexity multiplied by business analyst imagination complexity. And, as a nice bonus, you’re not instantiating generators anymore – you’ve got only one, and it’s injected as a dependency. Furthermore, this is accomplished without any factory method. Going back to the original example (error/warning/info), here’s the wireup for it:


    public class Wireup
    {
        public MessageGenerator BuildMessageGeneratorChain()
        {
            return new ErrorMessageGenerator() 
            { 
                Next = new WarningMessageGenerator()
                { 
                    Next = new InformationalMessageGenerator() 
                    {
                        Next = new DefaultMessageGenerator() 
                    } 
                }
            };
        }
    }

(Sorry about the nested object initializers — it’s late, and for some reason that amused me a lot more than it should have. It’s like some kind of code ugly-chic.) So, in this simple wireup, you have the chain of responsibility set up, with a DefaultMessageGenerator defined to handle all messages not handled elsewhere. I won’t bother picturing it, but this would simply return a message every time, should the responsibility be passed to it. This would ensure that the message was always at least generated, if not with colors and fonts and whatnot.

Now, you’ve really separated responsibilities. The ViewModel and other clients don’t need to worry about message generation – it’s now a black box to them. They have a generator and the generator gives them messages from their passed in string/severity pairs. They have a “Bob’s your uncle” attitude toward the generator’s chain of delegation. And now, when the business analysts get crazy with the colors, you have exactly one place to make changes in existing code — the wireup class (you may need to add new generators as well). This is powerful because one of the more desirable situations for a code base is one in which requirements changes mean adding new code instead of modifying existing code. You can’t introduce regressions to brand new code, after all.
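To tie it together, usage at the composition root might look something like this (constructing the service inline here purely for illustration):

    var customerService = new CustomerService();
    var messageGenerator = new Wireup().BuildMessageGeneratorChain();

    //The view model gets only the head of the chain and never knows which
    //generator will ultimately handle any given message
    var viewModel = new CustomerViewModel(customerService, messageGenerator);
    viewModel.DoSomething();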

The only exception to not modifying existing code may be a need to periodically revisit the priorities. For instance, if every color in the rainbow has been added to the code base, you may have your default black/yellow/red generators handle priorities 0-10, 11-20, and 21-30 respectively, while giving the custom colors first bite at the apple. That allows a teal message of priority 4 to be handled by the TealGenerator for Miami customers, but by the basic generator for all others, simply by omitting the teal generator from the responsibility chain for non-Miami customers. But, if you have this scenario and the 31st color is introduced, you may need to expand the scope of the defaults and shuffle the priorities a bit.
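As a sketch of how that Miami scenario might be wired (TealGenerator is an invented subclass), notice that the prioritization decision lives entirely in the wireup:

    public MessageGenerator BuildMessageGeneratorChain(bool isMiamiCustomer)
    {
        var defaults = new ErrorMessageGenerator()
        {
            Next = new WarningMessageGenerator()
            {
                Next = new InformationalMessageGenerator() { Next = new DefaultMessageGenerator() }
            }
        };

        //Give teal first bite at the apple for Miami customers only
        return isMiamiCustomer ? new TealGenerator() { Next = defaults } : defaults;
    }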

Of course, other schemes are possible too. The generate method could take on another parameter, or you could use a different numbering scheme. The important part is that you’re creating multiple objects that can handle a request using different criteria, with an easily customizable ordering scheme.

A More Official Explanation

The original Gang of Four definition of the pattern was:

Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.

The algorithm in question calls for a recursive handling of requests. A scheme is set up whereby a client invokes some service to satisfy a request, and the service does so by walking a pre-determined graph of potential handlers until one agrees to service the request. The aforementioned decoupling occurs because the client is oblivious to which potential servicer will actually handle the request.

Polymorphism is a convenient mechanism for this paradigm, but certainly not the only one. The only requirement is that a customizable chain of handlers each be given the chance to handle the request, in response to a single request from the client.

Other Quick Examples

Here are some other scenarios that seem as though they’d lend themselves well to Chain of Responsibility:

  1. Event handling in a GUI context (WPF works this way, for instance, with routed events). A control can raise an event that will bubble up the logical tree until some other control handles it.
  2. Modeling a situation with approvers or handlers. For instance, consider a software scheme for managing vacation/PTO requests at a business. Approvers might include managers, VPs, etc. Requests may be handled by person X unless person X is out of office, and then they are handled by person Y, etc.
  3. In a sense, exception handling works this way — code generates an exception, and each layer of the call stack (either explicitly or implicitly) opts to handle the exception or let it bubble up.

A Good Fit – When to Use

This pattern is good to use any time the scheme for handling a request is somewhat involved and should be configurable on the fly. In the example in this post, the pattern wasn’t really important until the scheme for message handling started to get pretty customized and involved. Generally, when this is happening, Chain of Responsibility should enter the discussion.

Likewise, if there is a situation where multiple parties should have input to a request/event, the pattern may be a good idea. With a bit of modification, the example above could be used to generate composite messages (for instance, one message generator could take the message returned by another and make it bold).
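As a rough sketch of that modification (EmboldeningGenerator is an invented subclass, and this assumes the severity constants from the base class above):

    public class EmboldeningGenerator : MessageGenerator
    {
        public override Message GenerateMessage(string messageText, int severity)
        {
            //Let the rest of the chain build the message first
            var message = base.GenerateMessage(messageText, severity);
            if (message != null && severity >= Error)
            {
                message.Font = FontWeights.Bold; //Then decorate the result
            }
            return message;
        }
    }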

Square Peg, Round Hole – When Not to Use

As always, there is the YAGNI principle. Jeff Atwood says:

Don’t use fancy OOP features just because you can. Use fancy OOP features because they have specific, demonstrable benefit to the problem you’re trying to solve. You laugh, but like Rico, I see this all the time. Most programmers never met an object they didn’t like. I think it should be the other way around: these techniques are guilty until proven innocent in the court of KISS.

You could substitute “Design Patterns” for “fancy OOP features” here and the message would remain valid. In the example here, if all messages were black and with normal font, it wouldn’t make any sense to use Chain of Responsibility just in case the BAs wanted other colors. Wait until they add one or two before jumping to that conclusion and implementing this pattern. (I have a rule of thumb, which is to wait for at least two and usually three similar requests before deeming the request a pattern and adjusting accordingly. But, as soon as you hit three, do a 180 and bank on more such requests).

In addition, this pattern may not make sense if a rigid ordering is acceptable. For example, if you’re modeling a department structure, and all requests are always approved by a manager and then the VP, it may not be necessary to line up the handlers just yet. If that is unlikely ever to change, there’s no need for a complex design pattern where a simple && will suffice.

Finally, don’t confuse this with assembly. If you’re modeling the construction of a house, you don’t want a “Chain of Responsibility” where one handler builds the basement, and passes it on, the next adds a ground floor and passes it on, etc. That’s an inappropriate metaphor and it will be confusing in the long run. The house construction has a rigid ordering and isn’t in response to anything. This is a creational situation, and Chain of Responsibility is a behavioral pattern. I realize that we were, in the example, ‘constructing’ a message, but the purpose of that construction is to figure out who would build the message — not to partition the message’s construction into different spheres of responsibility (there’s another pattern for that).

So What? Why is this Better?

Chain of Responsibility is an improvement where appropriate because it introduces flexibility. It separates the handling of an event or servicing of request from the place where the event/request is generated. The requester doesn’t need to take care of this itself — it can delegate that. In the example above, the View Model class is responsible for displaying information to the user. But, before implementing the pattern, the View Model was developing a greater and greater responsibility for figuring out how to construct a message. By delegating that responsibility elsewhere, the class was simplified. By using Chain of Responsibility to do it, the whole application was made more flexible by virtue of the fact that the scheme for message construction was made easily swappable on the fly.

This pattern is not a hard sell in terms of utility, given that it’s already widely in use within frameworks, such as in the aforementioned cases of GUI events in WPF and exceptions in general.


You’re Doin’ It Wrong

I was chatting with someone the other day about some code that he had written, and he expressed that he wished he had done it another way. He said that there was a bunch of duplication and that, while he wasn’t sure what he should have done, he didn’t think it seemed quite right. I took a look at the code and was inclined to agree, though I could think of how I might have approached it.

This led me to think a little more broadly about writing code that you feel not quite right about. One might simply call this a “code smell”, but I think a distinction can be made. I personally associate code smells more with code from which you are somewhat removed, either because someone else wrote it or because you wrote it some time back. That is, you wouldn’t write code that “smelled” to you if it smelled as you were writing it (hopefully).

So, what I’m documenting here is not so much ways to look at code and “smell” that something is wrong, but rather a series of heuristics to keep in mind during development to know that you’re probably doing something wrong. Code smells are more macroscopic in nature with things like “parallel inheritance hierarchy” and “anemic domain model”. These are practices that, as I’ve learned from experience, seem to lead to bad code.

Typing the same thing again and again

The incident that made me think of this involved a piece of code that looked something like this:

public class SomeViewModel
{
    int CustomerId1 { get; set; }
    string CustomerName1 { get; set; }
    int CustomerId2 { get; set; }
    string CustomerName2 { get; set; }
    int CustomerId3 { get; set; }
    string CustomerName3 {get; set; }
    ....
    
    public void GetCustomers()
    {
        CustomerId1 = _SomeService.GetCustomer(1).Id;
        CustomerName1 = _SomeService.GetCustomer(1).Name;
        ...
     }

     ....
}

Now, there are a few things going on here that could certainly be improved upon, but the first one that jumps out to me is that if you’re appending numbers to fields or properties in a class, you’re probably doing something wrong. We could certainly take our IDs and names and turn them into arrays, or lists, or collections and iterate over them. But, even doing that, you’d have a bunch of parallel loops:

public class SomeViewModel
{
    List<int> CustomerIds { get; set; }
    List<string> CustomerNames { get; set; }

    public void GetCustomers()
    {
        CustomerIds = _SomeService.GetCustomers().Select(c => c.Id).ToList();
        CustomerNames = _SomeService.GetCustomers().Select(c => c.Name).ToList();
    }
}

Each place that you want to do something with customer IDs, you’ll also have to do it with names. This gets particularly ugly if you decide you want to sort the lists or something like that; now you have to make sure the lists are rearranged in the same order, taking on responsibility for guaranteeing that your multiple-list implementation remains consistent. This is still repetitive. Better to create a customer model struct with name and ID properties, and expose a single list (or observable collection or whatever) of that, as sketched below.
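Something like this, hypothetically (CustomerModel is my name for it, and _SomeService is elided as in the snippets above):

public struct CustomerModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SomeViewModel
{
    List<CustomerModel> Customers { get; set; }

    public void GetCustomers()
    {
        //One list, one sort order, one representation of the knowledge
        Customers = _SomeService.GetCustomers()
                                .Select(c => new CustomerModel() { Id = c.Id, Name = c.Name })
                                .ToList();
    }
}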

The heuristic I can offer here is that if you’re typing the same thing over and over again, you’re probably doing it wrong. This is an offshoot of the DRY (Don’t Repeat Yourself) principle, but ironically, I think it bears some repeating here. DRY doesn’t simply mean that you shouldn’t copy and paste code, which is what many people remember. It doesn’t even mean that you shouldn’t duplicate code. Dave Thomas calls it something “much grander”:

DRY says that every piece of system knowledge should have one authoritative, unambiguous representation. Every piece of knowledge in the development of something should have a single representation. A system’s knowledge is far broader than just its code. It refers to database schemas, test plans, the build system, even documentation.

There should be no duplication of knowledge in the project, period. As such, typing the same thing over and over again has the whiff of duplication, even if you aren’t copy/pasting. If you perform a series of operations on customer ID 1 and then customer name 1, and then customer ID 2, and then customer Name 2, etc, you’re repeating that operation over and over again, albeit with a nominally different target. But, what if in the database or lower layers of memory, I switch customers 1 and 2? The 1 operation is then done on 2 and vice versa. These operations are interchangeable – we have duplicated knowledge. And, duplicated knowledge eventually leads to things getting out of sync as the system evolves, which means maintenance headaches, code rot, and defects.

Making a private member public and/or an instance method static

Let’s say that we have the following classes:

public class Customer : INotifyPropertyChanged
{
   private int _id;
   public int Id { get { return _id; } set { _id = value; NotifyChange(() => Id); } }

   private string _name;
   public string Name { get { return _name; } set { _name = value; NotifyChange(() => Name); } }

   //PropertyChange Elided
}

public class CustomerProcessor
{
    private int _processedCount;

    public void ProcessCustomers(IEnumerable<Customer> customers)
    {
        if(customers != null)
        {
            customers.ToList().ForEach(cust => this.DoSomethingToCustomer(cust));
            _processedCount += customers.Count();
        }
    }

    public bool IsFinished()
    {
        return _processedCount > 12;
    }
}

Now, let’s say that the GUI is bound directly to the customer object, and that a new requirement comes in – we want to display customer stuff in black, unless most of the customers have been processed (say, 3/4s of them), in which case we want to display them in red — for all customers. Let’s also assume that we have some kind of value converter that converts a boolean to red/black for binding. Here’s how someone might accomplish this:

public class Customer : INotifyPropertyChanged
{
   private int _id;
   public int Id { get { return _id; } set { _id = value; NotifyChange(() => Id); } }

   private string _name;
   public string Name { get { return _name; } set { _name = value; NotifyChange(() => Name); } }

   public bool IsRed { get { return CustomerProcessor._processedCount > 9; } }

   //PropertyChange Elided
}

public class CustomerProcessor
{
    public static int _processedCount; //This is now a public global variable

    public void ProcessCustomers(IEnumerable<Customer> customers)
    {
        if(customers != null)
        {
            customers.ToList().ForEach(cust => this.DoSomethingToCustomer(cust));
            _processedCount += customers.Count();
        }
    }

    public bool IsFinished()
    {
        return _processedCount > 12;
    }
}

Don’t do that! That might be the quickest and dirtiest way to accomplish the task, but it sure is ugly. We break encapsulation by making an internal detail of the processor public, we create a global variable to get around the issue that the customer has no reference to the processor, and we’ve only created a half-way solution because the customer still has no mechanism for knowing when the processor’s count passes the magic number 9 – we’d need to expose a global event or callback so that the customer would know when to raise property changed on its color. This is also a concrete and circular coupling – CustomerProcessor and Customer now know all about one another and might as well be one static, global mashup.

Private members are private for a reason — they’re implementation details hidden from the code base at large. By having the magic number 12 and the processed count hidden in the processor, I can later change the scheme by which the processor evaluates “finished” without breaking anything else. But, the way I’ve mangled the code here, the requirement that “red” be 3/4s of the finished count is now leaked all over the place. If I go to the processor and decide that finished count is now 400 (or better yet, configurable and read from persistence), our new requirement is no longer satisfied, and as the maintenance programmer for CustomerProcessor, I have no way of knowing that (except maybe for the hint that someone exposed my private variable).

Likewise, instance members are instance members for a reason. If I later decide that I want two separate customer processors working in two separate screens, I’m now hosed with these changes. However many processors I create, if any of them process 12 records, they’re all finished.

Far too often, the reason for making privates public or instance members static is convenience and not wanting to think through dependencies. Both of those changes make the information that you want easier to get at. But, I submit that if this is your solution for getting your objects to talk to one another, you’re likely creating a short-sighted and misguided solution. In our example above, exposing an “IsAlmostFinished()” instance member on the processor and then having the GUI bind to something that knows about the customers and their processor is a better solution. It requires a bit more planning, but you’ll be much happier in the long run.
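As a quick sketch of that alternative (abbreviated, with the processing method elided), the thresholds stay private and only intent is exposed:

public class CustomerProcessor
{
    private const int FinishedCount = 12;
    private int _processedCount;

    //Processing elided

    public bool IsFinished()
    {
        return _processedCount > FinishedCount;
    }

    public bool IsAlmostFinished()
    {
        //The "3/4s of finished" rule lives here, next to the threshold it depends on
        return _processedCount > (FinishedCount * 3) / 4;
    }
}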

You can’t describe a method in a simple sentence using its parameters and return value

“Too many parameters” is considered a code smell. But, how many is too many? Recently, I stumbled across a method that looked something like this:

public void DoSomething(bool shouldPerform)
{
   if(shouldPerform)
   {
       //A whole bunch of processing
   }
}

This method doesn’t have too many parameters at all, I wouldn’t think. It has exactly one parameter, which is basically the programmatic equivalent of the ubiquitous “Are you sure you want to…” message box. But, I couldn’t for the life of me summarize it in a simple sentence, and perhaps not a paragraph. It was something like “If the user passes in true, then find the hardware processor and hardware presenter, and if each of those isn’t null, find the database manager, but if the hardware presenter is null, then create a file, and…”

If you’re writing a method, do a thought exercise. If someone stopped by and said, “whatcha doin?”, could you look up at them and quickly say “I’m writing a method that…”? Or would you say, “go away and don’t break my concentration, or I’ll never remember what this thing was supposed to do”? If it’s the latter, you’re doing it wrong.

We can only process so much information at once. If you’re reading through someone else’s code or your own code from a while ago, methods that are small and focused are easy to understand and process mentally. Methods that have dozens of parameters or do dozens of things are not. It doesn’t matter how smart you are or how good at programming you are – when you encounter something like that, you’ll put your pen to the monitor and start tracing things or take that same pen and start taking notes or drawing diagrams. If you have to do that to understand what a method does, that method sucks.

Consider the following interfaces:


public interface ICustomerDao 
{
    void DeleteCustomer(Customer customer);
    void InsertCustomer(Customer customer);
    void UpdateCustomer(Customer customer);
    Customer GetCustomerById(int customerId);
    IEnumerable<Customer> GetAllCustomers();
}

public interface ICustomerService
{
    bool LogCustomerDetails(Customer customer);
    Customer MergeCustomers(Customer customer1, Customer customer2);
    Customer FindCustomer(IEnumerable<Customer> customersForSearch);
}

There are no method definitions at all – just signatures. But, I’d be willing to bet that you can already tell me exactly what will happen when I call those methods, and I bet you can tell me in a simple, compact sentence. Wouldn’t it be nice if, when you were coding, you only encountered methods like that? Well, if you think so, do others and your future self a favor, and only define and code them like that.

Defining class behavior for one specific client

Remember that method that I showed above? The one with the “are you sure” boolean parameter? I’m pretty sure that was neither the author’s idea of a joke nor was it quite as silly as it looked. I’d wager dollars to donuts that this is what happened:


public class Unmaintainable
{
    public bool IsSomethingTrue() { ... }

    public void WayTooBig()
    {
        //....
        var myExtracted = new ExtractedFromTooBig();
        myExtracted.DoSomething(IsSomethingTrue());
    }
}

public class ExtractedFromTooBig
{
    public void DoSomething(bool shouldPerform)
    {
       if(shouldPerform)
       {
           //A whole bunch of processing
       }
    }
}

When looked at in this light, it makes sense. There was some concept that a class was too large and that concerns should be separated. Taken as a gestalt, it makes sense, but simply looking at the extracted class, “DoSomething(bool)” looks pretty dumb.

While the effort to refactor to smaller units is laudable, creating an interface that only makes sense for one client is not so good. Your classes should make sense on their own, as units. If you extract a class that seems hopelessly coupled to another one, it’s probably time to reconsider the interface, or the “seam” between these two classes. In the case above, the boolean parameter need not exist. “IsSomethingTrue()” can exist in an if condition prior to invocation of the parameterless “DoSomething()” method. In general, if your class’s public interface doesn’t seem to make sense without another class, you’re probably doing it wrong. (This is frequently seen in “helper” classes, which I mentioned in this post about name smells).
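For what it’s worth, here is a sketch of how that particular seam might be reworked (names mirror the example above):

public class Maintainable
{
    public bool IsSomethingTrue() { ... }

    public void ReasonablySized()
    {
        //....
        //The decision now lives with the class that has the information
        if (IsSomethingTrue())
        {
            new ExtractedFromTooBig().DoSomething();
        }
    }
}

public class ExtractedFromTooBig
{
    public void DoSomething()
    {
        //A whole bunch of processing
    }
}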

By

Offering Constructive Criticism without Breaking the Creative Spirit

Wet Behind the Ears

As a child in secondary school, you are instructed to do work and have very little choice in the whys and hows of the matter. This tends to be true in college education as well, although as one progresses, some choices are left to the student. Generally, these are vetted heavily by professors or teaching assistants. This trend generally continues right on into the business world, perhaps even with a bit of regression. New, entry-level hires are usually given pretty specific instructions and trusted very little – all work is checked heavily in the majority of cases.

This comes to be commonplace enough over the course of a young life that you don’t think much of it. I know that when I came into the work force, I took it as a matter of course that this would happen and that, in the majority of cases, the people vetting my work and giving me instructions knew what they were doing and I didn’t. I was right about this.

This isn’t to say I didn’t have the childhood tendency to be constantly convinced that I was right about things. The cliche of your parents knowing nothing when you’re a high school kid and somehow getting a lot smarter over the next 10 years was true for me. But, this was really only the case in my personal life – in business and academia, I took it as a matter of course that those with more experience than me knew more than me and that I could learn from them. Again, I was right about this.

The Bulb Comes On A Little

After a while, an interesting thing started to happen. I gained enough experience and confidence after a bit of time in the workforce to actually form my own opinions. Instead of always taking it as a matter of course that reviewers and managers were right about everything, I’d start to think to myself, “I’m not sure that’s the best way to do it.” But, I’d do it anyway, and often this would result in me learning how wrong I had been. But, every now and then, it would result in my suspicions being corroborated, and I discovered that being the junior team member, person whose code was being reviewed, employee, etc, didn’t mean, ipso facto, that I was wrong and they were right.

I tend to learn from my peers very quickly and soak up knowledge from them like a sponge. Perhaps this is stubbornness and I don’t like being wrong, so I burn the midnight oil to fix my weaknesses. Perhaps it’s just that I have a good work ethic and lots of natural curiosity. Perhaps I’m a perfectionist. I don’t really know, and I don’t think it much matters. The end result is that the times when I was right but told to do the wrong thing grew more frequent over the course of time, and that became frustrating to me. I wanted the benefit of the input I was getting when that input was right, but not to be saddled with it when it was wrong.

The Wrong Approach

A cynical trick that I learned pretty quickly was that meetings to evaluate proposals or review work were generally of a fixed length duration. If I brought in a beautiful, well thought out proposal, some of my precious items would be critiqued, and I’d have to compromise on them. But, if I brought in the same proposal with a few obvious mistakes, the duration of the meeting would be occupied by suggestions to ‘fix’ my red herrings, and my proposal/design/code would remain intact as I had originally envisioned it.

I’m ashamed to say that this worked exactly as I intended at the time. Meetings went exactly as I planned, and I proceeded without any serious review of my concepts. I had, effectively, swung from naive assumption that others were always right to an equally naive assumption that I needed no input from anyone else. In the end, I made some mistakes that I had to put in long hours fixing and realized later that it was entirely possible someone could have anticipated the blunder that I had missed. Now that I’m too old to know everything, I understand exactly how silly this was.

How Did It Come To This?

Reflecting back on my hubris years later, I understand something explicitly now that I understood only intuitively back then. What I’ve come to understand is that some people feel that offering feedback/criticism is obligatory in certain environments, whether or not feedback is necessary or helpful. That is, if someone were to present a light switch as a simple, effective solution for turning the overhead lights on and off, these sorts of people would suggest a push button switch or The Clapper, not because it was a better solution but simply because they wanted to put in their two cents.

I suspect that there are two main causes of this. The first is that some people very much enjoy expressing opinions, and all matters are subjects for debate. It’s a game of sorts. I have friends with this attitude, and it can be a great one for driving debates and spurring innovation. It can also be tiresome at times, but it keeps you thinking and on your toes. The other cause I see is less innate and more a manifestation of collaborative environments – people come to treat feedback as obligatory, a subtle confirmation of status. Making corrections to the work of ‘inferiors’ reminds everyone present of their place in the food chain.

Consider a meeting where a more senior peer or a manager reviews the work of a less senior peer or subordinate. If the reviewer offers nothing except a “great job”, he may feel as if he’s acknowledging superiority and thus undermining his own role. The feedback thus becomes obligatory, an end rather than a means, which creates a situation similar in concept to a scheme where police have quotas for traffic tickets: it’s assumed that something is wrong, and if nothing is wrong, something must be invented.

This is complicated somewhat by the fact that there is probably something that can be improved about any suggested process, piece of code, idea or anything else, just as it’s safe to assume that there’s always somebody speeding somewhere. As such, there is always a justification for suggestions for improvement or constructive criticism. But, the fact that just about anything could be improved does not necessarily imply that these obligatory suggestions amount to actual improvements. Often, they’re simply time wasters and can even diminish the idea. This is because if the criticism is obligatory, it may not be well thought out or justified – the criticizer may be grasping at straws to assert his authority.

I believe that there are some “tells” as to when this is occurring. If a 10 page document is submitted, and the reviewer offers 2 or 3 criticisms of the first page and then says the rest is fine, this is probably obligatory criticism. What most likely happened is that the reviewer read until he or she found a few things to express opinions about and then stopped, since the mission of reinforcing authority was accomplished. Another such tell is criticism that misses the point or some obvious fact, which suggests the reviewer only skimmed far enough to find something to say. And then there is vague or ad hominem criticism: suggestions that the presenter is not up to the task or hasn’t been around long enough to understand things.

To go back to my narrative, I began to see this in action intuitively, pick up on the tells, and manipulate the situation to my advantage. Perfunctory criticism can be ferreted out by inserting a few false flag mistakes early on and allowing them to be corrected so that the idea can be discussed in earnest while placating those seeking reassurance.

So What?

Having read my story (and probably either empathizing, as a young developer, or remembering back, as a more experienced one), it is worth asking whether this is a problem or simply the way of the world. To that, my answer is “both”. The practice might have some nominal use in keeping people honest and humble, but it has a side effect that, in my opinion, outweighs that benefit. It promotes a culture of “tenure over merit” and amplifies the “Dead Sea Effect”, wherein talented new developers tend to leave a company quickly while less enthusiastic and ambitious developers stay around and grandfather into roles of authority.

Alex Papadimoulis also blogged about this effect and said:

The reason that skilled employees quit, however, is a bit more complicated. In virtually every job, there is a peak in the overall value (the ratio of productivity to cost) that an employee brings to his company. I call this the Value Apex.

On the first minute of the first day, an employee’s value is effectively zero. As that employee becomes acquainted with his new environment and begins to apply his skills and past experiences, his value quickly grows. This growth continues exponentially while the employee masters the business domain and shares his ideas with coworkers and management.

However, once an employee shares all of his external knowledge, learns all that there is to know about the business, and applies all of his past experiences, the growth stops. That employee, in that particular job, has become all that he can be. He has reached the value apex.

If that employee continues to work in the same job, his value will start to decline. What was once “fresh new ideas that we can’t implement today” become “the same old boring suggestions that we’re never going to do”. Prior solutions to similar problems are greeted with “yeah, we worked on that project, too” or simply dismissed as “that was five years ago, and we’ve all heard the story.” This leads towards a loss of self actualization which ends up chipping away at motivation.

Skilled developers understand this. Crossing the value apex often triggers an innate “probably time for me to move on” feeling and, after a while, leads towards inevitable resentment and an overall dislike of the job. Nothing – not even a team of on-site masseuses – can assuage this loss.

Bruce Webster talks about the role of a frustrating, apparently external, management structure in bringing this phenomenon about, and Alex talks about the sense of no longer being able to offer value. Obligatory reviewers short-circuit and amplify the Dead Sea Effect by making the frustrating management internal to the group and by setting the “value apex” low right from the start. Talented new hires are more likely to be quickly discouraged when obligatory criticism is directed at them to reinforce the status that reviewers have and they don’t.

What to Do?

I’ve spent a considerable number of words explaining a problem as I see it, along with an ill-advised ‘solution’ I came up with years back and subsequently discarded, so I’ll offer some thoughts on better solutions. For the shorter-tenured worker presenting or being reviewed, there is no solution entirely within your control, but you can improve your odds of success and minimize frustration.

The most important thing to do, in my opinion, and the solution that I’ve ultimately adopted, is to be extremely well organized and prepared. As you work on your proposal/design/code/etc, document not only what you do, but what other approaches you considered and discarded. If you’re writing actual code, make a list of each class and method and write down its purpose for existence. Organize and format these notes and email them out ahead of the meeting or bring them with you.

The effect that this has is to counteract the obligatory objections by showing that you’ve already considered and dismissed them. Obligatory objections, as opposed to serious or well-reasoned ones, tend to be off the cuff, and thus likely to be the very things you’ve considered and dismissed. Having an immediate, well-defended answer serves not only to move the review along more quickly to relevant matters, but also to discourage status-seeking reviewers from testing these waters in the future. After all, making obligatory suggestions for modification only reinforces status if the attendees of the meeting perceive the suggester as right. Getting burned by a few volleys knocked quickly and decisively back into his court will make the status-seeker lose interest in this as an opportunity for grandstanding.

This practice has other indirect benefits as well. Getting in the habit of always justifying your decisions tends to make you better at what you do. If you’re mentally justifying every class and method that you write, I would argue that you’re a lot more likely to have a good design. And your preparation makes others more effective as well. Thoughtful reviewers will be able to focus during the allotted time on things you hadn’t considered already and make the meeting more productive, and even the status-seekers will at least learn to come more prepared and probe deeper, if only for the purpose of not being shown up (in their minds).

If you’re running a team or a department and have the authority to dictate review policy, I have a few suggestions. First and foremost, offer a forum where more junior team members can present without any feedback or criticism at all. This need not involve a production piece of software – just give them some venue to present and be proud of their work without constantly needing to defend it against criticism. People feel pride in their work, for the most part, and getting a chance to showcase that work and feel good about it is important for team morale.

Another strategy would be to have a pool of reviewers and to let the team members choose who conducts the reviews. Keeping track of whom they choose provides valuable feedback. I’d wager dollars to donuts that the status-seeking reviewers are rarely, if ever, chosen, with developers selecting instead those from whom they can learn and who will make their work better. Of course, they might choose a reviewer who rubber stamps everything, but even that is valuable information to have, as you can talk to that reviewer or else choose a different one.

A third option would be to create some sort of appeals process involving proofs of concept or other forms of ‘proof’ of ideas. If a developer submits a solution using a design pattern, and the reviewer happens not to like that pattern, don’t let that be the end of the discussion. The developer should be given a chance to create a demo or POC to show, concretely, the benefits of using the pattern, or perhaps a chance to cite sources or examples. This serves to create more of a meritocracy than a dictatorship when it comes to ideas.

Is It Worth the Bother?

In the end, one might make a good case that new developers are wrong frequently enough that stunting their enthusiasm might be good for them, particularly if they’re cocksure about everything. A bite of humble pie is good for everyone from time to time. However, with that line of argument, a fine line emerges between appropriate, periodic humbling and the systematic stifling of new ideas and jockeying for influence. The more that obligatory criticisms and decisions by fiat take hold, the more they are reinforced, to the detriment of the department. I think that it’s vitally important to eliminate contributors to this type of environment in order to have an efficient team staffed with self-reliant and capable developers, whatever their experience levels may be.
