DaedTech

Stories about Software


Making The Bad Impossible

Don’t Generate Compiler Warnings

I was doing some work this morning when I received a general email to a group of developers for a large project that sort of pleaded with, and sort of demanded, that members of the group not check in code that generates compiler warnings. And, I agree with that. Although, with the way that particular project is structured, it’s easier said than done in some cases. There are dozens and dozens of assemblies in the project, not all of which are re-compiled with a build, so someone may generate a warning, not notice it, and then not see it in subsequent runs if that assembly isn’t recompiled. Doing a rebuild instead of a build as a habit is impractical here.

Something bothered me about this request and I couldn’t put my finger on it. I mean, the request is reasonable and correct. But, the email seemed sort of inappropriate in the way that talking about one’s salary or personal politics seems sort of inappropriate. It made me uncomfortable. The “why” of this kicked in after a few minutes. The email made me uncomfortable because it should have no need to exist. No email like that should ever be sent.

Don’t Do X

Some time back, I blogged about tribal knowledge, and why I view it as a bad thing. Tribal knowledge is the body of facts and rules that accrues over time in a project or group, is written down nowhere, and can only be learned and memorized by experienced team members:

You ask for a code review, and a department hero takes one look at your code and freaks out at you. “You can never call MakeABar before MakeABaz!!! If you do that, the application will crash, the computer will probably turn off, and you might just flood the Eastern Seaboard!”

Duly alarmed, you make a note of this and underline it, vowing never to create Bars before Bazes. You’ve been humbled, and you don’t know why. Thank goodness the Keeper of The Tribal Knowledge was there to prevent a disaster. Maybe someday you’ll be the Keeper of The Tribal Knowledge.

We’ve all experienced things like this when it comes to code. Sometimes, they’re particulars of the application in question, such as in the tribal knowledge example. Sometimes, they’re matters of organizational or other aesthetics, such as alphabetizing variable declarations, or maybe some other scheme. Sometimes, they’re coding conventions or standards in general. But, they’re always things that a person or group decides upon and then enforces by asking, telling, demanding, cajoling, threatening, begging, shaming, etc.

As a project or collaborative effort in general proceeds, the number of these items tends to increase, and the attendant maintenance effort increases along with it. One rule is simple and easy to remember. When the number gets up to five, people sometimes start to forget. As it balloons from there, team members become increasingly incapable of using a mental checklist, and the checklist finds its way to a spreadsheet or, perhaps, a wall.

When this happens, the time spent conforming to it starts to add up. There is the development time, and then the post-development checklist validation time. Now, theoretically, a few times through the checklist and developers start tending to write code that conforms to it, so the lost time is, perhaps, somewhat mitigated, but the check itself is still more or less required. If someone gets very good at compliance, but makes one typo or careless error, a public shaming will occur and re-up the amount of time being spent on The List.

And, The List probably grows and changes over time, so no one is ever really comfortable with it. The List isn’t just tribal knowledge – it’s “meta-tribal knowledge”.

Stop the Madness

Is The List really necessary? People who enjoy dealing with the DMV or other bureaucracies may think so, and most developers may resign themselves to this sort of thing as an occupational hazard, but why are we doing this? Why do we have to remember to do a whole bunch of things and not do a whole bunch of other things? We’re programmers – can’t we automate this?!?

I believe that the answer to that question is “absolutely, and you should.” To circle back to my introductory example, rather than send out emails about checking for compiler warnings, why not set the build to treat warnings as errors? Then, nobody has to remember anything – generating a compiler warning will immediately impede progress and thus be immediately rectified. No emails, no post-its, and no entry in The List.
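For what it’s worth, in a .NET project this can be a one-line build change. Here is a minimal sketch of what that might look like in a csproj file (placement within your project file may vary):

<PropertyGroup>
  <!-- Promote every compiler warning to a build-breaking error -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>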

Alphabetized variables? Write a script on the build machine that parses your checked in files. Casing conventions? Write another parsing script or use something like DevExpress to automate it. Dependency management? Organize code into assemblies/libraries that make it impossible for developers to include the wrong namespaces or use the wrong classes.
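To give a flavor of the kind of script I have in mind, here is a deliberately naive C# sketch that scans checked-in files and complains when private fields lack a leading underscore. The convention it enforces is an arbitrary stand-in, and a real implementation would want an actual parser rather than a regex:

using System;
using System.IO;
using System.Text.RegularExpressions;

internal static class ConventionChecker
{
    // Matches private field declarations whose names lack a leading underscore,
    // e.g. "private string ssn;" but not "private string _ssn;"
    private static readonly Regex BadField =
        new Regex(@"private\s+\w+(<[\w,\s]+>)?\s+(?!_)\w+\s*(=|;)");

    // Usage: ConventionChecker <source directory>
    internal static int Main(string[] args)
    {
        var violations = 0;
        foreach (var file in Directory.GetFiles(args[0], "*.cs", SearchOption.AllDirectories))
        {
            foreach (var line in File.ReadAllLines(file))
            {
                if (BadField.IsMatch(line))
                {
                    Console.WriteLine("{0}: {1}", file, line.Trim());
                    violations++;
                }
            }
        }
        return violations; // a non-zero exit code fails the build
    }
}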

And that’s the meat of the issue. Don’t tell people not to do bad things – make doing them impossible via immediate feedback. If I’m expected not to generate compiler warnings, make it so that I can’t run my tests or application until I clean up my mess. That sort of scheme retains the learning and feedback paradigm, but without draining productivity or requiring others to be involved. Constructor injection is based on this principle, when you really think about it. Using that injection scheme, I don’t need to wonder what I have to do before instantiating a Foo – the compiler tells me by refusing to build if I don’t give Foo what it wants. I don’t have to go ask anyone. I don’t have to find out at a code review. I can just see it for myself and learn on my own.
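To make that concrete, here is a minimal sketch of the idea (Foo and IBar are hypothetical stand-ins):

using System;

internal interface IBar { }

internal class Foo
{
    private readonly IBar _bar;

    // The constructor advertises exactly what Foo needs: there is
    // no way to instantiate a Foo without handing it an IBar.
    internal Foo(IBar bar)
    {
        if (bar == null)
            throw new ArgumentNullException("bar");
        _bar = bar;
    }
}

// new Foo();  // Does not compile. The "what do I need to do first?"
//             // question is answered by the compiler, not by The List.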

Obviously, it’s not possible to make all bad things impossible. If you can do that, you probably have better things to do than code, like running the world. But, I would advocate that we go after the low-hanging fruit. This is really just an incarnation of the fail-early concept, which describes not just code behavior at runtime but also the feedback loop for development standards and preferences. Static analysis tools, auto-checkers, rules engines, etc. are our friends. They automate code checking so that it need not be done by asking, telling, demanding, cajoling, threatening, begging, shaming, etc.


Offering Constructive Criticism without Breaking the Creative Spirit

Wet Behind the Ears

As a child in secondary school, you are instructed to do work and have very little choice in the whys and hows of the matter. This tends to be true in college education as well, although as one progresses, some choices are left to the student. Generally, these are vetted heavily by professors or teaching assistants. This trend generally continues right on into the business world, perhaps even with a bit of regression. New, entry-level hires are usually given pretty specific instructions and trusted very little – all work is checked heavily in the majority of cases.

This comes to be commonplace enough over the course of a young life that you don’t think much of it. I know that when I came into the work force, I took it as a matter of course that this would happen and that, in the majority of cases, the people vetting my work and giving me instructions knew what they were doing and I didn’t. I was right about this.

This isn’t to say I didn’t have the childhood tendency to be constantly convinced that I was right about things. The cliche of your parents knowing nothing when you’re a high school kid and getting a lot smarter somehow over the next 10 years was true for me. But, this was really only the case in my personal life – in business and academia, I took it as a matter of course that those with more experience than me knew more than me and that I could learn from them. Again, I was right about this.

The Bulb Comes On A Little

After a while, an interesting thing started to happen. I gained enough experience and confidence after a bit of time in the workforce to actually form my own opinions. Instead of always taking it as a matter of course that reviewers and managers were right about everything, I’d start to think to myself, “I’m not sure that’s the best way to do it.” But, I’d do it anyway, and often this would result in me learning how wrong I had been. But, every now and then, it would result in my suspicions being corroborated, and I discovered that being the junior team member, person whose code was being reviewed, employee, etc, didn’t mean, ipso facto, that I was wrong and they were right.

I tend to learn from my peers very quickly and soak up knowledge from them like a sponge. Perhaps this is stubbornness and I don’t like being wrong, so I burn the midnight oil to fix my weaknesses. Perhaps it’s just that I have a good work ethic and lots of natural curiosity. Perhaps I’m a perfectionist. I don’t really know, and I don’t think it much matters. The end result is that the times when I was right but told to do the wrong thing grew more frequent over time, and that became frustrating to me. I wanted the benefit of the input I was getting when that input was right, but not to be saddled with it when it was wrong.

The Wrong Approach

A cynical trick that I learned pretty quickly was that meetings to evaluate proposals or review work were generally of fixed duration. If I brought in a beautiful, well thought out proposal, some of my precious items would be critiqued, and I’d have to compromise on them. But, if I brought in the same proposal with a few obvious mistakes, the duration of the meeting would be occupied by suggestions to ‘fix’ my red herrings, and my proposal/design/code would remain intact as I had originally envisioned it.

I’m ashamed to say that this worked exactly as I intended at the time. Meetings went exactly as I planned, and I proceeded without any serious review of my concepts. I had, effectively, swung from naive assumption that others were always right to an equally naive assumption that I needed no input from anyone else. In the end, I made some mistakes that I had to put in long hours fixing and realized later that it was entirely possible someone could have anticipated the blunder that I had missed. Now that I’m too old to know everything, I understand exactly how silly this was.

How Did It Come To This?

Reflecting back on my hubris years later, I understand something explicitly now that I understood only intuitively back then. What I’ve come to understand is that some people feel that offering feedback/criticism is obligatory in certain environments, whether or not feedback is necessary or helpful. That is, if someone were to present a light switch as a simple, effective solution for turning the overhead lights on and off, these sorts of people would suggest a push button switch or The Clapper, not because it was a better solution but simply because they wanted to put in their two cents.

I suspect that there are two main causes of this. The first is that some people very much enjoy expressing opinions, and all matters are subjects for debate. It’s a game of sorts. I have friends with this attitude, and it can be a great one for driving debates and spurring innovation. It can also be tiresome at times, but it keeps you thinking and on your toes. The other cause I see is less innate and more a manifestation of collaborative environments – people perceive feedback as obligatory because offering it subtly confirms status. Making corrections to the work of ‘inferiors’ reminds everyone present of their place in the food chain.

Consider a meeting where a more senior peer or a manager reviews the work of a less senior peer or subordinate. If the reviewer offers nothing except a “great job”, he may feel as if he’s acknowledging superiority and thus undermining his own role. The feedback thus becomes obligatory, an end rather than a means, which creates a situation similar in concept to a scheme where police have quotas for traffic tickets given out — it’s assumed that something is wrong, and if nothing is wrong, something must be invented.

This is complicated somewhat by the fact that there is probably something that can be improved about any suggested process, piece of code, idea or anything else, just as it’s safe to assume that there’s always somebody speeding somewhere. As such, there is always a justification for suggestions for improvement or constructive criticism. But, the fact that just about anything could be improved does not necessarily imply that these obligatory suggestions amount to actual improvements. Often, they’re simply time wasters and can even diminish the idea. This is because if the criticism is obligatory, it may not be well thought out or justified – the criticizer may be grasping at straws to assert his authority.

I believe that there are some “tells” as to when this is occurring. If a 10-page document is submitted, and the reviewer offers 2 or 3 criticisms of the first page and then says the rest is fine, this is probably obligatory criticism. What most likely happened is that the reviewer read until he or she found a few things to express opinions about and then stopped, since the mission of reinforcing authority was accomplished. Another such tell is criticism that misses the point or overlooks some fact that a careful reading would have caught. And then, there is vague or ad hominem criticism — suggestions that the presenter is not up to the task or hasn’t been around long enough to understand things.

To go back to my narrative, I began to see this in action intuitively, pick up on the tells, and manipulate the situation to my advantage. Perfunctory criticism can be ferreted out by inserting a few false flag mistakes early on and allowing them to be corrected so that the idea can be discussed in earnest while placating those seeking reassurance.

So What?

Having read my story (and probably either empathizing, as a young developer, or remembering back, as a more experienced one), you may wonder whether this is a problem or simply the way of the world. To that, my answer is “both”. The practice might have some nominal use in keeping people honest and humble, but it has a side effect that outweighs that benefit, in my opinion. It promotes a culture of “tenure over merit” and amplifies the “Dead Sea Effect”, wherein talented new developers tend to leave a company quickly while less enthusiastic and ambitious developers stay around and grandfather into roles of authority.

Alex Papadimoulis also blogged about this effect and said:

The reason that skilled employees quit, however, is a bit more complicated. In virtually every job, there is a peak in the overall value (the ratio of productivity to cost) that an employee brings to his company. I call this the Value Apex.

On the first minute of the first day, an employee’s value is effectively zero. As that employee becomes acquainted with his new environment and begins to apply his skills and past experiences, his value quickly grows. This growth continues exponentially while the employee masters the business domain and shares his ideas with coworkers and management.

However, once an employee shares all of his external knowledge, learns all that there is to know about the business, and applies all of his past experiences, the growth stops. That employee, in that particular job, has become all that he can be. He has reached the value apex.

If that employee continues to work in the same job, his value will start to decline. What was once “fresh new ideas that we can’t implement today” become “the same old boring suggestions that we’re never going to do”. Prior solutions to similar problems are greeted with “yeah, we worked on that project, too” or simply dismissed as “that was five years ago, and we’ve all heard the story.” This leads towards a loss of self actualization which ends up chipping away at motivation.

Skilled developers understand this. Crossing the value apex often triggers an innate “probably time for me to move on” feeling and, after a while, leads towards inevitable resentment and an overall dislike of the job. Nothing – not even a team of on-site masseuses – can assuage this loss.

Bruce Webster talks about the role of a frustrating, apparently external, management structure in bringing this phenomenon about, and Alex talks about the sense of no longer being able to offer value. Obligatory reviewers short-circuit and amplify the Dead Sea Effect by making the frustrating management internal to the group and by setting the “value apex” low right from the start. Talented new hires are more likely to be quickly discouraged when obligatory criticism is directed toward them to reinforce status that reviewers have and they don’t.

What to Do?

I’ve spent a considerable number of words explaining a problem as I see it, along with an ill-advised ‘solution’ I came up with years back and subsequently discarded, so I’ll offer some thoughts on better solutions. For the shorter-tenured worker, the one presenting or being reviewed, there is no entirely self-deterministic solution, but you can improve your odds of success and minimize frustration. The most important thing to do, in my opinion, and the solution that I’ve ultimately adopted, is to be extremely well organized and prepared. As you work on your proposal/design/code/etc., document not only what you do, but what other approaches you considered and discarded. If you’re writing actual code, make a list of each class and method and write down its purpose for existence. Organize and format these notes and email them out ahead of the meeting or bring them with you.

The effect that this has is to counteract obligatory objections by showing that you’ve already considered and dismissed them. Obligatory objections, as opposed to serious or well-reasoned ones, tend to be off the cuff and thus likely to be the same things that you’ve considered and dismissed, so having an immediate, well-defended answer serves not only to move the review along more quickly to relevant matters, but also to discourage status-seeking reviewers from testing these waters in the future. After all, making obligatory suggestions for modification only works to reinforce status if the attendees of the meeting perceive the suggester as right. Getting burned by a few volleys knocked quickly and decisively back into his court will make the status-seeker lose interest in this as an opportunity for grandstanding.

This practice has other indirect benefits as well. Getting in the habit of always justifying one’s decisions tends to make him better at what he does. If you’re mentally justifying every class and method that you write, I would argue that you’re a lot more likely to have a good design. And, your preparation makes others more effective as well. Thoughtful reviewers will be able to focus during the allotted time on things you hadn’t considered already and make the meeting more productive, and even the status-seekers will at least learn to come more prepared and probe deeper, if only for the purpose of not being shown up (in their minds).

If you’re running a team or a department and have the authority to dictate review policy, I would suggest a few different ideas. First and foremost, I would suggest offering a forum where more junior team members can present without any feedback or criticism at all. This need not be on a production piece of software – but give them some forum to present and be proud of their work without constantly needing to defend it against criticism. People feel pride in their work, for the most part, and getting a chance to showcase that work and feel good about it is important for team morale.

Another strategy would be to have a pool of reviewers and to let the team members choose who conducts the reviews. Keeping track of who they choose provides valuable feedback. I’d wager dollars to donuts that the status-seeking reviewers are rarely, if ever, chosen, with developers selecting instead those from whom they can learn and who will make their work better. Of course, they might choose a reviewer who rubber-stamps everything, but even that is valuable information to have, as you can talk to that reviewer or else choose a different reviewer.

A third option would be to create some sort of appeals process involving proofs of concept or other forms of ‘proof’ of ideas. If a developer submits a solution using a design pattern, and the reviewer happens not to like that pattern, don’t let that be the end of the discussion. The developer should be given a chance to create a demo or POC to show, concretely, the benefits of using the pattern, or perhaps a chance to cite sources or examples. This serves to create more of a meritocracy than a dictatorship when it comes to ideas.

Is It Worth the Bother?

In the end, one might make a good case that new developers are wrong frequently enough that stunting their enthusiasm might be good for them, particularly if they’re cocksure about everything. A bite of humble pie is good for everyone from time to time. However, with that line of argument, a fine line emerges between appropriate, periodic humbling and the systematic stifling of new ideas and jockeying for influence. The more that obligatory criticisms and decisions by fiat take hold, the more they are reinforced, to the detriment of the department. I think that it’s of vital importance to eliminate contributors to this type of environment in order to have an efficient team staffed with self-reliant and capable developers, whatever their experience levels may be.



Coding Conventions and Ralph Waldo Emerson

Convention or Improvement

I was reading (and commenting on) a blog post by Jimmy Bogard at Los Techies yesterday. The basic premise was that, since Systems Hungarian Notation has basically been rejected by the developer community at large, it is interesting that we continue to pre-pend an “I” to interfaces in C#.

The post and some of the comments touched on the idea that while, yes, this is incongruous with eschewing systems Hungarian, we should continue to do it because it’s what other programmers expect and not following the convention makes your code less intuitive and readable to other C# developers. Jimmy says:

The train for picking different naming styles for interfaces and generic type parameter names left the station a long, long time ago. That ship has sailed. Picking a different naming convention that goes against years and years of prior art…

In the interest of full disclosure, I follow the naming convention and understand the point about readability, but it led me to consider a broader, more common theme that I’ve encountered in my programming career and life in general. At what point do we draw the line between doing something by convention because there is a benefit and mindlessly doing something by convention?

Emerson

Ralph Waldo Emerson famously said, “A foolish consistency is the hobgoblin of little minds.” This is often misquoted as “Consistency is the hobgoblin of little minds,” which, in my opinion, frames my question here well. The accurate quote that includes “foolish” seems to suggest that following convention makes sense, so long as one continually justifies doing so, whereas the latter misses the subtlety and seems to advocate iconoclasm for its own sake, which, one might argue, is a foolish consistency in and of itself. We all know people that mindlessly hate everything that’s popular.

Lest I venture too far into philosophical considerations and away from the technical theme of this blog, I’ll offer this up in terms of the original subject for debate. Is the fact that a lot of programmers do something a valid reason to do it yourself, or is that justification an all-purpose cudgel for beating people into following the status quo for its own sake? Is the convention argument valid in its own right?

Specifics

This is not the first incarnation in which I’ve seen this argument about convention. I’ve been asked to name method variables “foo” instead of “myFoo” because that’s a convention in a code base, and other people don’t prepend their variables with “my”. Ostensibly, it makes the code more readable if everyone follows the same naming scheme. This is one of countless examples I can think of where I’ve been asked to conform to a naming scheme, but it’s an interesting one. I have a convention of my own whereby class fields are prepended with underscores, method parameters are camel case, and method variables are prepended with “my”.

This results in code that looks as follows:

internal class Customer
{
    // Class fields are prepended with an underscore.
    private string _ssn;

    internal Customer(string ssn)
    {
        _ssn = ssn;
    }

    internal string Ssn { get { return _ssn; } }

    // Method parameters are camel case.
    internal bool Equals(Customer customer)
    {
        // Method variables are prepended with "my".
        var myOtherSocial = customer.Ssn;
        return _ssn == myOtherSocial;
    }
}

There is a method to my madness, vis-a-vis readability. When I look at code that I write, I can tell instantly whether a given variable is a class field, method parameter, or method variable. To me, this is clear and readable. What I was asked to do was rename “myOtherSocial” to “otherSocial”, thus, in my frame of reference, losing the at-a-glance information as to what is a method parameter and what is a local variable. This promotes overall readability for the many while reducing readability for the few (i.e., me).

That’s an interesting tradeoff. One could easily make the Vulcan argument that the readability needs of the many outweigh the needs of the few. But, one could also make the argument that the many should adapt to my convention, since it provides more information. Taking it at face value that I’m right about my way being better (only for argument’s sake – I’m not arguing that there is a ‘right’ in a matter of personal readability and aesthetics), is the idea of convention — that a lot of people do it the other way — a valid argument against improvement?

Knowing When Conventions Can be Ignored

I don’t presume to know what the best naming scheme for local variable or interfaces is. In fact, I’d argue that it’s likely a matter of aesthetics and expectations. People who have a harder time mentally switching contexts are going to be more likely to dig in, and mile-a-minute, flighty thinkers will be more likely to get annoyed at such ‘pettiness’ and ‘foolish consistency’. Who’s right? Who knows – who cares.

Where the rubber meets the road is how consistency or lack thereof affects a development team as a whole. And, therein lies the subtlety of what to do, in my mind. If I am tasked with developing some library with a small, public facing API for my part in a team, it makes the most sense for me to write code in a style that makes me most productive. The fact that my code might be less readable to others will only be relevant at code reviews and in the API itself. The former is a one and done concern and the latter can be addressed by establishing “public” conventions to which I would then adhere. Worrying about some hypothetical maintenance programmer when deciding what to name and how to case my field variables is fruitless to a degree because that hypothetical person’s readability preferences are strictly unknowable.

On the other hand, if there is a much more open, collaborative paradigm where people are constantly modifying one another’s code, then establishing conventions probably makes a lot of sense, and following the conventions for their own sake would as well. The conventions might even suck, but they’ll be consistent, and probably save time. That’s because the larger issue with the consistency/convention argument is facilitating efficiency in communication — not finding The One True Way To Do Things.

So, the “arbitrary convention” argument picks up a lot of steam as collaboration becomes more frequent and granular. I think in these situations, the “foolish consistency” becomes an asset to practical developers rather than a “hobgoblin of little minds.”

Be an Adapter Instead of an Evangelizer

All that said, the conventions and arguments still annoy me on some level. I can recognize the holistic wisdom in the approach while still being aggravated by being asked to do something because a bunch of other people are also doing it. It evokes images in my head of being forced to read “Us Weekly” because a lot of other people care how many times Matt Damon went to the gym this week. Or, more on subject, it evokes images of being asked to prepend “str” to all my string variables because other people think that (redundant in Visual Studio) information makes things more readable.

But, sometimes these things are unavoidable. One can go with it, or one can figure out a way to make all parties happy. I blogged previously about my solution to my own brushes with others’ conventions. My solution was a DXCore plugin that translated my conventions to a group’s, and vice-versa. I’m happy, the group is happy, and I learned stuff in the process.

So, I think the most important thing is to be an adapter. Read lots of code, and less will throw you off your game and cause you to clamor for conventions. Figure out ways to satisfy all parties, even if it means work-arounds or interesting solutions. Those give your brain some exercise anyway. Be open to both the convention argument and the argument for a new way of doing things. Go with the flow. The evangelizers will always be there, red faced, arguing that camel case is stupid and only pascal case makes sense. That’s inevitable. It’s not inevitable that you be one of them.



Resumes and Alphabet Soup

I was reading a blog post by Scott Hanselman this morning. The main premise was a bemused lampooning of the ubiquitous tendency of techies to create a resume “alphabet soup” and a suggestion that Stack Overflow Careers is introducing a better alternative.

I was going to comment on this post inline, but I discovered that I had enough to say to create my own post about it. It seems as though this state of affairs for techies is, ironically, the product of living in a world created by techies. What I mean is that over the last several decades, we have made it our life’s goal to automate processes and improve efficiency. Those chickens have come home to roost in the form of automated resume searching, processing, and recommending.

Alphabet Soup Belongs in a Bowl

Having an item at the bottom of a resume that reads “XML, HTML, JSON, PHP, XSLT, HTTP, WPF, WCP, CSS… etc” is actually quite efficient toward its purpose in the same way that sticking a bunch of keywords in a web page’s hidden meta-data is efficient. It violates the spirit of the activity by virtue of the fact that it’s so good at gaming it as to be “too good”. As a job seeker, if I want to create the largest opportunity pool, it would stand to reason that I should include every three and four character combination of letters in existence somewhere in the text of my resume (“Strongly proficient in AAA, AAB, AAC, …ZZZZ”). And, while most of these ‘technologies’ don’t exist and I probably haven’t used most of the ones that do, this will cast a wider net for my search than not including this alphabet soup. I can always sort out the details later once my foot is in the door. Or, so the thinking probably goes (I’m not actually endorsing, in any way, resume exaggeration or SPAMing the resume machine — just using first person to illustrate a point).

We, the techies of the world, have automated the process of matching employers with employees, and now, we are forced to live in a world where people attempt to game the system in order to get the edge. So, what’s the answer? A declaration that companies should stop hiring this way? Maybe, but that seems unlikely. A declaration that people should stop creating their resumes this way because it’s silly? That seems even more unlikely because the ‘silly’ thing is not doing something that works, and the alphabet soup SPAM works.

I think that this programmer-created problem ought to be solved with better programming. What we’ve got here is a simple text matching algorithm. As a hiring authority, I program my engine (or have the headhunters and sites program it for me) to do a procedure like “give me all the resumes that contain XML, HTML, CSS, and WPF”. I then get a stack of matching resumes from people whose experience in those technologies may range from “expert” to “I think I read about that once on some website” and it’s up to me to figure out which resume belongs to which experience level, generally via one or more interviews.
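In code, the state of the art amounts to little more than this (a deliberately simplistic C# sketch; the names are hypothetical):

using System.Collections.Generic;
using System.Linq;

internal static class ResumeSearch
{
    // Returns every resume whose text contains all of the keywords, with no
    // distinction between "expert" and "read about it once on some website".
    internal static IEnumerable<string> Match(IEnumerable<string> resumes, params string[] keywords)
    {
        return resumes.Where(resume => keywords.All(keyword => resume.Contains(keyword)));
    }
}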

So, maybe the solution here is to create a search algorithm that does better work. If I were gathering requirements for such a thing, I would consider that I have two stakeholders: company and employee. These two stakeholders share a common goal, “finding a good match”, and also have some mildly conflicting goals — company wants lots of applicants and few other companies and employee wants lots of companies and few other prospective employees. It is also worth considering that the stakeholders may attempt to deceive one another in pursuit of the goal (resume padding on one end or misrepresenting the position on the other end).

With that (oversimplified) goal enumeration in mind, I see the following design goals:

  1. Accuracy for employers. The search engine returns candidates that are better than average at meeting needs.
  2. Accuracy for employees. The engine does not pair them with employers looking for something different from what they offer, thus putting them in a position to fail interviews and waste time.
  3. Easy to use, narrative inputs for employees. You type up a summary of your career, interests, experience, projects, etc, and that serves as input to the algorithm – you are not reduced to a series of acronyms.
  4. Easy to use, narrative inputs for employers. You type up a list of the job’s duties at present, anticipated duties in the future, career development path, success criteria, etc and that serves as input to the algorithm.
  5. Opacity/Double blind. Neither side understands transparently the results from its inputs. Putting the text “XML” on a resume has an indeterminate effect on the likelihood of getting a job with an employer that wants employees with XML knowledge. This mitigates ‘gaming’ of the system (similar in concept to how search engines guard their algorithms).

Now, in between the narrative input of both sides, the magic happens and pairings are made. That is where we as the techies come in. This is an ambitious project and not an easy one, but I think it can and should happen. Prospective employees tell a story about their career and prospective employers tell a story about a hypothetical employee, and matches are created. Nowhere in this does a dumb straight-match of acronym to acronym occur, though the system would take into account needs and skills (i.e., a position primarily developing in C++ would yield candidates primarily with good C++ backgrounds).
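I certainly don’t have that matching magic in my back pocket, but as a flavor of a first baby step beyond acronym matching, here is a toy sketch that scores the overlap between two narratives by comparing word frequencies (cosine similarity over term counts) rather than hunting for verbatim acronyms. It is purely illustrative and a far cry from the real thing:

using System;
using System.Collections.Generic;
using System.Linq;

internal static class NarrativeMatcher
{
    // Scores two narratives from 0 (no overlap) to 1 (identical word distribution).
    internal static double Similarity(string employeeStory, string employerStory)
    {
        var a = TermFrequencies(employeeStory);
        var b = TermFrequencies(employerStory);
        double dot = a.Keys.Intersect(b.Keys).Sum(word => a[word] * b[word]);
        double magnitude = Math.Sqrt(a.Values.Sum(v => v * v)) * Math.Sqrt(b.Values.Sum(v => v * v));
        return magnitude == 0 ? 0 : dot / magnitude;
    }

    private static Dictionary<string, double> TermFrequencies(string text)
    {
        return text.ToLowerInvariant()
                   .Split(new[] { ' ', '\t', '\r', '\n', '.', ',', ';' }, StringSplitOptions.RemoveEmptyEntries)
                   .GroupBy(word => word)
                   .ToDictionary(g => g.Key, g => (double)g.Count());
    }
}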

Anyway, that’s just what occurred to me in considering the subject, and it’s clearly far too long for a blog comment. I’m spitballing this here, so comments and opinions are welcome. Also, if this already exists somewhere, please point me at it, as I’d be curious to see it.



In Search of the Perfect Code Review

Code Reviews

One thing that I tend to contemplate from time to time and have yet to post about is the idea of code reviews and what constitutes a good one. I’ve worked on projects where there was no code review process, a half-hearted review process, an annoying or counter-productive code review process, and a well organized and efficient code review process. It’s really run the gamut. So, I’d like to put pen to paper (finger to keyboard) and flesh out my ideas for what an optimal code review would entail. Before I can do that, however, I think I need to define some terms up front, and then identify some things about code reviews that I view as pitfalls, counter-productive activities or non-useful activities.

What is a Code Review?

According to the Wikipedia entry for a code review, there are three basic types: the formal review, pair programming, and the lightweight review. I’ll add a fourth to this for the sake of pure definition, and call that the “automated review”. The automated review would be the use of one or more static analysis tools like FxCop or StyleCop (the wiki article includes someone using tools like this on the code of another developer, but I’m talking strictly automated steps like “you fail the build if you generate StyleCop warnings”). Pair programming is self-explanatory for anyone familiar with the concept. Lightweight reviews are more relaxed and informal in the sense that they tend to be asynchronous and probably more of a courtesy. An example of this is where you email someone a source code file and ask for a sanity check.

The formal review is the one with which people are probably most familiar if code review is officially part of the SDLC on the project. This is a review where the developer sits in a room with one or more other people and presents written code. The reviewers go through in detail, looking for potential bugs, mistakes, failures to comply with convention, opportunities for improvement, design issues, etc. In a nutshell, copy-editing of code while the author is present.

What I’ll be discussing here mainly pertains to the formal review.

What I Don’t Like

Here are some things that I think ought to be avoided in a code review process.

1. Failing to have code reviews

This is important in the same way that QC is important. We’re all human and we could all use fresh eyes and sanity checks.

2. Procedural Code Review: The Nitpick

“You made that camel case instead of Pascal case.” “You could be dereferencing null there.” In general, anything that a static analysis tool could tell someone a lot faster and more accurately than a room full of people inspecting code manually. Suresh addresses this idea in a blog post:

I happen to see only “superficial” reviews happening. By superficial, I mean the types where you get review comments like, “You know the documentation for that method doesn’t have the version number”, or “this variable is unused”, etc.

“Superficial” is an excellent description, as these things are generally trivial to identify and correct. It is not an effective use of the time of the developer or reviewers any more than turning off spell check and doing it manually is an effective use of authors’ and editors’ time. There are tools for this — use them!

3. Paranoid Code Review

This is what happens when reviewer(s) go into the review with the notion that the only thing standing between a functioning application and disaster is their keen eye for the mistakes of others. This leads to a drawn out activity in which reviewers try to identify any possible, conceivable mistake that might exist in the code. It’s often easily identified by reviewers scrunching their noses, concentrating heavily, pointing, or tracing code through control flow statements with their fingers on the big screen while the developer twiddles his thumbs.

Again, there’s a tool for this. It’s called the unit test. It’s not flawless and it assumes a decent amount of coverage and competence from the unit tests, but if executed properly, the unit tests will express and prove corner case behavior far better than people on too little sleep staring at a screen and trying to step through the call stack mentally. This mental execution is probably the least reliable possible way of examining code.
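For example, rather than a roomful of people tracing the null path with their fingers, a reviewer can state the concern as an executable check. Here is a hedged sketch using NUnit and a hypothetical Account class (both the class and the suspected deficiency are invented for illustration):

using System;
using NUnit.Framework;

// Hypothetical class under review; as written, it happily accepts a
// negative amount, which is the deficiency the reviewer suspects.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) { Balance = balance; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}

[TestFixture]
public class AccountTest
{
    // A failing test states the reviewer's concern far more precisely
    // than stepping through the call stack mentally on the big screen.
    [Test]
    public void Withdraw_Rejects_Negative_Amounts()
    {
        var account = new Account(100m);
        Assert.Throws<ArgumentException>(() => account.Withdraw(-1m));
    }
}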

4. Pure Gatekeeper Code Review

This is less of an individual property of code review, but more a property of the review process. It’s where you have a person or committee in the department that acts as the Caesar of code, giving thumbs up or thumbs down to anything that anyone submits. Don’t get me wrong — the aim of this makes sense. You want somebody that’s doing a sanity check on the code and someone who more or less has his or her finger on the pulse of everything that’s happening.

The issue that occurs here is a subtle one that results from having the same person or people reviewing all code. Specifically, those people become the gatekeepers and the submitters become less concerned about writing good code and innovating and more principally concerned with figuring out how to please the reviewer(s). Now, if the reviewers are sharp and savvy, this may not be a big deal, though discouraging new ideas and personal growth is always going to have some negative impact. However, if the reviewers are not as sophisticated, this is downright problematic.

This is relatively easily addressed by rotating who performs code reviews or else distinguishing between “suggestion” and “order”. I’ve participated in review processes where both of these mitigating actions were applied, and it’s a help.

5. Tyrant Gatekeeper

In the previous example, I mentioned a hypothetical gatekeeper reviewer or committee with ultimate authority, and this is a subset of that. In this case, the reviewer(s) have ultimate yes/no authority and lean very heavily (possibly even combatively or derisively) on the “no” option. Where the previous example might stifle innovation or developer growth, this creates bottlenecks. Not only is it hard to get things approved, but developers will naturally start asking the reviewer(s) what to do at every turn rather than attempting to think for themselves.

In essence, this creates a state of learned helplessness. Developers are more concerned with avoiding negative feedback at the code reviews than with learning, doing a good job, or becoming able to make good decisions based on their own experience. As a result, the developers don’t really make any decisions and ask the reviewer(s) what to do at every step, waiting until they have time, if necessary. The reviewer(s) become a bottleneck in the process.

I have not personally witnessed or been subject to this form of code reviews, but I have heard of such a thing and it isn’t difficult to imagine this happening.

6. Discussion Drift

This occurs during a code review when a discussion of the specific implementation gets sidetracked by a more general discussion of the way things ought to be. Perhaps the code is instantiating an object in the constructor, and a reviewer recommends using dependency injection instead. From here, the participants in the review begin to discuss how nice it would be if the architecture relied on an IoC framework instead of whatever it is at the moment.

That’s both a valid and an interesting discussion, but it has nothing to do with some developer checking in implementation code within the framework of the existing architecture. Discussions can have merit and still not be germane to the task at hand.

7. Religious Wars

This occurs during a code review in the following context:

Reviewer 1: You didn’t put curly brackets around that one liner following your if statement. Please add them.
Reviewer 2: No, don’t do that. I hate that.
Reviewer 1: No, it’s better. That way if someone changes the code and adds another statement…. etc.
Reviewer 2: Studies have shown….
Review-ee: Uh, guys….

And so on and so forth. Code reviews can easily devolve into this sort of thing over purely or nearly-purely subjective matters. People get very entrenched about their own subjective preferences and feel the need to defend them. We see this in everything from political affiliation to rooting for sports teams. Subjective matters of preference in code are no different. Neither side is likely to convince the other during the scope of the review, but what is quite likely, and probably certain, is that time will be wasted.

If matters like that are part of the coding standards policy on the project, then it’s an open-and-shut case. If they’re not, they’re better left alone.

So, How Does a Good Code Review Go?

Having explained what I don’t like in a code review, I’ve provided some context for what I do find helpful. I’m going to outline a procedure that is simply an idea. This idea is subject to suggestions for improvement by others, and ongoing refinement from me. Also, this idea is geared toward the gatekeeper scenario. I may make another post on what I perceive to be the most effective method for voluntary(ish), peer-conducted code reviews where the reviewer(s) are not approval gate-keepers.

  1. Developer finishes the pre-defined unit of code and is ready to have it reviewed for promote/commit.
  2. Developer runs static analysis tools (e.g., StyleCop, FxCop, Code Contracts, NDepend, etc.) with configured rule-sets, correcting any errors they uncover.
  3. Once no static-check violations are present, developer notifies reviewer(s) of desire for code review.
  4. Reviewers run static analysis tools asynchronously and reject the request if any rules are violated.
  5. Reviewers examine the code for obvious mistakes not caught by static analysis and/or developer unit tests, and write unit tests that expose the deficiency. (Alternatively, they can run something like Pex)
  6. Developer makes reviewer unit tests pass or else convinces reviewer(s) why they don’t need to.
  7. With all unit tests passing, and all parties familiar with the code, a review meeting is setup (meeting can be skipped for smaller/less crucial code deliveries).
  8. Meeting proceeds as follows:
    1. A mediator who is neither developer nor reviewer is asked to attend to keep the meeting focused and on track.
    2. Reviewers point out something praiseworthy about the submitted code (cheesy, perhaps, but important for starting off in the spirit of cooperation)
    3. Reviewers examine code for redundancy (is anything copy/pasted, defined in many places, etc)
    4. Reviewers examine the code for usable API, perhaps by implementing classes in a sandbox, to highlight shortcomings, unintuitive interactions, weird couplings, etc
    5. Reviewers check for architectural consistency — does the class implement a base class that it should, duplicate the function of some other class in the suite, seem to be in the wrong location, etc.
    6. Reviewers perform a dependency analysis — what new dependencies does this introduce? Cyclical, global, temporal, etc. Are these dependencies acceptable?
    7. Reviewers analyze for code smells and anti-patterns.
    8. Reviewers compile a list of suggested changes for the developer.
    9. Meeting adjourned.
  9. Developer makes suggested changes that he agrees with.
  10. Developer appeals changes with which he doesn’t agree. This could involve the reviewer(s) providing “proof”, for example if they say that the developer shouldn’t do something because of X negative consequence, they should demonstrate that consequence somehow. This appeal could be resolved via proof/demonstration or it could go to a third party where the developers and reviewers each state their cases.
  11. Any suggested changes and the results of any appeals are then promoted/committed.

This process naturally addresses (1) and (2) of the things that I don’t like in that you’re having a code review and getting the procedural, easy stuff out of the way offline, prior to meeting. (3) is made more difficult by the fact that the reviewer(s) are given the opportunity to write unit tests that expose the badness about which they might be paranoid. (4) and (5) are addressed by the appeal process and the general concept that changes are suggestions rather than decrees. (6) and (7) are addressed by the mediator who has no skin in the game and will probably have a natural tendency to want to keep things short and sweet.

One drawback I can see to what I’m proposing here is that you could potentially undercut the authority of the reviewer if the person doing the reviews is, say, the most senior or high ranking person. Perhaps people would want that role a bit less if they could be officially second guessed by anyone. However, I think that creates a nice environment where good ideas are valued above all else. If I were in that role myself, I’d welcome the challenge of having to demonstrate/prove ideas that I think are good ones before telling others to adopt them. In the long run, being steered toward that kind of rigor makes you better at your craft. I also think that it might be something of a non-issue, given that people who wind up in these types of leadership and senior roles are often there based on the merit of their work and on successfully “proving” things over and over throughout the course of a career.