What To Avoid When Doing Code Reviews
Editorial Note: I originally wrote this post for the SmartBear blog. You can see the original here, at their site. Go on over and check it out!
Years ago, I had a senior software developer role in a shop where code review was part of the standard workflow. The way the review process worked was that anyone writing code had to submit the code for review to one of the senior developers of their choosing.
Over the course of time, I began to notice something interesting and a bit flattering: I was picked a lot to do reviews. When I first got an inkling of this, I thought I was merely flattering myself, but then I started to keep track and realized there was a definite trend. What was going on?
Was I an “easy grader” or a pushover? Was it just by chance? Was it that people thought I was some kind of legendary, super-developer? Turns out it was none of these things.
In search of the answer, I started to pay more attention to the nature of code reviews offered by other senior developers. In doing this, I came to realize that the answer lay not as much in what I was doing, but in what they were doing. Specifically, they were doing things that I wasn’t doing, and those actions were causing others to seek out my reviews.
This phenomenon was revealing to me, so I made a point to pay attention to the actions of different reviewers among the senior developers, and line them up with how much people sought out or avoided them for code review. Making these observations taught me valuable lessons about what to avoid when doing code reviews.
You may want to protest that a paradigm where reviewees choose their reviewers is far from universal, and that’s true. There are places, for instance, where an architect reviews everyone’s code no matter what, creating a captive audience, so to speak. But the lessons here aren’t about persuading people to choose you as a reviewer — they’re about being an effective reviewer. Code reviews will have significantly more positive impact when the reviewee engages in the review rather than simply enduring it.
Solving All The Things!
Have you ever started perusing a change set, and had a stream of thought similar to the following?
“Okay, let’s take a look at this class he wants me to review. Okay, first note: that name is terrible. Classes should be nouns. Next up, I don’t like the spacing in that constructor, and why is there a comment there? And I’m not a fan of the order of initialization of those variables. And speaking of which, I’d name those three variables different things. And why does this class have so many fields anyway? Alright, moving on — wait, why is that field mutable? Let’s go back. If it’s going to be mutable, it should have a different name.”
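To make that concrete, here is a minimal, purely hypothetical sketch in Java of the kind of class that might prompt a monologue like that; the class and member names (ProcessOrders, custName, and so on) are invented for illustration and don't come from any real review.

```java
// Hypothetical example: a class exhibiting the nits from the monologue above --
// a verb phrase for a class name, terse field names, a pile of fields, odd
// constructor spacing plus a comment that adds nothing, and one mutable field.
public class ProcessOrders {            // verb phrase; classes should be nouns

    private final String custName;      // reviewer would rather see "customerName"
    private final String custAddr;      // ...and "customerAddress"
    private final String custPhone;     // ...and "customerPhone"
    private final int    retryLimit;
    private int          orderCount;    // mutable, unlike every other field -- why?

    public ProcessOrders( String custName,  String custAddr,
                          String custPhone, int retryLimit ) {
        // set the fields  <-- says nothing the code doesn't already say
        this.custName   = custName;
        this.custAddr   = custAddr;
        this.custPhone  = custPhone;
        this.retryLimit = retryLimit;
        this.orderCount = 0;
    }

    public void recordOrder() {
        orderCount++;
    }
}
```

None of these nits is wrong to notice; the question the rest of this section deals with is how many of them are worth sending.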
Now, as you’re thinking all of those things, are you typing them into whatever vehicle you’re going to be using to provide feedback? I get it. Really, I do. But imagine being on the other end of that. After a point, the feedback ceases to be helpful and simply becomes demoralizing. Looking at that wall of criticism, a developer is going to think, “I’m obviously not cut out for this,” however well-intended you might have been.
Write down all of your critiques — but before you send them over, do some grooming. Some of them will be similar enough to compile into broader points, and others probably aren’t that big of a deal. Collate them, force rank them, and send over the most important three or four. This is a small enough number that the reviewee will fix them with alacrity and, more importantly, remember them and not make the same mistakes again.
You’re never going to beat someone into writing the exact code that you would have, and it’s counterproductive to try. (Honestly, you should have just written the code yourself in the first place if that’s your attitude.)
Large Batches
Throughout my career, there was nothing worse than a monstrous code review in some waterfall shop. It didn't matter if I was reviewing or being reviewed; getting together in a conference room for days to go over months' worth of code was just awful. But I could have lived with it if it were beneficial in spite of being awful. Trouble is, it wasn't really that beneficial.
When reviews happen infrequently and cover lots of ground, many things go wrong:
- The reviewee probably doesn’t even remember writing half of the code and can’t speak to what she was thinking when she did.
- There’s so much ground to cover that it’s hard to do so non-superficially.
- The more time that passes between writing the code and reviewing it, the more attached to the code, and defensive about it, the author becomes.
- The reviews become more likely to be rushed, ceremonial, and expendable.
Don’t fall into this trap, even if you have painful ALM tooling that makes it easy to do. Fight it. Review early and often.
Unprofessional Approach
As a consultant specializing in code analysis, I’m often called in to take a look at legacy code bases by managers and executives who look at me in nervous anticipation, saying, “So, how bad is it?” Sometimes, the people responsible for the code are present, and I’m forced to translate what I’m thinking (“this code makes me want to weep for humanity”) into a palatable message for all parties (“there are some definite opportunities for improvement”).
And while your situation of reviewing a peer’s code may be politically easier than mine, one thing is true in both situations: you gain nothing by being cruel. Calm down and deliver the message professionally.
Linus Torvalds doesn’t agree. But, even if I accepted his premise of being too skilled/important to be civil (which I don’t), he built Linux. You probably didn’t, so I wouldn’t cite him in defense of acting like Gregory House. Being mean or condescending isn’t good for the reviewee and it isn’t good for the team. It’s just a luxury item for you, if you’re into that sort of thing.
Code Review Should Be Valuable
If you’re tasked with performing code reviews, there’s no shortage of literature available to you on how to conduct them. You won’t find as much on what not to do. And yet, understanding what to avoid in code review can mean the difference between your review being a valuable part of your team’s delivery and your review simply being cruel and unusual punishment.
That’s interesting.
In the company I work for, we do code reviews with the code author sitting next to the reviewer, so we actually review the code together; there is no opportunity for grooming and organizing the critiques. On the other hand, it has two benefits:
– the author can immediately explain the parts of the code that are not clear to the reviewer
– the author is forced to take another look at his own code, and very often spots problems that the reviewer missed
Take a look at my comment; there's much more value in not doing a "live" review.
Do you do a lot of pair programming? If there are routine, side-by-side code reviews like that, it seems like the line between that and pairing might blur some.
No, we don't do pair programming on a regular basis. I mean, we often work together for a few minutes to tackle a specific problem, but we don't code whole features in pairs.
The best method I've ever found for doing code reviews is in person. It could be done over teleconference for remote workers. The first rule is that the code is never reviewed live; live review is only relevant for hallway usability tests and quick sanity checks. You select those who will be involved in the code review and stick a time on the calendar for when the review will be, likely 4-16 business hours into the future. Each participant does their review independently and physically documents their notes. At the in-person review, all members bring their review…
Sounds interesting, but to me it also seems quite expensive in man-hours.
Do you have any criteria for when a review should be a team effort and not just one-on-one?
Every single time before accepting changes into trunk.
That is a bit of a harsh statement.
Similarly, needing to code review 'all' the code changes going to production might also be a sign of problems.
If you're under SOX, it's actually mandated by law. If you're not under SOX, it's still the best practice.
It’s always some innocuous change that costs you real money.
Personally, I think this is the sort of thing that seems more expensive than it is. If such a review saves one nasty, downstream defect, it probably pays for itself and then some (i.e., someone doesn't spend a man-day or two debugging six months down the line, and the org doesn't suffer the reputation hit of the defect).
I definitely like the prep ahead of time concept (much like distributing a document/presentation for review before a meeting). It’s a pretty awful session when most of it is spent with various parties reading instead of collaborating.
I also like the idea of prioritizing based on frequency of findings by others. That is a good way to separate individual preferences from the general group standards that are either defined or emerge. And certainly good for getting rid of turf wars and silo-ing.
This is definitely the best part of it. It really helps highlight what is good enough versus what truly needs improvement. We're developers; we can always improve a different developer's code.