10 Ways to Improve Your Code Reviews
For a change of pace, I thought I’d channel a bit of Cosmo and offer a numbered article today. I’ve been asked by others to do a lot of code reviews lately, and while doing them, I’ve made some notes as to what works, what doesn’t, and how I can improve. Here are those notes, enumerated and distilled into a blog post.
1. Divide and distribute. Have one person look for duplicate code chunks, one look for anti-patterns, one check for the presence of best practices, etc. It is far easier to look through code for one specific consideration than it is to read through each method or line of code, looking for anything that could conceivably be wrong. This also allows some people to focus on coarser-granularity concerns (modules and libraries) with others focused on finer ones (classes and methods). Reading method by method sometimes obscures the bigger picture, and concentrating on the bigger picture glosses over details.
2. Don’t check for capitalization, naming conventions and other minutiae. I say this not because I don’t care about coding conventions (which I kind of don’t), but because this is a waste of time. Static analysis tools can do this. Your build can be configured not to accept checkins/deliveries that violate the rules. This is a perfect candidate for automation, so automate it. You wouldn’t waste time combing a document for spelling mistakes when you could turn on spell-check, and the same principle applies here.
3. Offer positive feedback. If the code review process is one where a developer submits code and then defends it while others try to rip it to pieces, things become decidedly adversarial, potentially causing resentment. But, even more insidiously, unmitigated negativity will tend to foster learned helplessness and/or get people to avoid code reviews as much as possible.
4. Pair. If you don’t do it, start doing it from time to time. If you do it, do it more. Pairing is the ultimate in code review. If developers spend more time pairing and catching each other’s mistakes early, code reviews after the fact become quicker and less involved.
5. Ask why, don’t tell what. Let’s say that someone gets a reference parameter in a method and doesn’t check it for null before dereferencing it (see the sketch after this list). Clearly, this is bad practice. But, instead of saying “put a null check there”, ask, “how come you decided not to check for null — is it safe to assume that your callers never pass you null?” Obviously, the answer to that is no. And, the thing is, almost any programmer will realize this at that point and probably say “oh, no, that was a mistake.” The key difference here is that the reviewee is figuring things out on his or her own, which is clearly preferable to being given rote instruction.
6. Limit the time spent in a single code review. Given that this activity requires collaboration and sometimes passive attention, attention spans will tend to wane. This, in turn, produces diminishing marginal returns in terms of effectiveness of the review. This isn’t rocket science, but it’s important to keep in mind. Short, focused code reviews will produce effective results. Long code reviews will tend to result in glossing over material and spacing out, at which point you might as well adjourn and do something actually worthwhile.
7. Have someone review the code simply for first-glance readability/understanding. There is valuable information that can be mined from the reaction of an experienced programmer to new code, sight unseen. Specifically, if the reaction to some piece of the code is “what the…”, that’s a good sign that there are readability/clarity issues. The “initial impression” litmus test is lost once everyone has studied the code, so having someone capture that at some point is helpful.
8. Don’t guess and don’t assume — instead, prove. Rather than saying things like “I think this could be null here” or “This seems like a bad idea”, prove those things. Unit tests are great. If you see a flaw in someone’s code, expose it with a failing unit test and task them with making it pass (see the test sketch after this list). If you think there’s a performance problem or design issue, support your contention with a sample program, blog post, or whitepaper. Opinions are cheap, but support is priceless. This also has the added benefit of removing any feelings of being subject to someone else’s whims or misconceptions.
9. Be prepared. If this is a larger, meeting-oriented code review, the people conducting the review should have read at least some of the code beforehand, and should be familiar with the general design (assuming that someone has already performed and recorded the results from suggestion 7). This allows the meeting to run more efficiently and the feedback to be more meaningful than a situation where everyone is reading code for the first time. When that happens, things get missed, because people feel uncomfortable making everyone else wait while they catch up.
10. Be polite and respectful. You would think that this goes without saying, but sadly, that seems not to be the case. In my career, I have encountered many upbeat, intelligent and helpful people, but I’ve also encountered plenty of people who seem to react to everything with scorn, derision, or anger. If you know more than other people, help them. If they’re making basic mistakes, help them understand. If they’re making the same mistakes multiple times, help them find a way to remember. Sighing audibly, rolling your eyes, belittling people, etc., are not helpful. It’s bad for them, and it’s bad for you. So please, don’t do that.
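To make the null-check scenario from item 5 concrete, here is a minimal Java sketch. The Order/OrderProcessor names and the tax figure are mine, purely for illustration; the only point is the unguarded dereference that the “ask why” conversation is about.

```java
// Hypothetical types, purely to illustrate the "ask why" conversation in item 5.
class Order {
    private final double subtotal;

    Order(double subtotal) { this.subtotal = subtotal; }

    double getSubtotal() { return subtotal; }
}

class OrderProcessor {
    // As submitted for review: the parameter is dereferenced with no guard,
    // so any caller passing null triggers a NullPointerException on this line.
    double totalWithTax(Order order) {
        return order.getSubtotal() * 1.08;
    }
}
```

Asking “is it safe to assume your callers never pass null?” usually gets the author to add a guard clause (or document the contract) on their own, which sticks far better than being told to do it.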
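And as a sketch of item 8’s “prove it with a failing unit test” idea, here is roughly what that looks like with JUnit 4, written against the hypothetical OrderProcessor above:

```java
import org.junit.Test;

public class OrderProcessorTest {
    // Documents the suspected flaw instead of debating it in the meeting.
    // Today this test fails (the call blows up with a NullPointerException);
    // once the author adds a guard clause that rejects null explicitly,
    // the expected IllegalArgumentException is thrown and the test passes.
    @Test(expected = IllegalArgumentException.class)
    public void totalWithTax_rejectsNullOrder() {
        new OrderProcessor().totalWithTax(null);
    }
}
```

If the team’s answer is instead “callers really never pass null here,” the test gets deleted and the review moves on; either way, the question is settled by something executable rather than by opinion.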
Feel free to chime in with additional tips, agreements, disagreements, etc.
Great tips. I definitely agree with pairing. We don’t do many formal code reviews at my job, but we pair program. And we’ve been doing more and more group refactoring/brainstorming sessions where we’ll look at the code and solve the problem as a group. I think that is helping a lot, but we need to work in more reviews, especially in higher risk areas of the code.
Hi Dan,
Thanks for reading, and I’m glad you enjoyed the tips. Group refactorings are something I’ve been thinking about doing as well. I like the idea of performing a collaborative code review on some piece of code that no one is responsible for (and thus defensive of) to sort of ruthlessly improve legacy code in a cooperative fashion. I’m glad to hear that’s working for someone else, as that may motivate me to implement it.
Yeah the group refactorings are working out well. We initially started putting some time aside on Fridays to do code katas. After we worked on a couple refactoring katas, we started picking bits of legacy code to clean up. Like you said, no one gets defensive so everyone gets involved and we can all laugh at and fix the awful code.
It’s funny – I lead a Friday session as well, and we sometimes do tentative or example refactorings. Looks like we have a lot of overlap in our MOs. 🙂