Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. You can also read a lot more, both on that blog and in their documentation, about CodeIt.Right’s analysis rules.
Today, I’ll do another installment of the CodeIt.Right Rules, Explained series. This is post number five in the series. And, as always, I’ll start off by citing my two personal rules about static analysis guidance, along with the explanation for them.
Never implement a suggested fix without knowing what makes it a fix.
Never ignore a suggested fix without understanding what makes it a fix.
It may seem as though I’m playing rhetorical games here. After all, I could simply say, “learn the reasoning behind all suggested fixes.” But I want to underscore the decision you face when confronted with static analysis feedback. In all cases, you must actively choose to ignore the feedback or address it. And for both options, you need to understand the logic behind the suggestion.
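To make that concrete, here’s the sort of suggestion most .NET analyzers offer in some form (the types and method names below are hypothetical, invented purely for illustration): flagging “throw ex;” inside a catch block. The fix only makes sense once you know that “throw ex;” resets the exception’s stack trace, while the suggested “throw;” preserves it.

```csharp
using System;

// Hypothetical types, invented purely for this illustration.
public class Order
{
    public int Id { get; set; }
}

public class OrderProcessor
{
    public void Process(Order order)
    {
        try
        {
            Save(order);
        }
        catch (InvalidOperationException ex)
        {
            Log(ex);
            // Analyzers flag "throw ex;" in a spot like this because it
            // resets the stack trace to this line, destroying debugging
            // information. The suggested "throw;" rethrows the same
            // exception with its original stack trace intact. Knowing that
            // is what lets you accept or ignore the suggestion deliberately.
            throw;
        }
    }

    private void Save(Order order) =>
        throw new InvalidOperationException("demo failure");

    private void Log(Exception ex) => Console.WriteLine(ex.Message);
}
```

Whether you apply that fix or suppress it, you’ve made an informed decision rather than a reflexive one.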
In that spirit, I’m going to offer up explanations for three more CodeIt.Right rules today.
Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, have a look at the automated code review tool, CodeIt.Right.
Many of us have a natural tendency to let little things pile up. This gives rise to the notion of so-called spring cleaning. The weather turns warm and going outside becomes reasonable, so we take the opportunity to do some kind of deep cleaning.
Of course, this may not apply to you. Perhaps you keep your house impeccable at all times, or maybe you simply have a cleaning service. But I’ll bet that, in some part of your life or another, you put little things off until they become bigger things. Your cruft may not involve dusty shelves and pockets of house clutter, but it probably exists somewhere.
Maybe it exists in your professional life in some capacity. Perhaps you have a string of half-written blog posts, or your inbox has more than a thousand messages. And, if you examine things honestly, you almost certainly have some item that has been skulking around your to-do list for months. Somewhere, we all have items that could use some tidying, cognitive or physical.
With that in mind, I’d like to talk about your code review process. Have you been executing it like clockwork for months or years? Perhaps it has become too much like clockwork. Turn a critical eye to it, and you might realize elements of it have become stale or superfluous. So let’s take a look at how you can apply a spring cleaning to your code review process.
Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, have a look at CodeIt.Right.
I can almost sense the indignation from some of you. You read the title and then began to seethe a little. Then you clicked the link to see what kind of sophistry awaited you. “There is no substitute for peer review.”
Relax. I agree with you. In fact, I think that any robust review process should include a healthy amount of human and automated review. And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards. Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants. So, please, give me a little latitude with the premise of the post.
Today I want to talk about how you might replace manual code review with automated code review alone, should the need arise.
Why Would The Need for This Arise?
You might struggle to imagine why this would ever prove necessary. Those of you with many years logged in the enterprise probably find this especially puzzling. But you might find manual code inspection axed from your process for any number of reasons other than “we’ve decided we don’t value the activity.”
First and most egregiously, a team’s manager might come along with an eye toward cost savings. “I need you to spend less time reading code and more time writing it!” In that case, you’ll need to move away from the practice, and going toward automation beats abandoning it altogether. Of course, if that happens, I also recommend dusting off your resume. For one thing, you have a penny-wise, pound-foolish manager. For another, management shouldn’t micromanage you at this level. Figuring out how to deliver good software should be your responsibility.
But let’s consider less unfortunate situations. Perhaps you currently work on a team of two, and number two just handed in her two weeks’ notice. Even if your organization backfills your erstwhile teammate, you have some time before the newbie can meaningfully review your code. Or perhaps you work on a larger team, but everyone gradually becomes so busy and fragmented in their responsibilities that no one has time for much manual peer review.
In my travels, this last case actually happens pretty frequently. And then you have to choose: abandon the practice altogether or move toward an automated version. Pretty easy choice, if you ask me.
Editorial Note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, have a look at CodeIt.Right to help you perform automated code reviews.
Today, I’ll do another installment of the CodeIt.Right Rules, Explained series. I have now made four such posts in this series. And, as always, I’ll start off by citing my two personal rules about static analysis guidance.
Never implement a suggested fix without knowing what makes it a fix.
Never ignore a suggested fix without understanding what makes it a fix.
It may seem as though I’m playing rhetorical games here. After all, I could simply say, “learn the reasoning behind all suggested fixes.” But I want to underscore the decision you face when confronted with static analysis feedback. In all cases, you must actively choose to ignore the feedback or address it. And for both options, you need to understand the logic behind the suggestion.
In that spirit, I’m going to offer up explanations for three more CodeIt.Right rules today.
Editorial Note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, take a look around at some of the other posts and at their offerings.
Today, I’d like to offer a somewhat lighthearted treatment of a serious topic. I generally find that this offers catharsis to the frustrated. And the topic of code review tends to lead to lots of frustration.
When talking about code review, I always make sure to offer a specific distinction. We can divide code reviews into two mutually exclusive buckets: automated and manual. At first, this distinction might sound strange. Most people reading this probably think of code reviews as activities with exclusively human actors. But I tend to disagree. Any static analyzer (including the compiler) offers feedback. And some tools, like CodeIt.Right, specifically regard their suggestions and automated fixes as an automation of the code review process.
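If counting the compiler as a reviewer seems like a stretch, consider a trivial sketch (the class and numbers here are hypothetical, made up for illustration). The compiler flags the dead variable with a warning, which is exactly the kind of note a human reviewer would otherwise have to write by hand.

```csharp
using System;

// A hypothetical class, invented purely for illustration.
public static class Pricing
{
    public static decimal ApplyDiscount(decimal price)
    {
        // The C# compiler emits warning CS0219 here: the variable is
        // assigned but its value is never used. That warning is code
        // review feedback, delivered by a machine instead of a teammate.
        decimal originalPrice = price;

        return price * 0.9m;
    }
}
```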
I would argue that automated code review should definitely factor into your code review strategy. It takes the simple things out of the equation and lets the humans involved focus on more complex, nuanced topics. That said, I want to ignore the idea of automated review for the rest of the post. Instead, I’ll talk exclusively about manual code reviews and, more specifically, where they tend to get ugly.
You should absolutely do manual code reviews. Full stop. But you should also know that they can easily go wrong and devolve into useless or even toxic activities. To make them effective, you need to exercise vigilance with them. And, toward that end, I’ll talk about some manual code review anti-patterns.
I am Erik Dietrich, founder of DaedTech. I’m a former programmer, architect, and IT management consultant, and current founder and CEO of Hit Subscribe.