DaedTech

Stories about Software


Transitioning from Manual to Automated Code Review

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right.

I can almost sense the indignation from some of you.  You read the title and then began to seethe a little.  Then you clicked the link to see what kind of sophistry awaited you.  “There is no substitute for peer review.”

Relax.  I agree with you.  In fact, I think that any robust review process should include a healthy amount of human and automated review.  And, of course, you also need your test pyramid, integration and deployment strategies, and the whole nine yards.  Having a truly mature software shop takes a great deal of work and involves standing on the shoulders of giants.  So, please, give me a little latitude with the premise of the post.

Today I want to talk about how one could replace manual code review entirely with automated code review, should the need arise.

Why Would The Need for This Arise?

You might struggle to imagine why this would ever prove necessary.  Those of you with many years logged in the enterprise, in particular, probably find this puzzling.  But you might find manual code inspection axed from your process for any number of reasons other than, “we’ve decided we don’t value the activity.”

First and most egregiously, a team’s manager might come along with an eye toward cost savings.  “I need you to spend less time reading code and more time writing it!”  In that case, you’ll need to move away from the practice, and going toward automation beats abandoning it altogether.  Of course, if that happens, I also recommend dusting off your resume.  In the first place, you have a penny-wise, pound-foolish manager.  And, secondly, management shouldn’t micromanage you at this level.  Figuring out how to deliver good software should be your responsibility.

But let’s consider less unfortunate situations.  Perhaps you currently work on a team of 2, and number 2 just handed in her two weeks’ notice.  Even if your organization back-fills your erstwhile teammate, you have some time before the newbie can meaningfully review your code.  Or, perhaps you work on a larger team, but everyone gradually becomes so busy and fragmented in responsibility that no one has time for much manual peer review.

In my travels, this last case actually happens pretty frequently.  And then you have to choose: abandon the practice altogether, or move toward an automated version.  Pretty easy choice, if you ask me.

Read More


CodeIt.Right Rules Explained, Part 4

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right to help you perform automated code reviews.  

Today, I’ll do another installment of the CodeIt.Right Rules, Explained series.  I have now made four such posts in this series.  And, as always, I’ll start off by citing my two personal rules about static analysis guidance.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I’m playing rhetorical games here.  After all, I could simply say, “learn the reasoning behind all suggested fixes.”  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.
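As a minimal sketch of what that choice can look like in code, assuming a C# codebase and an analyzer that honors the standard .NET suppression attribute (the rule ID below is invented purely for illustration, not an actual CodeIt.Right identifier):

```csharp
using System.Diagnostics.CodeAnalysis;

public class Order { }

public class OrderRepository
{
    // Choice 1: address the feedback.  Here, a hypothetical naming rule
    // flagged a lowercase method name, and we simply apply the fix.
    public void SaveOrder(Order order)
    {
        // ...
    }

    // Choice 2: ignore the feedback, but deliberately.  Record the decision
    // and the reasoning next to the code instead of silencing the rule
    // project-wide.  The rule ID here is hypothetical.
    [SuppressMessage("Naming", "XX0001:MethodsShouldUsePascalCasing",
        Justification = "Part of a published interface; renaming breaks consumers.")]
    public void save_order_legacy(Order order)
    {
        // ...
    }
}
```

Either way, you force yourself to articulate the logic behind the suggestion rather than just making the warning disappear.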

In that spirit, I’m going to offer up explanations for three more CodeIt.Right rules today.

Read More


Manual Code Review Anti-Patterns

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site. While you’re there, take a look around at some of the other posts and at their offerings.

Today, I’d like to offer a somewhat lighthearted treatment of a serious topic.  I generally find that this tends to offer catharsis to the frustrated.  And the topic of code review tends to lead to lots of frustration.

When talking about code review, I always make sure to offer a specific distinction.  We can divide code reviews into two mutually exclusive buckets: automated and manual.  At first, this distinction might sound strange.  Most people reading this probably think of code reviews as activities with exclusively human actors.  But I tend to disagree.  Any static analyzer (including the compiler) offers feedback.  And some tools, like CodeIt.Right, specifically regard their suggestions and automated fixes as an automation of the code review process.

I would argue that automated code review should definitely factor into your code review strategy.  It takes the simple things out of the equation and lets the humans involved focus on more complex, nuanced topics.  That said, I want to ignore the idea of automated review for the rest of the post.  Instead, I’ll talk exclusively about manual code reviews and, more specifically, where they tend to get ugly.

You should absolutely do manual code reviews.  Full stop.  But you should also know that they can easily go wrong and devolve into useless or even toxic activities.  To make them effective, you need to exercise vigilance with them.  And, toward that end, I’ll talk about some manual code review anti-patterns.

Read More


Static Analysis to Hide My Ignorance about Global Concerns

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, take a look at CodeIt.Right to help you automate elements of your code reviews.

“You never concatenate strings.  Instead, always use a StringBuilder.”

I feel pretty confident that any C# developer who has ever worked in a group has heard this admonition at least once.  This represents one of those bits of developer wisdom that the world expects you to just memorize.  Over the course of your career, these add up.  And once they do, grizzled veterans engage in a sort of comparative jousting for rank.  The internet encourages them and eggs them on.
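For anyone who has somehow escaped the admonition so far, here is a minimal sketch of what it refers to; the class and method names are mine, purely for illustration.  Repeated concatenation copies the accumulated string on every pass, while a StringBuilder appends into a growable buffer.

```csharp
using System;
using System.Text;

public static class ReportBuilder
{
    // Naive concatenation: strings are immutable, so each += allocates a new
    // string and copies everything accumulated so far.
    public static string WithConcatenation(string[] lines)
    {
        var report = "";
        foreach (var line in lines)
        {
            report += line + Environment.NewLine;
        }
        return report;
    }

    // The memorized advice: append into a growable buffer and materialize
    // the string once at the end.
    public static string WithStringBuilder(string[] lines)
    {
        var builder = new StringBuilder();
        foreach (var line in lines)
        {
            builder.AppendLine(line);
        }
        return builder.ToString();
    }
}
```

Collect enough of these bits of wisdom, and the dueling quotes start to fly.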

“How can you call yourself a senior C# developer and not know how to serialize objects to XML?!”

With two evenly matched veterans swinging language swords at one another, this volley may continue for a while.  Eventually, though, one falters and a pecking order is established.

Static Analyzers to the Rescue

I must confess.  I tend to do horribly at this sort of thing.  Despite having relatively good memory retention in theory, I have a critical Achilles’ heel in this regard.  Specifically, I can only retain information that interests me.  And building up a massive arsenal of programming language “how-could-yous” for dueling purposes just doesn’t interest me.  It doesn’t solve any problem that I have.

And, really, why should it?  Early in my career, I figured out the joy of static analyzers in pretty short order.  Just as the ubiquity of search engines means I don’t need to memorize algorithms, the presence of static analyzers saves me from cognitively carrying around giant checklists of programming sins to avoid.  I rejoiced in this discovery.  Suddenly, I could solve interesting problems and trust the equivalent of programmer spell check to take care of the boring stuff.

Oh, don’t get me wrong.  After the analyzers slapped me, I internalized the lessons.  But I never bothered to go out of my way to do so.  I learned only in response to an actual, immediate problem.  “I don’t like seeing warnings, so let me figure out the issue and subsequently avoid it.”
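As a minimal, hypothetical illustration of that workflow (the names here are mine), consider a warning the C# compiler itself raises.  The tool points it out, I learn why it matters, and then I stop writing code that triggers it.

```csharp
using System;

public static class WarningExample
{
    // The compiler flags CS0219 here: "The variable 'greeting' is assigned
    // but its value is never used."  That feedback, not a memorized
    // checklist, is what prompts me to look up the issue.
    public static void BeforeTheSlap()
    {
        var greeting = "Hello";   // CS0219: assigned but never used
        Console.WriteLine("Hello");
    }

    // Once I understand the warning, addressing it is trivial -- and I avoid
    // the pattern going forward.
    public static void AfterTheSlap()
    {
        var greeting = "Hello";
        Console.WriteLine(greeting);
    }
}
```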

Read More


CodeIt.Right Rules, Explained Part 3

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, take a look at CodeIt.Right and see how you can automate checks for and enforcement of these rules.

In what has become a series of posts, I have been explaining some CodeIt.Right rules in depth.  As with the last post in the series, I’ll start off by citing two rules that I, personally, follow when it comes to static code analysis.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I’m playing rhetorical games here.  After all, I could simply say, “learn the reasoning behind all suggested fixes.”  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I’m going to offer up explanations for three more CodeIt.Right rules today.

Read More