With Code Metrics, Trends are King

Editorial Note: I originally wrote this post for the NDepend blog.  Head over there to check out the original.  NDepend is a tool that’s absolutely essential to my IT management consulting practice, and it’s a good find for any developer, aspiring architects in particular.  Give it a look.

Here’s a scene that’s familiar to any software developer.  You sit down to work with the source code of a new team or project for the first time, pull the code from source control, build it, and then notice that there are literally thousands of compiler warnings.  You shudder a little and ask someone on the team about it, and he gives a shrug that is equal parts guilty and “whatcha gonna do?”  You shake your head and vow to get the warning situation under control.

Fumigation

If you’re not a software developer, what’s going on here isn’t terribly hard to understand.  The compiler is the thing that turns source code into a program, and the compiler warning is the compiler’s way of saying, “you’ve done something icky here, but not icky enough to be a show-stopping error.”  If the team’s code has thousands of compiler warnings, there’s a strong likelihood that all is not well with the code base.  But getting that figure down to zero warnings is going to be a serious effort.
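To make that concrete for readers outside the compiler-warning trenches, here’s a small, hypothetical C# snippet (my illustration, not code from any project discussed here).  It builds and runs just fine, but the compiler flags two of its lines as suspect:

```csharp
// Hypothetical illustration: code that compiles and runs, but draws the kind of
// warnings the compiler considers "icky but not fatal."
public class OrderProcessor
{
    private int _retryCount;   // CS0649: never assigned, so it will always be 0

    public decimal Total(decimal subtotal, decimal taxRate)
    {
        decimal discount;      // CS0168: declared but never used
        return subtotal * (1 + taxRate);
    }

    public int Retries()
    {
        return _retryCount;
    }
}
```

Neither warning stops the build, which is exactly why they pile up when nobody is watching the count.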

As I’ve mentioned before on this blog, I consult on different kinds of software projects, many of which are legacy rescue efforts.  So sitting down to a new (to me) code base and seeing thousands of warnings is commonplace for me.  When I point the runaway warnings out to the team, the observation is generally met with apathetic resignation, and when I point it out to management, it is generally met with some degree of shock.  “Well, let’s get it fixed, and why is it like this?!”  (Usually, they’re not shocked by the idea that there are warts — they know that based on the software’s performance and defect counts — but by the idea that such a concrete, easily tracked metric exists and is being ignored.)


Getting to Zero

At this point, I usually recommend against a targeted, specific effort to get down to zero warnings.  Don’t get me wrong; having zero warnings is a good goal.  But jamming on the development brakes and working tirelessly to bring the count to zero is fraught with potential problems:

  • Some of the warnings may require serious redesign.
  • Addressing some of the warnings may create production risk, particularly if testing is not comprehensive.
  • There is no business value, per se, to fixing compiler warnings.
  • The effort involved in getting to zero may be a lot more significant than anyone realizes up front.
  • “Get to zero” is easily gamed by altering the warning settings of the compiler for this code base.
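On that last bullet in particular: the quickest way to make a warning “go away” is to suppress it rather than fix it.  Here’s a hypothetical C# sketch of what that looks like; the count drops, but nothing about the code actually improves.  (Project-level tricks, such as lowering the compiler’s warning level, accomplish the same non-fix in bulk.)

```csharp
using System;

// Hypothetical illustration of gaming the metric rather than fixing the code.
[Obsolete("Use NewParser instead.")]
public class LegacyParser { }

public static class WarningGames
{
    public static void Demo()
    {
        // Fixing this call site would mean migrating to the replacement type.
        // Suppressing the warning instead makes the number go down with zero
        // actual improvement: the obsolete dependency is still there.
#pragma warning disable CS0618   // hides "'LegacyParser' is obsolete"
        var parser = new LegacyParser();
#pragma warning restore CS0618

        Console.WriteLine(parser);
    }
}
```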

The easiest time to address a warning is at the moment that it’s introduced.  “Oh, this line of code I just wrote results in a compiler warning, so I should write it in a different way.”  If you write that line of code and then don’t notice that it’s generated a warning, not only will it fall out of your short-term memory at some point, but you will probably write other lines of code that depend on that one, either explicitly or implicitly.  What would have been an easy reconsideration calcifies into the code and becomes harder to extract later.

It is at this point that managers wonder why people let warnings go and how they don’t notice them.  And this is where the number of warnings comes in.  If you’re working in a warning-free code base, a compile that generates a warning will be memorable.  You’ll notice the warning and fix it.  On the other hand, if you’re working in a code base with thousands of warnings, the number will constantly be changing, and it’s unlikely that you’ll even notice whether it was you or someone else who added warning number 3,494.  It’s such a daunting figure that you’d probably only notice the introduction of dozens or even hundreds of new ones.

So for a team with a considerable number of compiler warnings, the most sensible approach is probably a slow, steady drawing down of warnings with each iteration/release of the software.  Putting development on hold to engage in a massive cleanup is unlikely to be practical, but setting a course for general improvement is not only practical — it’s essential.
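One low-ceremony way to enforce that slow, steady drawdown is a “ratchet” step in the build.  The sketch below is my own, with hypothetical file names, and it assumes an earlier build step has already written the current compiler warning count to a text file.  It fails the build only when the count goes up, and it tightens the baseline whenever the count goes down:

```csharp
using System;
using System.IO;

// Hypothetical warning-count ratchet for a CI build. An earlier step writes the
// current warning count to current-warnings.txt; baseline.txt holds the last
// count the team accepted.
public static class WarningRatchet
{
    public static int Main()
    {
        int current = int.Parse(File.ReadAllText("current-warnings.txt").Trim());
        int baseline = int.Parse(File.ReadAllText("baseline.txt").Trim());

        if (current > baseline)
        {
            Console.Error.WriteLine($"Warning count rose from {baseline} to {current}; failing the build.");
            return 1; // a non-zero exit code fails most CI builds
        }

        // Ratchet the baseline downward whenever the team makes progress,
        // so ground that has been gained can't quietly be given back.
        File.WriteAllText("baseline.txt", current.ToString());
        Console.WriteLine($"Warning count is {current} (baseline was {baseline}).");
        return 0;
    }
}
```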

Code Metrics over Time

The compiler warning count is one of the simplest metrics you can gather about a code base, which is why I’m talking about it here.  But metrics, even simple ones, tend not to tell a compelling story when measured in a vacuum.  Is 3,494 warnings an excessive number?  Well, sure, assuming the team has all along been shooting for none.  But if the team only recently regarded this as worth paying attention to, and has pared that down from 25,000 over the course of a few months, then it’s doing a pretty good job.

For this reason, I strongly recommend that teams set up static analysis solutions that show trending and that they evaluate themselves based on the trends.  For a group ‘starting’ at 25,000 warnings, the developers can chase a steady decline, and for a group starting at zero warnings, the presence of even one is clear and visible.
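Dedicated tools like NDepend and SonarQube chart these trends for you, but the underlying idea is simple enough to sketch.  The hypothetical C# snippet below just appends one dated data point per build to a CSV (warning count plus, say, test coverage), so “are we improving?” becomes a chart rather than a guess:

```csharp
using System;
using System.IO;

// Minimal sketch of metric trending: one line per build, appended to a CSV that
// any spreadsheet or dashboard can chart. The file name and the particular
// metrics are hypothetical; real analysis tools do this out of the box.
public static class MetricTrendLog
{
    public static void Record(int warningCount, double coveragePercent)
    {
        string line = $"{DateTime.UtcNow:yyyy-MM-dd},{warningCount},{coveragePercent:F1}";
        File.AppendAllText("metric-trend.csv", line + Environment.NewLine);
    }
}
```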

This applies to other metrics that you may want to capture as well.  If you measure test coverage (a practice of which I’m not necessarily a huge fan), whether 50% coverage is ‘good’ or ‘bad’ is going to depend on how much coverage you had a week or a month ago.  What you’re really looking for isn’t being able to point to a number on some readout and say “we have 95% coverage.”  What you need to know is whether new code is being added alongside tests and whether previously untested legacy code is being characterized.

If you want something to improve, the first thing to do is measure it, and the second thing to do is continue to measure it.  So sit down with your team and decide on some goals and how to measure progress toward them.  This is important whether you’re a brand new, green-field team or a well-established team in maintenance mode.  And with the state of code analysis tools these days, there’s a way to measure pretty much anything you can dream up.


13 Comments
Matt
8 years ago

Love the metrics approach, Eric, thanks. Any idea if Visual Studio provides this, especially trends over time? Or tool recommendations?

Also, you hit the nail on the head, here:

“If you’re working in a warning-free code base, a compile that generates a warning will be memorable.”

When people are in a habit of writing good, quality code, they’re less likely to let things like warnings slip through. With a culture/history of laziness, lack of ownership etc., it’s the exact opposite. Starting off correctly, if possible, makes a world of difference.

Matt
8 years ago
Reply to  Matt

amusing side note: apparently I can upvote my own comments – ha. 🙂

Erik Dietrich
8 years ago
Reply to  Matt

The original article I wrote was on the NDepend website, and this tool offers exactly what I’m talking about. As Stefan points out, SonarQube is another option, though Sonar’s product offering is a broader one across a lot of languages, whereas NDepend is more C#/Visual Studio-specific.

Marc Clifton
8 years ago

Just write your code in Python or Ruby or a similar duck-typed scripting language. No compiler warnings, mwahaha!

Erik Dietrich
8 years ago
Reply to  Marc Clifton

🙂

Stefan Joachimsthaler
8 years ago

Hi Matt,

I can recommend SonarQube to track the metrics. There are several plugins to integrate it with Visual Studio, and there is also a plugin for Jenkins so it can be run on the CI server at every automated build.

Erik Dietrich
8 years ago

Have you run SonarQube for a team, or for yourself as an individual (or both)? Just idly curious.

Stefan Joachimsthaler
8 years ago
Reply to  Erik Dietrich

At first we picked a few developers to check it out. Then we integrated it into our Jenkins build so every developer was involved.
I think it is not necessary for an individual developer to use a tool like Sonar, but on a team everybody is more motivated to write clean code.

daf
8 years ago

“At this point, I usually recommend against a targeted, specific effort to get down to zero warnings.”
Just recently, I did the exact opposite at a client. I branched their code, fixed all ±1,000 warnings, and got it working on their TFS with a checked build, then refused to add any of their own branches until they had sorted their warnings out.
Turns out, it’s really not that difficult to slay 1000 warnings if there’s an incentive. Their code is now building in TFS.

Erik Dietrich
8 years ago
Reply to  daf

I applaud that, frankly. If you can get a client to give you that kind of authority to clear the hurdles, that’s good stuff!

Were potential regressions not a concern? I wouldn’t think the actual removal of the warnings would be particularly difficult — I’d be more concerned with the impact of what you have to change in order to do it.

daf
8 years ago
Reply to  Erik Dietrich

I’d say that the potential hidden issues are a greater concern! 99% of warnings I’ve come across (most especially in the .net world) can be sorted out without any fear of regression. Most of the noise in warning output is just that: noise. Unused variables, classes with methods that hide those of base classes without the “new” keyword (which is implied anyway when you do that — and normally means that the original author didn’t understand overrides, so I just replace with proper virtual/override to prevent the confusion that will occur some time down the line — because that’s what… Read more »
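(To make the hiding-versus-overriding point in daf’s comment concrete, here is a hypothetical C# illustration, not code from daf’s client: the first derived class draws the CS0108 “hides inherited member” warning and silently breaks polymorphism, while the virtual/override version says what the original author almost certainly meant.)

```csharp
public class ReportBase
{
    public string Title() { return "Report"; }   // non-virtual in the original design
}

// CS0108: Title() hides ReportBase.Title() without the "new" keyword, and any
// caller holding a ReportBase reference never sees this version.
public class HidingReport : ReportBase
{
    public string Title() { return "Quarterly Report"; }
}

// The replacement described above: make the base member virtual and override it,
// so the behavior is polymorphic and the intent is explicit.
public class FixedReportBase
{
    public virtual string Title() { return "Report"; }
}

public class QuarterlyReport : FixedReportBase
{
    public override string Title() { return "Quarterly Report"; }
}
```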

Andy Bailey
8 years ago

When I first joined the team I am working with now, I switched on lint warnings on my own and fixed the ~2000 warnings myself before committing the code changes. That took me all of 2 hours. Before committing the compiler settings, I brought up the subject of compiler warnings at the next stand-up meeting, with the aim of discussing it more completely at the next sprint meeting. The idea of being strict about things like the correct use of generics etc. had a lukewarm reception, the biggest objection being that it would result in more work. So I told them… Read more »

Erik Dietrich
8 years ago
Reply to  Andy Bailey

It sounds like maybe that’s the consensus approach to knocking this thing out. “Beg forgiveness, don’t ask permission.” Just go ahead, do it, and everything will probably work out. I dig it.