Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you’re there, have a look around at some of the documentation around code metrics and queries.
I’ve made no secret of my consulting practice, and I’ve even referred to it frequently, including the IT management consulting aspects. In short, one of my key offerings is helping strategic decision makers (CIOs/CTOs, dev managers, etc.) make tough or non-obvious calls about their applications and codebases. Can we migrate this easily to a new technology, or should we start over? Are we heading in the right direction with the new code that we’re writing? We’d like to start getting our codebase under test, but we’re not sure how (un)testable the code is — can you advise?
This is a niche position that sits fairly high on the organizational trust ladder, so it’s good work to have. Because of that, I recently got a question along the lines of, “how do you get that sort of work and then succeed with it?” In thinking about the answer, I realized it would make a good blog post, specifically for the NDepend blog. I think of this work as true consulting, and NDepend is invaluable to me as I do it.
Before I tell you how this works for me in detail, let me paint a picture of what I think of as a market differentiator for my specific services. I’ll do this by offering a tale of two different consulting pitfalls that people seem to fall into when tasked with these sorts of high-trust, advisory consulting engagements.
Editorial Note: I originally wrote this post for the NDepend blog. Head over there to check out the original. NDepend is a tool that’s absolutely essential to my IT management consulting practice, and it’s a good find for any developer, and for aspiring architects in particular. Give it a look.
Here’s a scene that’s familiar to any software developer. You sit down to work with the source code of a new team or project for the first time, pull the code from source control, build it, and then notice that there are literally thousands of compiler warnings. You shudder a little and ask someone on the team about it, and he gives a shrug that is equal parts guilty and “whatcha gonna do?” You shake your head and vow to get the warning situation under control.
If you’re not a software developer, what’s going on here isn’t terribly hard to understand. The compiler is the thing that turns source code into a program, and the compiler warning is the compiler’s way of saying, “you’ve done something icky here, but not icky enough to be a show-stopping error.” If the team’s code has thousands of compiler warnings, there’s a strong likelihood that all is not well with the code base. But getting that figure down to zero warnings is going to be a serious effort.
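To make that “icky, but not a show-stopper” idea concrete for non-developers, here’s a rough analogy sketched in Python (the `parse_quantity` function and its warning message are hypothetical, invented just for this illustration): the code flags something questionable but carries on running, exactly the way a compiler warning lets the build succeed anyway.

```python
import warnings

def parse_quantity(text):
    """Parse a quantity, warning on suspicious input instead of failing."""
    value = int(text)
    if value < 0:
        # Analogous to a compiler warning: "you've done something icky here,
        # but not icky enough to be a show-stopping error."
        warnings.warn("negative quantity; proceeding anyway", UserWarning)
    return value

# The call still succeeds; the warning is advisory, not fatal.
result = parse_quantity("-3")
```

Multiply that pattern by thousands of ignored warnings and you get the scene above: nothing is technically broken, but the advisories pile up until nobody reads them at all.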
As I’ve mentioned before on this blog, I consult on different kinds of software projects, many of which are legacy rescue efforts. So sitting down to a new (to me) code base and seeing thousands of warnings is commonplace for me. When I point the runaway warnings out to the team, the observation is generally met with apathetic resignation, and when I point it out to management, the observation is generally met with some degree of shock. “Well, let’s get it fixed, and why is it like this?!” (Usually, they’re not shocked by the idea that there are warts — they know that based on the software’s performance and defect counts — but by the idea that such a concrete, easily obtained metric exists and is being ignored.)
Editorial Note: I originally wrote this post for the NDepend blog. Head on over and check out the original. If software architecture interests you or you aspire to that title, there’s a pretty focused set of topics that will interest you.
Oh how I hope you don’t measure developer productivity by lines of code. As Bill Gates once ably put it, “measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.” No doubt, you have other, better-reasoned metrics that you capture as barometers of visible progress and quality. Automated test coverage is popular (though be careful with that one). Counts of defects, or trends in defect reduction, are another. And of course, in our modern, agile world, sprint velocity is ubiquitous.
But today, I’d like to venture off the beaten path a bit and take you through some metrics that might be unfamiliar to you, particularly if you’re no longer technical (or weren’t ever). But don’t leave if that describes you — I’ll help you understand the significance of these metrics, even if you won’t necessarily understand all of the nitty-gritty details.
Perhaps the most significant factor here is that the metrics I’ll go through can be tied, relatively easily, to stakeholder value in projects. In other words, I won’t just tell you the significance of the metrics in terms of what they say about the code. I’ll also describe what they mean for people invested in the project’s outcome.