Measure Your Code to Get Back on Track
Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site.
When I’m called in for strategy advice on a codebase, I never arrive to find all parties eager to tell me how wonderfully things are going. As I’ve mentioned here before, one of the main things I offer with my consulting practice is codebase assessments and subsequent strategic recommendations.
Companies pay for such a service when they have problems, and those problems drive questions. “Should we scrap this code and start over, or can we factor toward a better state?” “Can we move away from framework X, or are we hopelessly tied to it?” “How can we evolve without doing a forklift upgrade?”
To answer these questions, I assess their code (often using NDepend as the centerpiece for querying the codebase) and synthesize the resultant statistics and data. I then present a write-up with my answer to their questions. This also generally includes a buffet of options/tactics to help them toward their goals. And invariably, I (prominently) offer the option “instrument your code/build with static analysis to raise the bar and prevent backslides.”
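To make that last option concrete, here is a minimal sketch of the kind of rule I mean, written in NDepend’s CQLinq. It is purely illustrative: the rule name and the complexity threshold of 15 are assumptions for the example, not recommendations. The idea is simply that a quality bar, once codified, can fail the build whenever new code slips below it.

    // <Name>Illustrative rule: avoid overly complex methods</Name>
    // Flags any method whose cyclomatic complexity exceeds an
    // (assumed, example-only) threshold of 15.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 15
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity }

Wired into the build, a rule like this turns “don’t let the code backslide” from a suggestion into a measurement.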
I find it surprising and a bit dismaying how frequently clients want to gloss over this option in favor of others.
Using the Observer Effect for Good
Let me digress for a moment before returning to the subject of preventing backslides. In physics and other sciences, experimenters use the term “observer effect” to describe a situation in which the act of measuring a phenomenon changes the phenomenon itself, inextricably linking the two. This presents a problem, and indeed a paradox, for scientists: the mechanics of running the experiment contaminate the results of the experiment.
To make this less abstract, consider the example mentioned on the Wikipedia page. When you use a tire pressure gauge, you measure the pressure, but your measurement lets some of the air out of the tire. You will never actually know exactly what the pressure was before you measured it.
While this creates a problem for scientists, businesses can actually use it to their advantage. Often you will find that the simple act of measuring something with your team will create improvement. The agile concept of “big, visible charts” draws inspiration from this premise.
In discussing this principle, I frequently cite a dead simple example. On a Scrum team, the product owner has ultimate responsibility for making decisions about the software’s behavior. The team thus needs frequent access to this person, despite the fact that product owners often have many responsibilities and limited time. I recall one team that had trouble getting this access, so its members put a big piece of paper on the wall listing the number of hours the product owner spent with the team each day.
The number started low and improved noticeably over the course of a few weeks with no other intervention at all.