DaedTech

Stories about Software

Measure Your Code to Get Back on Track

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.

When I’m called in for strategy advice on a codebase, I never arrive to find a situation where all parties want to tell me how wonderfully things are going.  As I’ve mentioned before here, one of the main things I offer with my consulting practice is codebase assessments and subsequent strategic recommendations.

Companies pay for such a service when they have problems, and those problems drive questions.  “Should we scrap this code and start over, or can we factor toward a better state?”  “Can we move away from framework X, or are we hopelessly tied to it?”  “How can we evolve without doing a forklift upgrade?”

To answer these questions, I assess their code (often using NDepend as the centerpiece for querying the codebase) and synthesize the resultant statistics and data.  I then present a write-up with my answer to their questions.  This also generally includes a buffet of options/tactics to help them toward their goals.  And invariably, I (prominently) offer the option “instrument your code/build with static analysis to raise the bar and prevent backslides.”
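To make that last option concrete, NDepend expresses rules in CQLinq, its C#-based LINQ dialect for querying a codebase.  The sketch below, loosely modeled on the spirit of its built-in baseline rules, flags any changed method that got more complex than it used to be; the threshold of 10 is an illustrative assumption, not a recommendation.

    // <Name>Don't let changed methods get more complex</Name>
    // Compares each method against its baseline ("older") version.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CodeWasChanged() &&
          m.CyclomaticComplexity > 10 &&
          m.CyclomaticComplexity > m.OlderVersion().CyclomaticComplexity
    select new { m, m.CyclomaticComplexity }

Wired into the build, a rule like this fails loudly the moment someone leaves a method worse than they found it, which is exactly the backslide prevention I’m offering.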

I find it surprising and a bit dismaying how frequently clients want to gloss over this option in favor of others.

Using the Observer Effect for Good

Let me digress for a moment before returning to the subject of preventing backslides.  In physics, experimenters use the term “observer effect” to describe a vexing problem: the act of measuring a phenomenon changes that phenomenon’s behavior, inextricably linking the two.  This presents a problem, and indeed a paradox, for scientists, since the mechanics of running the experiment contaminate the results of the experiment.

To make this less abstract, consider the example mentioned on the Wikipedia page.  When you use a tire pressure gauge, you measure the pressure, but your measurement lets some of the air out of the tire.  You will never actually know what, exactly, the pressure was before you ran the experiment.

While this creates a problem for scientists, businesses can actually use it to their advantage.  Often you will find that the simple act of measuring something with your team will create improvement.  The agile concept of “big, visible charts” draws inspiration from this premise.

In discussing this principle, I frequently cite a dead simple example.  On a Scrum team, the product owner has ultimate responsibility for making decisions about the software’s behavior.  The team thus needs frequent access to this person, despite the fact that product owners often have many responsibilities and limited time.  I recall a team that had trouble getting this access, so they put a big piece of paper on the wall listing the number of hours the product owner spent with the team each day.

The number started low and improved noticeably over the course of a few weeks with no other intervention at all.

Read More

Using NDepend to Avoid Technical Debt

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.

The term “technical debt” has become ubiquitous in the programming world.  In the most general sense, it reflects the idea that you’re doing something easy in the moment, but that you’re going to pay for it, with interest, in the long run.  Conceived this way, avoiding technical debt would mean avoiding taking out these “time loans” in the first place.

There’s a subtle bit of friction, however, in using the (admittedly very helpful) concept of technical debt to communicate with business stakeholders.  For them, carrying debt is generally standard operating procedure and often a tool, and it doesn’t have quite the same connotation.  When developers talk about incurring technical debt, it’s overwhelmingly in the context of “we’re doing something ugly and dirty to get this thing shipped, and man are we going to pay for it later.”  That’s a far cry from the sort of debt an accountant or executive would understand: “I’m going to finance a fleet of trucks so that we can expand our delivery operation regionally.”  Taking on technical debt is colloquially more akin to borrowing money from a guy who breaks thumbs.

The reason for this slight dissonance between the usages is that technical debt in codebases is far more likely to be incurred unwittingly (or improvidently).  Why that happens could make up the subject of an entire post, but suffice it to say that developers are often shielded from business decisions and consequences.  It is thus harder for them to weigh all factors of such a tradeoff, a job that typically falls to people with titles like “business analyst” or “project manager.”

In light of this, let’s talk about avoiding the “we break thumbs” variety of tech debt, and how NDepend can help.  This sort of tech debt takes the form of things you realize probably aren’t great, but whose long-term damage you might not appreciate.
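As a sketch of what catching this early might look like, here’s an illustrative CQLinq query (the thresholds are assumptions to tune, not canon) that flags types with the classic debt profile: large, costly to change, and depended on by much of the codebase.

    // <Name>Large, heavily used types: likely debt hot spots</Name>
    warnif count > 0
    from t in JustMyCode.Types
    where t.NbLinesOfCode > 500 &&
          t.TypesUsingMe.Count() > 20
    select new { t, t.NbLinesOfCode }

The “interest rate” on such a type runs high because every one of those dependents pays a small tax each time it has to change.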

Read More

What Is Reasonable to Expect from Your IDE?

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.

If you’re a software developer, there’s a decent chance you’ve been embroiled in the debate over integrated development environments (IDEs) versus text editors.  Users of IDEs will tell you that they facilitate productivity to a degree that makes them indispensable, whereas users of text editors will tell you that the IDEs are too big and bloated, and hide too much of the truth of what’s happening from you.  In some cases, depending on the tech stack and personalities involved, this can become a pretty serious debate.

I have no intention whatsoever of wading into the middle of that debate here today.  Rather, I’d like to tackle this subject from a slightly different angle.  Let’s take at face value that text editors are comparatively feature-poor but nimble, and that IDEs are comparatively feature-rich but laden with abstraction.  Framed as a tradeoff, then, it becomes clear that realizing the benefits of one choice is critical for justifying it.  To put it more concretely, if you’re going to incur the overhead of an IDE, then it had better be doing you some good.

With this in mind, what should you expect from an IDE?  What should it do in order to justify its existence, and what should be a deal breaker if missing?   Let’s take a look at what’s reasonable to expect from IDEs.

Read More

Plugging Leaky Abstractions

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and play around with it.

In 2002, Joel Spolsky coined something he called “The Law of Leaky Abstractions.”  In software, an “abstraction” hides the complexity of an underlying system from those using it.  Examples abound, but for a quick understanding, think of an ORM hiding the details of database interaction from you.

The Law of Leaky Abstractions states that “all non-trivial abstractions, to some degree, are leaky.”  “Leaky” indicates that the abstraction fails to adequately hide its internal details.  Imagine, for instance, that while modifying the objects generated by your ORM, you suddenly needed to manage the particulars of some SQL query.  The abstraction leaked, forcing you to understand the details that it was supposed to hide.

Spolsky’s point may inspire a fatalistic feeling.  After all, if abstractions are doomed to leak, why bother with them in the first place?  But I like to consider it a caution against chasing perfection rather than a lament.

Abstractions in software help us the same way figurative language helps our prose.  Metaphors and analogies offer ease of understanding, but at the accepted price of lost precision.  If you press a metaphor enough, it will inevitably break down.  But that doesn’t render metaphors useless — far from it.

Thus, if you have a leaky abstraction, you can take steps to “plug” it, so to speak.  Spolsky says it himself, right in the law he coined: “all non-trivial abstractions, to some degree, are leaky.”  We have the ability to lessen that degree.
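To make the idea of plugging concrete, consider a repository that stops an ORM’s exception type at the boundary instead of letting it escape to callers.  Every name below is hypothetical; treat this as a minimal sketch of the technique, not any particular ORM’s API.

    using System;

    public class Customer { public int Id { get; set; } }

    // Stand-in for an ORM session; substitute your favorite ORM here.
    public interface IDbSession { T Get<T>(int id); }

    // The persistence-specific exception that would otherwise leak.
    public class DbQueryException : Exception
    {
        public DbQueryException(string message) : base(message) { }
    }

    // The abstraction's own exception, phrased in domain vocabulary.
    public class CustomerLookupException : Exception
    {
        public CustomerLookupException(string message, Exception inner)
            : base(message, inner) { }
    }

    public class CustomerRepository
    {
        private readonly IDbSession _session;

        public CustomerRepository(IDbSession session) { _session = session; }

        public Customer FindById(int id)
        {
            try
            {
                return _session.Get<Customer>(id);
            }
            catch (DbQueryException ex)
            {
                // Translate the persistence detail into the abstraction's
                // own terms, so the leak stops here instead of propagating.
                throw new CustomerLookupException(
                    $"Could not load customer {id}.", ex);
            }
        }
    }

Callers still have to reckon with failure, but they now reason about it in the abstraction’s vocabulary, which is precisely what it means to lessen the degree of the leak.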

Read More

Static Analysis Isn’t Just for Techies

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download a trial of NDepend and give it a spin.

I do a lot of work with and around static analysis tools.  Obviously, I write for this blog.  I also have a consulting practice that includes detailed codebase and team fact-finding missions, and I employed static analysis aplenty back when I had run-of-the-mill architect gigs.  Doing all of this, I’ve noticed that the practice has a reputation for being just for techies.

Beyond even that, people seem to perceive static analysis as the province of the uber-techie: architects, experts, and code statistics nerds.  Developing software is for people with bachelor’s degrees in programming, but static analysis is PhD-level stuff.  Static analysis nerds go off, dream up metrics, and roll them out for measuring developers and codebases.

This characterization makes me sad, and doubly so when I see something like test coverage or cyclomatic complexity used as a cudgel to bonk programmers into certain, predictable behaviors.  At its core, static analysis is not about standards compliance or behavior modification, though it can be used for those things.  Static analysis is about something far more fundamental: furnishing data and information about the codebase (without running the code).  And information about the code is clearly something everyone on or around the team wants.

To drive this point home, I’d like to cite some examples of less commonly known value propositions for static analysis within a software group.  Granted, all of these require a more indirect route than, “install the tool, see what warnings pop up,” but they’re all there for the realizing, if you’re so inclined.  One of the main reasons that static analysis can be so powerful is scale — tools can analyze 10 million lines of code in minutes, whereas a human would need months.
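For a taste of what furnishing information looks like in practice, here’s an illustrative CQLinq query producing the kind of digest a manager or analyst might track from release to release, no PhD required: where does the code (and thus, presumably, the effort) actually live?

    // Rank namespaces by size to see where the codebase's weight sits.
    from n in Application.Namespaces
    where n.NbLinesOfCode > 0
    orderby n.NbLinesOfCode descending
    select new { n, n.NbLinesOfCode }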

Read More