DaedTech

Stories about Software

Static Analysis for the Build Machine?

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re over there, download a trial of NDepend if you want to see some quantification of the tech debt in your codebase.

I remember my earliest experiences with static analysis.  Probably a decade ago, I started to read about it during grad school and to poke around with it at work.  Immediately, I knew I had discovered a powerful advantage for programmers.  These tools automated knowledge.

While I felt happy to share the knowledge with coworkers, their lack of interest didn’t disappoint me.  After all, it felt as though I had some sort of trade secret.  If those around me chose not to take advantage, I would shine by comparison.  (I have since, I’d like to think, matured a bit.)  Static analysis became my private competitive advantage — Sabermetrics for programmers.

So as you can imagine, running it on the build machine would not have occurred to me.  And that assumes a sophisticated enough setup that doing so made sense (not really the case back then).  Static analysis was my ace in the hole for writing good code — a personal choice and technique.

Fast forward a decade.  I have now grown up, worked with many more teams, and played many more roles.  And, of course, the technological landscape has changed.  All of that combined to cause a complete reversal of my opinion.  Static analysis and its advantages matter far too much not to use it on the build machine.  Today, I’d like to expand on that a bit.

What Does It Mean to Have Static Analysis on the Build Machine?

The first point of expansion should be to clarify what this means.  In the general sense, it means that the machine responsible for assembling your code into something deployable executes static analysis on said code.  Back when I first discovered static analysis, this would have meant pulling the code from source control on that machine, running the analysis, and building it.  These days, hopefully your build involves more automation than that.

To get a bit more detailed, a commit may trigger your build machine to kick off a build.  That could just mean compiling your code and packing it into a deployable format.  But shops commonly do other things, such as execute automated unit tests and check on code coverage.  And you can add static analysis to the mix.

You can certainly get creative with how you do this.  But two common (and not mutually exclusive) approaches include having the build generate a code health report and failing the build on egregious violations.
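
To make the second of those approaches concrete, here’s a minimal sketch in Java of a build-gate step.  Nearly everything about it is an assumption for illustration’s sake: the report file name and its one-violation-per-line, severity-prefixed format are invented, and real tools (NDepend among them) ship their own quality gate mechanisms.  The underlying principle is simply that a non-zero exit code from any build step fails the build.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Hypothetical build-gate step: scan a static analysis report and
    // fail the build (non-zero exit code) on egregious violations.
    public class AnalysisGate {
        public static void main(String[] args) throws IOException {
            // Invented report format: one violation per line, prefixed
            // with a severity, e.g. "CRITICAL: method too complex".
            List<String> lines = Files.readAllLines(Paths.get("analysis-report.txt"));

            long critical = lines.stream()
                    .filter(line -> line.startsWith("CRITICAL"))
                    .count();

            System.out.printf("%d violations found, %d critical.%n",
                    lines.size(), critical);

            // CI servers treat a non-zero exit code as a failed step,
            // which in turn fails the build.
            if (critical > 0) {
                System.exit(1);
            }
        }
    }

The design point is that the build machine, not any individual developer’s discretion, gets the final say on whether the code clears the bar.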

Elements of Helpful Code Documentation

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, check out GhostDoc, which can automatically generate help files for you.

If you spend enough years writing software, sooner or later, your chosen vocation will force you into reverse engineering.  Some weird API method with an inscrutable name will stymie you.  And you’ll have to plug in random inputs and examine the outputs to figure out what it does.

Clearly, this wastes your time.  Even if you enjoy the detective work, you can’t argue that an employer or client would view this as efficient.  Library and API code should not require you to launch a mystery investigation to determine what it does.

Instead, such code should come with appropriate documentation.  This documentation should move your focus from wondering what the code does to contemplating how best to leverage it.  It should make your life easier.

But what constitutes appropriate documentation?  What particular characteristics does it have?  In this post, I’d like to lay out some elements of helpful code documentation.
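
To give “appropriate” a concrete shape before digging into particulars, here’s a hedged illustration in Java.  The PriceCalculator class and its domain are invented for the example, and note that GhostDoc generates .NET XML doc comments; the Javadoc shown here is simply the analogous convention.  The point is that the documentation answers the questions you’d otherwise investigate by experiment: what goes in, what comes out, and how it fails.

    public class PriceCalculator {

        /**
         * Computes the price of an order after applying a discount.
         *
         * @param subtotal     the pre-discount order total; must not be negative
         * @param discountRate the discount as a fraction, where 0.1 means 10%;
         *                     expected to fall between 0 and 1
         * @return the discounted total
         * @throws IllegalArgumentException if subtotal is negative
         */
        public static double applyDiscount(double subtotal, double discountRate) {
            if (subtotal < 0) {
                throw new IllegalArgumentException("subtotal must not be negative");
            }
            return subtotal * (1 - discountRate);
        }
    }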

The Relationship between Static Analysis and Continuous Testing

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.

As an adult, I have learned that I have an introvert-type personality.  I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person.  So, learning about this characterization surprised me somewhat, but only until I fully understood it.

I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding.  This describes me to a tee.  However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously.  Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.

I received just such a question the other day.  The question, more or less, was, “if we have continuous testing, do we really need static analysis?”  And, just like that, I was stumped.  This didn’t square, and I wanted time to think on that.  Luckily, I’ve had a bit of time.  (This is why I love blogging.)

Continuous Testing, Defined

Before we go into the relationship between the concepts, let’s first clarify them.  That way, we’ll have no inadvertent misunderstandings via buzzword.

My first introduction to continuous testing was through a tool called NCrunch.  It bills itself as an "automated concurrent testing tool," which certainly offers more precision than "continuous testing."  NCrunch is awesome.  If you practice TDD or have unit tests, give it a look.  It runs your tests continuously as you write code, providing real-time, in-IDE visual feedback as you make changes.  Accidentally delete a line and watch the side of your editor window go immediately red.

In the intervening years, I have seen this term broaden to keep company with concepts such as continuous integration (CI) and continuous deployment (CD).  In agile environments, we integrate constantly and, ideally, deploy (somewhere) constantly.  Why give testing short shrift?  With continuous testing, your environments constantly pepper your build candidates with runtime tests, providing early feedback instead of holding it until near the end of the sprint.

So we have two concepts that we can generalize.  The first concept involves tightening the unit test feedback loop, while the second involves the same for integration and acceptance tests.  In both cases, we consistently test our code’s runtime behavior with a fast feedback loop.
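
As a tiny illustration of what lives inside that first loop, here’s an ordinary JUnit 5 test in Java, exercising the invented PriceCalculator from the documentation example earlier on this page.  Nothing about the test itself is special; a continuous testing tool simply re-runs tests like it on every edit (NCrunch does this for .NET, and tools such as Infinitest offer similar behavior for Java).

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        // A continuous testing tool re-runs tests like this on every
        // change; break the production code and the IDE margin goes
        // red within seconds instead of at the end of the sprint.
        @Test
        void tenPercentDiscountReducesSubtotalByTenPercent() {
            double total = PriceCalculator.applyDiscount(100.0, 0.10);
            assertEquals(90.0, total, 0.0001);
        }
    }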

The Case for a Team Standard

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, check out CodeIt.Right and give it a try.

In professional contexts, I think that the word “standard” has two distinct flavors.  So when we talk about a “team standard” or a “coding standard,” the waters muddy a bit.  In this post, I’m going to make the case for a team standard.  But before I do, I think it important to discuss these flavors that I mention.  And keep in mind that we’re not talking dictionary definition as much as the feelings that the word evokes.

First, consider standard as “common.”  To understand what I mean, let’s talk cars.  If you go to buy a car, you can have an automatic transmission or a standard transmission.  Standard represents a weird naming choice for this distinction since (1) automatic transmissions dominate (at least in the US) and (2) “manual” or “stick-shift” offer much better descriptions.  But it’s called “standard” because of historical context.  Once upon a time, automatic was a new sort of upgrade, so the existing, default option became boringly known as “standard.”

In contrast, consider standard as “discerning.”  Most commonly you hear this in the context of having standards.  If some leering, creepy person suggested you go out on a date to a fast food restaurant, you might rejoin with, “ugh, no, I have standards.”

Now, take these common contexts for the word to the software team room.  When someone proposes coding standards, the two flavors make themselves plain in the team members’ reactions.  Some like the idea, and think, “it’s important to have standards and take pride in our work.”  Others hear, “check your creativity at the gate, because around here we write standard, default code.”

What I Mean by Standard

Now that I’ve drawn the appropriate distinction, I feel it appropriate to make my case.  When I talk about the importance of a standard, I speak with the second flavor of the word in mind.  I speak about the team looking at its code with a discerning attitude.  Not just any code can make it in here — we have standards.

These can take somewhat fluid forms, and I don’t mean to be prescriptive.  The sorts of standards that I like to see apply to design principles as much as possible and to cosmetic concerns only when they have to.

For example, “all non-GUI code should be test driven” and “methods with more than 20 lines should require a conversation to justify them” represent the sort of standards I like my teams to have.  They say, “we believe in TDD” and “we view long methods as code smells,” respectively.  In a way, they represent the coding ethos of the group.

On the other side of the fence lie prescriptions like, “all class fields shall be prepended with underscores” and “all methods shall be camel case.”  I consider such concerns cosmetic, since they concern appearance and not design or runtime behavior.  Cosmetic concerns are not important… unless they are.  If the team struggles to read code and becomes confused because of inconsistency, then such concerns become important.  If the occasional quirk presents no serious readability issues, then prescriptive declarations about it stifle more than they help.

Having standards for your team’s work product does not mean mandating total homogeneity.
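
As a thought experiment on the first sort of standard, here’s a deliberately naive sketch of tooling support for it: a pure-JDK program, invented entirely for illustration, that flags methods longer than 20 lines so the team can have the justifying conversation.  Real analyzers parse the code properly; the crude brace counting here just shows that such a standard is checkable rather than a matter of taste.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.regex.Pattern;

    // Naive checker for a "long methods require a conversation" standard.
    // Usage: java LongMethodCheck SomeClass.java
    public class LongMethodCheck {

        // Crude heuristic for a method signature: a visibility keyword,
        // a parameter list, and an opening brace on the same line.
        private static final Pattern METHOD =
                Pattern.compile("^\\s*(public|protected|private).*\\(.*\\)\\s*\\{\\s*$");

        private static final int MAX_LINES = 20;

        public static void main(String[] args) throws IOException {
            List<String> lines = Files.readAllLines(Path.of(args[0]));
            for (int i = 0; i < lines.size(); i++) {
                if (!METHOD.matcher(lines.get(i)).matches()) {
                    continue;
                }
                int depth = 1;  // inside the method's opening brace
                int end = i;
                while (depth > 0 && ++end < lines.size()) {
                    for (char c : lines.get(end).toCharArray()) {
                        if (c == '{') depth++;
                        if (c == '}') depth--;
                    }
                }
                int length = end - i + 1;
                if (length > MAX_LINES) {
                    System.out.printf(
                            "Line %d: method spans %d lines -- worth a conversation?%n",
                            i + 1, length);
                }
            }
        }
    }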

When Is It Okay to Turn Off Static Analysis Guidance?

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, download CodeIt.Right and give it a try.

The balance among types of feedback drives some weird interpersonal dynamics.  For instance, consider the rather trite (if effective) management technique of the "compliment sandwich."  Managers precede and follow a negative piece of feedback with compliments.  In that fashion, the compliments form the "bun."

Different people and different groups have their preferences for how to handle this.  While some might bend over backward for diplomacy, others prefer environments where people hurl snipes at one another and simply consider it "passionate debate."  I have no interest in arguing for any particular approach — only in pointing out the variety.  As it turns out, we humans find this subject thorny.

To some extent, this complicated situation extends beyond human boundaries and into automated systems.  While we might not take quite the same umbrage as we would with humans, we still get frustrated.  If you doubt this, I challenge you to tell me that you have never yelled at a compiler because you were sure your code had no errors.  I thought so.

So from this perspective, I can understand the frustration with static analysis feedback.  Often, when you decide to enable a new static analysis engine or linting tool on a codebase, the feedback overwhelms.  28,326 issues in the code can demoralize anyone.  And so the temptation emerges to recoil from this feedback and turn off the tool.

But should you do this?  I would argue that, usually, you should not.  But situations do exist when disabling a static analyzer makes sense.  Today, I’ll walk through some examples of times when you might suppress a given warning.
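
By way of illustration, here’s what a deliberate, narrowly scoped suppression looks like in Java.  The class and its backstory are invented for the example; @SuppressWarnings is the JDK’s built-in mechanism, and most analyzers offer an equivalent attribute or suppression comment.

    import java.util.ArrayList;
    import java.util.List;

    public class LegacyAdapter {

        // Targeted, documented suppression: this raw List comes from a
        // legacy API we don't control, and inspection shows it only
        // ever holds Strings.  The justification sits next to the
        // suppression, and the scope is one method -- not the whole tool.
        @SuppressWarnings({"unchecked", "rawtypes"})
        static List<String> namesFrom(List rawNames) {
            return new ArrayList<String>(rawNames);
        }
    }

The instinct worth preserving is scope: silence the individual finding, with a reason attached, rather than the analyzer itself.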
