DaedTech

Stories about Software

Habits that Help Code Quality

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  If you like posts about static analysis, code quality, and the like, check out the rest of the blog.

When I’m called in to do a strategic assessment of a codebase, it’s never the result of everything being awesome.  That is, no one calls me up and says, “we’re ahead of schedule, under budget, and knocking it out of the park, so can you come in and tell us what you think of our code?”  Rather, I get calls when something isn’t going according to plan and the business people involved want to get some insight into what underlying causes there are in the code and in the team’s approach.

When the business gets involved this way, there is invariably a fiscal, operational concern, either overt or lurking just beneath the surface.  I’ll roll this up to the general consideration of “total cost of ownership” for the codebase.  The business is thus asking, “why are things proving more expensive than we thought?”

Typically, I come in, size up the situation, quantify it objectively, and then use analogies and examples to make clear what’s happening.  After I do this, pretty much without exception, the decision-makers to whom I’m speaking want to know what small things they can do, internally, to course correct.  This makes sense when you think about it.  If your doctor told you that your health outlook wasn’t great, you’d cross your fingers and say, “but I can fix it by changing my diet and exercise a little, right?”  You wouldn’t throw yourself on the table and say, “cut me open and make sure whatever you do is expensive!”

I am thus frequently asked, by both developers and management, “what are the little things we can do to improve and maintain code quality?”  As such, this seems like excellent fodder for a blog post.  Here are my tips, based on years of observing what correlates with healthy codebases and what correlates with distressed ones.

Read More

Why Automate Code Reviews?

Editorial Note:  I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  This is a new partner for whom I’ve started writing recently.  They offer automated code review and documentation tooling in the .NET space, so if that interests you, I encourage you to take a look.

In the world of programming, 15 years or so of professional experience makes me a grizzled veteran.  That certainly does not hold for the workforce in general, but youth dominates our industry via the absolute explosion of demand for new programmers.  Given the tendency of developers to move around between projects and companies, those 15 years have shown me a great deal of variety.

Perhaps nothing has exemplified this variety more than the code review.  I’ve participated in code reviews that were grueling, depressing marathons.  On the flip side, I’ve participated in ones where I learned things that would prove valuable to my career.  And I’ve seen just about everything in between.

Our industry has come to accept that peer review works.  In the book Code Complete, author Steve McConnell cites it, in some circumstances, as the single most effective technique for avoiding defects.  And, of course, it helps with knowledge transfer and learning.  But here’s the rub — implemented poorly, it can also do a lot of harm.

Today, I’d like to make the case for the automated code review.  Let me be clear.  I do not view this as a replacement for manual code review, but as a supplement and another tool in the tool chest.  But I will say that automated code review carries less risk of negative consequences than its manual counterpart.
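
To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; the tools I have in mind live in the .NET space) of what an automated review check looks like at its core: a comment a human reviewer would otherwise make by hand, codified so that it runs identically every time.  The threshold and rules here are arbitrary stand-ins, not anyone’s recommended settings.

```python
# Hypothetical automated review check: flag functions that exceed a
# length threshold or lack a docstring.  Real tools check far more,
# but the principle is the same: codify a review comment once, then
# apply it uniformly to every commit.
import ast
import sys

MAX_FUNCTION_LINES = 40  # arbitrary threshold, for illustration only

def review(path: str) -> list[str]:
    """Return human-readable findings for one source file."""
    findings = []
    tree = ast.parse(open(path).read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                findings.append(f"{path}:{node.lineno} {node.name} is "
                                f"{length} lines (max {MAX_FUNCTION_LINES})")
            if ast.get_docstring(node) is None:
                findings.append(f"{path}:{node.lineno} {node.name} "
                                f"has no docstring")
    return findings

if __name__ == "__main__":
    all_findings = [f for p in sys.argv[1:] for f in review(p)]
    print("\n".join(all_findings))
    sys.exit(1 if all_findings else 0)  # a non-zero exit fails the build
```

Notice that nothing here gets tired, plays favorites, or holds a grudge, which is exactly where the manual version tends to go wrong.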

Read More

Static Analysis for Small Business

Editorial note: I originally wrote this post for the NDepend blog.  Check out the original here, at their site.  While you’re there, download a trial of NDepend and give it a spin.  It’s one of the most important tools in my software consultant’s tool chest.

I was asked recently, kind of off the cuff, whether I thought that static analysis made sense for small business.  I must admit that the first response that popped into my head was a snarky one: “no, you can only reason about your code once you hit 1,000 employees.”  But I understood that the meat of the question wasn’t whether analysis should be performed but whether it was worth the investment, in terms of process and effort, and particularly in terms of tooling cost.

And, since that is a perfectly reasonable question, I bit my tongue against the snark.  I gave a short answer of, more or less, “yes,” but it got me thinking in longer form.  And today’s post is the result of that thinking.

I’d like to take you through some differences between small and large organizations.  And in looking at those differences, I will make the perhaps counterintuitive case that small companies and groups actually stand to benefit more from investing in static analysis tools and incorporating them into their software development processes.
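
As a preview of the investment side of that equation, consider how cheaply the practice can start.  The sketch below (Python, with a deliberately crude, simplified McCabe-style count) measures something real static analysis tools measure properly, and it would run comfortably on a two-person team’s build.

```python
# Hypothetical sketch: a crude cyclomatic complexity count, the kind of
# measurement a static analysis tool automates across an entire codebase.
import ast
import sys

# Node types treated as adding a decision point (simplified on purpose).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.And, ast.Or)

def complexity(func: ast.FunctionDef) -> int:
    """One plus the number of decision points in the function."""
    return 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(func))

for path in sys.argv[1:]:
    tree = ast.parse(open(path).read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            print(f"{path}:{node.lineno} {node.name} "
                  f"complexity={complexity(node)}")
```

The tooling cost question is real, but the process cost of getting a number like this on every build rounds to zero.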

Read More

Static Analysis and The Other Kind of False Positives

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at the NDepend site.  If you’re a fan of static analysis tools, do yourself a favor and take a look at the NDepend offering while you’re there.

A common complaint and source of resistance to the adoption of static analysis is the idea of false positives.  And I can understand this.  It requires only one case of running a tool on your codebase and seeing 27,834 warnings to color your view on such things forever.

There are any number of ways to react to such a state of affairs, though there are two that I commonly see.  These are as follows.

  1. Sheepish, rueful acknowledgement: “yeah, we’re pretty hopeless…”
  2. Defensive, indignant resistance: “look, we’re not perfect, but any tool that says our code is this bad is nitpicking to an insane degree.”

In either case, the idea of false positives carries the day.  With the first reaction, the team assumes that the tool’s results are, by and large, too advanced to be of interest.  In the second case, the team assumes that the results are too fussy.  In both of these, and in the case of other varying reactions as well, the tool is providing more information than the team wants or can use at the moment.  “False positive” becomes less a matter of “the tool detected what it calls a violation but the tool is wrong” and more a matter of “the tool accurately detected a violation, but I don’t care right now.”  (Of course, the former can happen as well, but the latter seems more frequently to serve as a barrier to adoption, and it’s what I’m interested in discussing today.)

Is this a reason to skip the tool altogether?  Of course not.  And, when put that way, I doubt many people would argue that it is.  But that doesn’t stop people from hesitating or procrastinating when it comes to adoption.  After all, no one is thinking, “I want to add 27,834 things to the team’s to-do list.”  Nor should they — that’s clearly not a good outcome.

With that in mind, let’s take a look at some common sources of false positives and the ways to address them.  How can you ratchet up the signal-to-noise ratio of a static analysis tool so that it is valuable rather than daunting?
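
As a preview of one such tactic, here is a sketch of a “baseline” approach: record the existing findings once, then fail the build only on findings that are new relative to that baseline.  The team digs out of the hole incrementally instead of drowning on day one.  The file name and the line-per-finding input format below are hypothetical, purely to show the shape of the idea.

```python
# Hypothetical noise-reduction tactic: compare today's analysis findings
# against a recorded baseline, and complain only about NEW findings.
import json
import sys

BASELINE_FILE = "analysis_baseline.json"  # illustrative name

def load_baseline() -> set[str]:
    """Previously accepted findings, or an empty set on the first run."""
    try:
        with open(BASELINE_FILE) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

def main(findings: list[str]) -> int:
    baseline = load_baseline()
    new_findings = [f for f in findings if f not in baseline]
    for finding in new_findings:
        print("NEW:", finding)
    return 1 if new_findings else 0  # non-zero fails the build

if __name__ == "__main__":
    # Findings arrive one per line on stdin, e.g. piped from the tool.
    sys.exit(main([line.strip() for line in sys.stdin if line.strip()]))
```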

Read More

Logging for Continuous Integration

Editorial Note: I originally wrote this post for the LogEntries blog.  Check out the original here, at their site.  While you’re there, take a look at the product offering, which includes storage, aggregation, and sophisticated search of your log information.

If you look at the title of this post, you’re probably thinking to yourself, “huh, that’s never really come up.”  Of course, it’s possible that you’re not.  But, in my travels as a consultant helping dev teams with practice and gap analysis, I’ve never had anyone ask me, “what do you recommend in terms of a logging solution for continuous integration?”

But hey, this is an easily solved problem, right?  After all, continuous integration means Jenkins, and Jenkins has an application log.  Perhaps that’s why no one is asking about it!  Now, all that’s left is to sit back and bask in the glow of every compiler warning your application has ever generated since the dawn of time.

What Actually Is Continuous Integration?

Now, I know what you’re thinking.  TeamCity is another continuous integration tool, and it also has logs.  Or what about TFS or Bamboo?  Jenkins doesn’t have sole possession of the continuous integration mind share.  There are any number of products designed for this purpose.

And thus we arrive at a popular misconception.

Continuous integration is not Jenkins.  It’s not TeamCity.  It’s not TFS or Bamboo.  And it’s also not the non-empty set that results from choosing one of those tools.  Continuous integration is a practice, not a tool.  And it’s actually a simple practice at that.

If you go back to basics via Wikipedia, you’ll find this definition.

Continuous integration (CI) is the practice, in software engineering, of merging all developer working copies to a shared mainline several times a day.

Notice it does not say, “CI is where you hook your Github account up to Jenkins.”  There is no mention of any particular tool; it just describes the idea of developers’ source code never getting very far out of sync.  Cringe (appropriately), but you could just as easily achieve this by having developers collaborate using Notepad to edit source files housed on a shared Dropbox account.
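
In fact, to underscore how incidental the tooling is, the entire apparatus fits in a sketch of a few dozen lines.  The branch name and the test command below are stand-ins; the loop itself (watch the shared mainline, build and test each change, report the result) is all any CI server fundamentally does.

```python
# Hypothetical bare-bones CI: poll the shared mainline and verify each
# new commit.  Jenkins, TeamCity, et al. are conveniences layered on
# top of exactly this practice.
import subprocess
import time

POLL_SECONDS = 60  # illustrative polling interval

def mainline_head() -> str:
    """Current commit hash of the shared mainline (branch name assumed)."""
    result = subprocess.run(["git", "rev-parse", "origin/main"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

last_seen = None
while True:
    subprocess.run(["git", "fetch", "origin"], check=True)
    current = mainline_head()
    if current != last_seen:
        last_seen = current
        subprocess.run(["git", "checkout", current], check=True)
        # Stand-in build-and-test step; substitute your real build here.
        build = subprocess.run(["python", "-m", "pytest"])
        print(current[:8], "PASS" if build.returncode == 0 else "FAIL")
    time.sleep(POLL_SECONDS)
```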

Read More