DaedTech

Stories about Software

DaedTech Digest: Static Analysis, Doco and Dependency Death Stars

Happy Friday, everybody.  I’m still figuring this digest thing out, so please bear with me.  But no matter how I iterate, what you’ll get is an aggregation of links to posts that I’ve written for my Hit Subscribe business.

I’m thinking I’ll do picks each week as well as the digests.  You know how podcast panelists offer “picks” at the end of a lot of podcasts?  Same idea here.  At least, unless this turns out to be a bad idea, in which case I’ll stop.

Picks

  • Jogging without headphones.  For years, I’ve doubled up on productivity by listening to podcasts or audiobooks while jogging, if not watching Pluralsight courses.  But recently a terrible pair of Bluetooth headphones (seriously, don’t buy them; shop around for a competitor) broke, and I just went jogging with my thoughts.  It’s been a huge boost to the amount of creative thinking I do in a week.
  • I cannot rave enough about the payroll service Gusto.  If you need to run payroll for your business, these guys make it seriously easy, even paying taxes for you automatically.  Before I switched, I’d been using Intuit’s online payroll, which was the user experience equivalent of a grizzly bear carrying a raccoon in a backpack, both of them mauling you.  Gusto restored my faith in humanity.
  • Every now and then, I get nostalgic for computer games from my childhood.  When I do, Abandonia usually has me covered.

The Post Digest

And now, the post digest.

  • I write a lot of posts about static analysis, since it’s something of a specialty of mine.  Here’s another primer I did about it for TechTown training.  In it, I evangelized a bit, encouraging readers to look past the really dull name and see that underneath it lies a cool concept.
  • Speaking of static analysis, I wrote a post for NDepend entitled “Code Quality Metrics: Separating the Signal from the Noise.”  There are a lot of reductionist code metrics out there, so I did my part to add some nuance to the world.
  • For SubMain, I wrote a post taking you through the different documentation tool options that programmers have.  It covers user manuals, release notes, and all the traditional stuff, but also newer approaches that generate documentation automatically, at least as a starting point.
  • In another NDepend post, I talk about a novel way to settle the inevitable squabbles among a development team.  Make your arguments, and then prove them visually, using automated tooling to paint pictures of your codebase.  My personal favorite for proving a point has always been the dependency death star.
  • And, finally, I wrote a post for Monitis trying to get a little more specific around the generalized and often hype-y term, “big data.”  This post took a longer view, tracing a history of the concept back to the early days of Java and .NET.


Software Grows Too Quickly for Manual Review Only

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right, which can help you with automated code reviews.

How many development shops do you know that complain about having too much time on their hands?  Man, if only we had more to do.  Then we wouldn’t feel bored between completing the perfect design and shipping to production … said no software shop, ever.  Software proliferates far too quickly for that attitude ever to take root.

This happens in all sorts of ways.  Commonly, the business or the market exerts pressure to ship.  When you fall behind, your competitors step in.  Other times, you have the careers and reputations of managers, directors, and executives on the line.  They’ve promised something to someone and they rely on the team to deliver.  Or perhaps the software developers apply this drive and pressure themselves.  They get into a rhythm and want to deliver new features and capabilities at a frantic pace.

Whatever the exact mechanism, software tends to balloon outward at a breakneck pace.  And then quality scrambles to keep up.

Software Grows via Predictable Mechanisms

While the motivation for growth may remain nebulous, the mechanisms for that growth do not.  Let’s take a look at how a codebase accumulates change.  I’ll order these by pace, if you will, from slowest to fastest.

  • Pure maintenance mode, in SDLC parlance.
  • Feature addition to existing products.
  • Major development initiatives going as planned.
  • Crunches (death marches).
  • Copy/paste programming.
  • Code generation.

Of course, you could offer variants on these themes, and they’re not mutually exclusive.  But nevertheless, the idea remains.  Loosely speaking, you add code sparingly to legacy codebases in support mode.  And then the pace increases until you get so fast that you literally write programs to write your programs.
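
To make that last mechanism concrete, here’s a minimal, hypothetical sketch of a program that writes a program.  Every name in it is mine, invented for illustration, but the shape mirrors what real generators (T4 templates, ORM scaffolding tools) do at vastly larger scale.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// A toy code generator: hand it a class name and some property names,
// and it emits compilable C# source. Real generators do exactly this,
// just with far more input and far more output.
public static class PocoGenerator
{
    public static string Generate(string className, IEnumerable<string> propertyNames)
    {
        var source = new StringBuilder();
        source.AppendLine($"public class {className}");
        source.AppendLine("{");
        foreach (var name in propertyNames)
        {
            source.AppendLine($"    public string {name} {{ get; set; }}");
        }
        source.AppendLine("}");
        return source.ToString();
    }

    public static void Main()
    {
        // Two lines of input yield an entire class that no human reviews.
        Console.WriteLine(Generate("Customer", new[] { "Name", "Email" }));
    }
}
```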

The Quality Conundrum

Now, think of this in another way.  As you go through the list above, consider what quality control measures tend to look like.  Specifically, they tend to vary inversely with the speed.

Even in a legacy codebase, fixes tend to involve a good bit of testing for fear of breaking production customers.  We treat things in production carefully.  But during major or greenfield projects, we might let that slip a little, in the throes of productivity.  Don’t worry — we’ll totally do it later.

But during a death march?  Pff.  Forget it.  When you slog along like that, tons of defects in production qualify as a good problem to have.  Hey, you’re in production!

And it gets even worse with the last two items on my bulleted list.  I’ve observed that the sorts of shops and devs that value copy/paste programming don’t tend to worry a lot about verification and quality.  Does it compile?  Ship it.  And by the time you get to code generation, the problem becomes simply daunting.  You’ll assume that the tool knows what it’s doing and move on to other things.
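
To put a hypothetical face on the copy/paste problem, consider a sketch like the following.  Every type and name in it is invented, but the defect pattern is the classic one: the code compiles cleanly, so “does it compile? ship it” waves it through, while a static analyzer immediately flags the variable that gets assigned and never used.

```csharp
// All of these types are invented for illustration.
public class Address
{
    public string Street;
    public string City;
}

public class Order
{
    public int BillingAddressId;
    public int ShippingAddressId;
    public string BillingStreet, BillingCity, ShippingStreet, ShippingCity;
}

public class AddressPopulator
{
    // The second block below was pasted from the first, and one
    // identifier never got updated. The compiler stays silent, but an
    // analyzer flags shippingAddress as assigned and never used.
    public void PopulateAddresses(Order order)
    {
        var billingAddress = Lookup(order.BillingAddressId);
        order.BillingStreet = billingAddress.Street;
        order.BillingCity = billingAddress.City;

        var shippingAddress = Lookup(order.ShippingAddressId);
        order.ShippingStreet = billingAddress.Street; // oops: should be shippingAddress
        order.ShippingCity = billingAddress.City;     // oops: should be shippingAddress
    }

    // Hardcoded to keep the sketch self-contained; imagine a database call.
    private Address Lookup(int id) => new Address { Street = "123 Main St", City = "Springfield" };
}
```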

As we go faster, we tend to spare fewer thoughts for quality.  Usually this happens because of time pressure.  So ironically, when software grows the fastest, we tend to check it the least.

Read More

How to Evaluate Your Static Analysis Process

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download a trial of NDepend and take a look at its static analysis capabilities.

I often get inquiries from clients and prospects about setting up and operationalizing static analysis.  This makes sense.  After all, we live in a world short on time and with software developers in great demand.  These clients always seem to have more to do than bandwidth allows.  And static analysis effectively automates subtle but important considerations in software development.

Specifically, it automates peer review to a certain extent.  The static analyzer acts as a non-judging, mute reviewer of sorts.  It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment.  And, finally, it helps you out by acting as architect.  Team members can learn from the tool’s guidance.
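
To illustrate that mute-reviewer role with a hypothetical example of my own (this reflects a common .NET guideline, not any one tool’s specific rule), here’s the sort of finding an analyzer surfaces without anyone taking offense:

```csharp
using System.IO;

public class LogReader
{
    // A human reviewer can easily skim past this: the StreamReader is
    // never disposed, so every call leaks a file handle until the
    // garbage collector eventually cleans up. Analyzers automate
    // exactly this kind of "dispose objects before losing scope" check.
    public string ReadFirstLine(string path)
    {
        var reader = new StreamReader(path);
        return reader.ReadLine();
    }

    // The suggested fix: a using statement guarantees disposal, even
    // if ReadLine throws.
    public string ReadFirstLineSafely(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return reader.ReadLine();
        }
    }
}
```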

So, as I’ve said, receiving setup inquiries doesn’t surprise me.  And I applaud these clients for pursuing this path of improvement.

What does surprise me, however, is how few organizations seem to ask another, related question.  They rarely ask for feedback about the efficacy of their currently implemented process.  Many organizations seem to consider static analysis implementation a checkbox kind of activity.  Have you done it?  Check.  Good.

So today, I’ll talk about checking in on an existing static analysis implementation.  How should you evaluate your static analysis process?

Read More

CodeIt.Right Rules Explained, Part 6

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, follow the tag to have a look at the rest of the series.

Let’s do another installment of the CodeIt.Right Rules Explained series.  Today we have post number six in that series, with three more rules.  As always, I’ll start off with my two personal rules about static analysis guidance, along with an explanation for them.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I play rhetorical games here.  After all, I could just say, “learn the reasoning behind all suggested fixes.”  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestions.

In that spirit, I’ll offer up explanations for our three rules without further ado.
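
As a stand-in illustration (hypothetical, and not one of the post’s three rules), take a suggestion common to most .NET analyzers: specify a StringComparison.  Blindly applying it changes nothing you can see; understanding it tells you when you actually want the default, culture-sensitive behavior instead.

```csharp
using System;

public static class FileTypeChecker
{
    // Analyzers typically flag this: EndsWith without a StringComparison
    // uses the current culture, which can produce surprising matches and
    // costs more than an ordinal comparison.
    public static bool LooksLikeLogFile(string fileName)
    {
        return fileName.EndsWith(".log");
    }

    // The suggested fix, applied with understanding: a file extension is
    // a symbol, not a word, so ordinal comparison is both correct and
    // faster. For user-facing text, you might legitimately choose to
    // ignore the suggestion and compare by culture.
    public static bool IsLogFile(string fileName)
    {
        return fileName.EndsWith(".log", StringComparison.Ordinal);
    }
}
```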

Read More

CodeIt.Right Rules Explained, Part 5

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  You can also read a lot more, both on that blog and in their documentation, about CodeIt.Right’s analysis rules.

Today, I’ll do another installment of the CodeIt.Right Rules Explained series.  This is post number five.  And, as always, I’ll start off by citing my two personal rules about static analysis guidance, along with the explanation for them.

  • Never implement a suggested fix without knowing what makes it a fix.
  • Never ignore a suggested fix without understanding what makes it a fix.

It may seem as though I’m playing rhetorical games here.  After all, I could simply say, “learn the reasoning behind all suggested fixes.”  But I want to underscore the decision you face when confronted with static analysis feedback.  In all cases, you must actively choose to ignore the feedback or address it.  And for both options, you need to understand the logic behind the suggestion.

In that spirit, I’m going to offer up explanations for three more CodeIt.Right rules today.
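
Again, purely as a hypothetical stand-in rather than one of the post’s actual rules, take the common analyzer guidance against catching general exception types.  Whether you address that feedback or actively ignore it should follow from understanding it:

```csharp
using System;
using System.IO;

public static class ConfigLoader
{
    // Analyzers commonly flag this: catching System.Exception swallows
    // programming errors (null references and the like) along with the
    // I/O failures you actually meant to handle.
    public static string LoadFlagged(string path)
    {
        try { return File.ReadAllText(path); }
        catch (Exception) { return string.Empty; }
    }

    // Understanding the rule leads to the real fix: catch only what you
    // can meaningfully handle, and let everything else surface loudly.
    public static string Load(string path)
    {
        try { return File.ReadAllText(path); }
        catch (IOException) { return string.Empty; }
    }
}
```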

Read More