DaedTech

Stories about Software

How Code Review Saves You Time

Editorial note: I originally wrote this post for the SmartBear blog.  Check out the original here, at their site.  While you’re there, take a look around at their offering.

Physical labor is one of the most strangely enduring mental models for knowledge work.  Directors and managers of software development the world over reactively employ it when nudged out of their comfort zones, for instance.  “What do you mean ‘pair programming’ — we’ll get half of the work done for the same payout in salary!”  And that’d be a reasonable argument if the value of software were measured in “characters typed per minute.”

Most of the skepticism of activities like unit testing and code review originates from this same “knowledge work as labor” confusion.  On this view, the core value of software resides in the verbatim contents of the source code files, so stuffing all of the features into them ahead of the deadline is critical.  Testing and reviewing are “nice-to-haves,” time and budget permitting.

The classic response from people arguing for these practices is thus one of worth.  It goes something like this: “sure, you can cut those activities, but your quality will dip and there will be more escaped defects.”  In other words, these ‘extra’ activities pay for themselves by making your outfit look less amateurish.  That argument often works, but not always.  Some decision-makers, backs truly to the wall, say “I don’t care about quality — my job depends on shipping on June 19th, and we’re GOING to ship on June 19th, whether it’s a finished product or whether it’s a bag of broken kazoos and cyanide with a bow on it.”

I’d like to take a different tack today and fight the time argument with time arguments.  Instead of “sure, code reviews take extra time but they’re worth it,” I’d like to explore ways that they actually save time.  Whether they save more time than they take is going to vary widely by situation, so please don’t mistake my intent; I’m not looking to argue that they’ll save you net time, but rather that they are not exclusively an investment of time.

My Realizations about Software Consulting

I consume a lot of audio books.  Most recently, this habit led me to listen to a book by Alan Weiss called Million Dollar Consulting.  The title yields the book’s premise in deceptively simple fashion: a guide to building a seven-figure-per-year solo consulting practice.  Sound crazy?  Two and a half years ago, when I went into business for myself full time, I would have thought so.  Now, it sounds pretty doable to me, if that’s your thing.

This isn’t to say that I have a million-plus-dollar-per-year consulting practice — just that I understand how someone could achieve what he’s talking about in a way that I couldn’t have back then.  Listening to this book gave me cause to reflect on my free agent journey, so I thought I’d write about that today.  (I know there are some who’ve wanted more of these posts anyway.)

When I first took the free agent plunge, I had a fairly vivid picture of how it would work.  I was leaving a job running an IT department, and what I sought was a practice where I helped solve targeted technical problems for a portfolio of clients, rather than solving all sorts of organizational problems for a single entity.  I wanted both to diversify and to become more project-focused.

The Neophyte Techie Free Agent

Beyond that, I didn’t really have a firm grasp of the path to growing profit.  At the time, I assumed that technical consultants did what members of app dev groups did, but for much higher pay (due to transience and achieving better results).  That is, I might do a mixture of application development, architectural consulting, training, mentoring, troubleshooting, etc.

I’d start out charging, say, $100 per hour and then let supply and demand drive up my rates as I pleased more and more clients.  This, I reasoned, was the path to bill rates exceeding $250 per hour.  And, why not?  That seems to be how so-called app dev consultancies work, offering blended rates and billing out their principals and super-awesome-folks at $250/hour.

At the time, I remember chatting with John Sonmez of Simple Programmer.  He and I knew each other through the blogging community and through Pluralsight.  He’d made a similar career play a year or two earlier than I had, so I picked his brain about his journey.  He told me something quite memorable, in that it proved prescient, but was inconceivable to me at the time.  “I want to get away from trading hours for dollars.”  Huh.

How to Actually Reduce Software Defects

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  Have a look around while you’re there and see what some of the other authors have written.

As an IT management consultant, probably the most frequent question I hear is some variant of “how can we get our defect count down?” Developers may want this as a matter of professional pride, but it’s the managers and project managers that truly burn to improve on this metric. Our software does thousands of undesirable things in production, and we’d like to get that down to hundreds.

Almost invariably, they’re looking for a percentage reduction, presumably because there is some sort of performance incentive based on the defect count metric. And so they want strategies for reducing defects by some percentage, in the same way that the president of the United States might challenge his cabinet to shave a couple of points off the unemployment rate in the coming years. The trouble, though, is that this attitude toward defects is actually part of the problem.

The Right Attitude toward Defects

The president sets a goal of reducing unemployment, but not of eliminating it. Why is that? Well, because having nobody in the country unemployed is simply impossible outside of a planned economy – people will quit and take time off between jobs or get laid off and have to spend time searching for new ones. Some unemployment is inevitable.

Management, particularly in traditional, ‘waterfall’ shops, tends to view defects in the same light. The thinking goes: we clearly can’t avoid defects, but if we worked really hard, we could reduce them by half. This attitude is a core part of the problem.

It’s often met with initial skepticism, but what I tell these clients is that they should shoot for having no escaped defects (defects that make it to production, as opposed to ones that are caught by the team during testing). In other words, don’t shoot for a 20% or 50% reduction – shoot for not having defects.

It’s not that shooting for 100% will stretch teams further than shooting for 20% or 50%. There’s no psychological gimmickry to it. Instead, it’s about ceasing to view defects as “just part of writing software.” Defects are not inevitable, and coming to view them as preventable mistakes rather than facts of life is important because it leads to a reaction of “oh, wow, a defect – that’s bad, let’s figure out how that happened and fix it” instead of a reaction of “yeah, defects, what are you gonna do?”

When teams realize and accept this, they turn an important corner on the road to defect reduction.

The Power of CQLinq for Developers

Editorial Note: I originally wrote this post for the NDepend blog. Check out the original here, at their site.  While you’re there, have a look around at some of the other posts and subscribe to the RSS feed if you’d like a weekly post about static analysis.  

I can still remember my reaction to Linq when I was first exposed to it.  And I mean my very first reaction.  You’d think that, for a connoisseur of the programming profession, it would have been “wow, groundbreaking!”  But, really, it was, “wait, what?  Why?!”  I couldn’t fathom why we’d want to merge SQL queries with application languages.

Up until that point, a little after .NET 3.5 shipped, I’d done most of my programming in PHP, C++ and Java (and, if I’m being totally honest, a good bit of VB6 and VBA that I could never seem to escape).  I was new to C#, and, at that time, it didn’t seem much different than Java.  And, in all of these languages, there was a nice, established pattern.  Application languages were where you wrote loops and business logic and such, and parameterized SQL strings were where you defined how you’d query the database.  I’d just gotten to the point where ORMs were second nature.  And now, here was something weird.

But, I would quickly realize, here was something powerful.

The object oriented languages that I mentioned (and whatever PHP is) are imperative languages.  This means that you’re giving the compiler/interpreter a step by step series of instructions on how to do something.  “For an integer i, start at zero, increment by one, continue if less than 10, and for each integer…”   SQL, on the other hand, is a declarative language.  You describe what you want, and let something else (e.g. the RDBMS server) sort out the details.  “I want all of the customer records where the customer’s city is ‘Chicago’ and the customer is less than 40 years old — you figure out how to do that and just give me the results.”

And now, all of a sudden, an object oriented language could be declarative.  I didn’t have to write loop boilerplate anymore!
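
To make that concrete, here’s a minimal sketch of the same “Chicago customers under 40” query written both ways.  The Customer class and the in-memory list are hypothetical stand-ins for a real data source:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Customer
    {
        public string Name { get; set; }
        public string City { get; set; }
        public int Age { get; set; }
    }

    public static class CustomerQueryDemo
    {
        public static void Main()
        {
            var customers = new List<Customer>
            {
                new Customer { Name = "Alice", City = "Chicago", Age = 34 },
                new Customer { Name = "Bob", City = "Denver", Age = 51 }
            };

            // Imperative: spell out the loop and the bookkeeping, step by step.
            var chicagoUnderForty = new List<Customer>();
            foreach (var customer in customers)
            {
                if (customer.City == "Chicago" && customer.Age < 40)
                {
                    chicagoUnderForty.Add(customer);
                }
            }

            // Declarative (Linq): describe the result you want and let the
            // runtime sort out how to produce it.
            var chicagoUnderFortyLinq =
                from customer in customers
                where customer.City == "Chicago" && customer.Age < 40
                select customer;

            foreach (var match in chicagoUnderFortyLinq)
            {
                Console.WriteLine(match.Name);  // prints "Alice"
            }
        }
    }

Both versions produce the same result; the difference is that the Linq version states what you want rather than how to get it.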

Static Analysis and The Other Kind of False Positives

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at the NDepend site.  If you’re a fan of static analysis tools, do yourself a favor and take a look at the NDepend offering while you’re there.

A common complaint and source of resistance to the adoption of static analysis is the idea of false positives.  And I can understand this.  It requires only one case of running a tool on your codebase and seeing 27,834 warnings to color your view on such things forever.

There are any number of ways to react to such a state of affairs, though there are two that I commonly see.  These are as follows.

  1. Sheepish, rueful acknowledgement: “yeah, we’re pretty hopeless…”
  2. Defensive, indignant resistance: “look, we’re not perfect, but any tool that says our code is this bad is nitpicking to an insane degree.”

In either case, the idea of false positives carries the day.  With the first reaction, the team assumes that the tool’s results are, by and large, too advanced to be of interest.  In the second case, the team assumes that the results are too fussy.  In both of these, and in the case of other varying reactions as well, the tool is providing more information than the team wants or can use at the moment.  “False positive” becomes less a matter of “the tool detected what it calls a violation but the tool is wrong” and more a matter of “the tool accurately detected a violation, but I don’t care right now.”  (Of course, the former can happen as well, but the latter more frequently serves as a barrier to adoption, and it’s what I’m interested in discussing today.)

Is this a reason to skip the tool altogether?  Of course not.  And, when put that way, I doubt many people would argue that it is.  But that doesn’t stop people from hesitating or procrastinating when it comes to adoption.  After all, no one is thinking, “I want to add 27,834 things to the team’s to-do list.”  Nor should they — that’s clearly not a good outcome.

With that in mind, let’s take a look at some common sources of false positives and the ways to address them.  How can you ratchet up the signal-to-noise ratio of a static analysis tool so that it is valuable, rather than daunting?
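
As a small preview of what that can look like in practice, many .NET analysis tools honor in-source suppressions, which let a team record “I don’t care right now” deliberately instead of letting it sit in the warning pile.  Here’s a minimal, hypothetical sketch — the class, method, rule category, and check ID below are placeholders, not a prescription:

    using System.Diagnostics.CodeAnalysis;

    public class LegacyReportGenerator
    {
        // The category and check ID stand in for whatever rule your tool
        // actually flags; the Justification records why the team is
        // deferring the fix, so the decision is deliberate and reviewable
        // rather than one of 27,834 ignored warnings.
        [SuppressMessage("Microsoft.Maintainability",
            "CA1502:AvoidExcessiveComplexity",
            Justification = "Legacy method slated for rework; deferring for now.")]
        public void GenerateQuarterlyReport()
        {
            // ... legacy logic ...
        }
    }

The point isn’t the particular attribute — it’s that noise gets turned down explicitly, with a paper trail, rather than by ignoring the tool.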
