DaedTech

Stories about Software

What a Software Audit Means for You

Editorial Note: I originally wrote this post for the SmartBear blog.  Check out the original, here, at their site.  While you’re there, have a look around at some of the other authors’ posts.

“We’re being audited.”

Now there’s a sentence to strike fear into anyone’s heart. The most famous and iconic example of an audit is the dreaded IRS audit. This audit is the IRS’s way of saying, “you did something wrong during the course of an extremely byzantine process we force on you, and now we’re going to make your life miserable by going through every inch of your life with a fine-toothed comb.” Or at least, that is the reputation.

Now, while I’m not here to defend the US tax code nor bureaucracy in general, I will say that this is not exactly what the IRS is saying. Rather, they’re saying, “we’ve noticed inconsistencies between what you’ve reported and what we’ve observed elsewhere, and we are going to investigate further.” And, while this investigation does, in fact, mean a lot of unpleasantness for you, that is not the purpose. An audit, per its dictionary definition, is “a methodical examination and review.”

Audits at Your Place of Business

Considered this way, the word loses some of its onerousness, since it is not deliberately punitive. But it’s still not exactly a cause to throw a party – it means that someone is about to come take a very close look at something you’ve done, with exacting rules and detailed expectations. You’re about to be under a microscope.

Having established that an audit is strict but not punitive, what are the implications for you, as a software developer? What does it mean for your code or software to be audited? If your boss or a colleague utters those three fateful words, “we’re being audited,” what does this mean for your organization and for you?

Well, I’m a consultant, so you can probably predict my answer: “it depends.” “We’re being audited” is informative, but it’s not quite enough information. There are, after all, different kinds of audits, conducted for different purposes. Let’s consider a few of them.

Is Your Source Control Usage Conducive to Code Review?

Editorial Note: I originally wrote this post for the SmartBear blog.  Head over to their site and check out the original.  While you’re there, have a look around at posts by some other authors as well.

I can think back to times in my career that the source control that I was using (or not using) made me a cranky, unhappy human being.  Years and years ago, there was the time that a coworker accidentally left all of the files in the codebase checked out through Visual SourceSafe and went on vacation.  I distinctly remember enlisting a sysadmin and the two of us going into the source control server with admin credentials and hacking at settings until we could undo that and I could work.  You see, Visual SourceSafe employed a pessimistic locking strategy by which his checkout meant I couldn’t do anything with the code.

There was also the time, a few years later, when I was suffering through a project that used Rational ClearCase.  On a normal day, delivering code to the official branch or stream or whatever took half an hour.  If I had to work from home, it took all morning.

And then there was the time that I was switched onto a project with no source control at all.  The C source code was stored on a production server — a production server that controlled physical machinery in the real world.  To “check things in,” you would modify the C code, turn off the physical machine, load the modified kernel modules, turn the machine on, and then revert real quick if things started blowing up.  I’m not kidding.  This was the commit/rollback strategy when I arrived (I did actually migrate this).

Tools Affect Behavior

These things make for fun war stories, but they also serve to illustrate how source control dictates behavior.  With Visual SourceSafe, we implemented some kind of out-of-band email protocol to remind people to check in.  With Rational ClearCase, I set up a homegrown SVN repository for day-to-day version control and delivered/integrated only a few times per month.  With the machinery-controlling server, there was extensive historical commenting in every single source file.  These tools spur you toward behaviors, and, in these cases, toward wasteful or bad behaviors.

For the examples I listed, I was steered toward useless process, steered away from continuous integration, and steered toward neurotic documentation.  But the steering can apply to almost anything, and that includes having a healthy code review process.

There have been studies conducted that demonstrate the importance of code review.  It is uniquely effective when it comes to catching defects earlier rather than later, and it promotes collective code ownership, thus reducing “bus factor” risk.  I could go on, but let’s take it as axiomatic in this post that you want to do it.

Does your source control situation make it easy for you to conduct code reviews?  Or does it discourage you, making life tough if you do them, and thus making you less likely to do them?  If it’s the latter, that’s not a good situation.

How to De-Brilliant Your Code

Three weeks, three reader questions.  I daresay that I’m on a roll.  This week’s question asks about what I’ll refer to as “how to de-brilliant your code.”  It was a response to this post, in which I offered the idea of a distinction between maintainable code and common code.  The lead-in premise was that of supposedly “brilliant” code, which was code that was written by an ostensible genius and that no one but said genius could understand.  (My argument was/is that this is usually just bad code written by a self-important expert beginner).

The question is as follows, verbatim.

In your opinion, what is the best approach to identify que “brilliant” ones with hard code, to later work on turn brilliant to common?

Would be code review the best? Pair programming (seniors could felt challenged…)?

Now, please forgive me if I get this wrong, but because of the use of “que” where an English speaker might say “which”, I’m going to infer that the question submitter speaks Spanish as a first language.  I believe the question is asking after the best way to identify and remediate pockets of ‘brilliant’ code.  But, because of the ambiguity of “ones,” it could also be asking about identifying humans that tend to write ‘brilliant’ code.  Because of this, I’ll just go ahead and address both.

Find the Brilliant Code

First up is identifying brilliant code, which shouldn’t be terribly hard.  You could gather a quorum of your team together and see if there are pockets of code that no one really understands, or else you could remove the anchoring bias of being in a group by having everyone assess the code independently.  In either case, a bunch of “uh, I don’t get it” probably indicates ‘brilliant’ code.  The group aspect of this also serves (probably) to guard against an individual failing to understand simply by virtue of being too much of a language novice (“what’s that ++ after the i variable,” for instance, indicates that the problem is with the beholder rather than the original developer).

But, even better, ask people to take turns explaining what methods do.  If people flounder or if they disagree, then they obviously don’t get it, self-reporting notwithstanding.  And having team members who don’t understand pockets of code is, ipso facto, a problem.

An interesting side note at this point: whether this illegible code is “brilliant” or “utter spaghetti” is going to depend a lot more on knowledge of who wrote it than on anything else.  “Oh, Alice wrote that — it’s probably just too sophisticated for our dull brains.  Oh, wait, you were reading the wrong commit, and it’s actually Bob’s code?  Bob’s an idiot — that’s just bad code.”

De-Brilliant The Codebase

Having identified the target code for de-brillianting, flag it somewhere for refactoring: Jira, TFS, that spreadsheet your team uses, whatever.  Just make a note and queue it as work — don’t just dive in and start mucking around in production code, unless that’s a team norm and you have heavy test coverage.  Absent these things, you’re creating risk without buy-in.

Leave these things in the backlog for prioritization on the “eventually” pile, but with one caveat.  If you need to be touching that code for some other reason, employ the boy-scout rule and de-brilliant it, as long as you’re already in there.  First, though, put some characterization tests around it, so that you have a chance to know if you’re breaking anything.  Then, do what you need to and make the code easy to read; when you’re done, the same “tell me what this does” question should be easy for your teammates to answer.
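
If characterization tests are unfamiliar, the idea is simply to pin down what the code currently does (not what it is supposed to do) before you change anything.  Here’s a minimal sketch in Python; the calculate_discount function is a hypothetical stand-in for whatever ‘brilliant’ code you’re about to untangle, and the expected values come from running the existing code and recording what it returns.

    import unittest

    # Hypothetical stand-in for the "brilliant" code you're about to refactor.
    # In real life, you'd import the existing function untouched.
    def calculate_discount(order_total, customer_years):
        return order_total * (0.05 if customer_years > 2 else 0.01) + (1 if order_total > 100 else 0)

    class CalculateDiscountCharacterizationTests(unittest.TestCase):
        """Pin down current behavior, whatever it happens to be, before refactoring."""

        def test_long_time_customer_with_small_order(self):
            # Expected value was recorded by running the code, not derived from a spec.
            self.assertAlmostEqual(calculate_discount(50, 5), 2.5)

        def test_new_customer_with_large_order(self):
            self.assertAlmostEqual(calculate_discount(200, 1), 3.0)

    if __name__ == "__main__":
        unittest.main()

If a later refactoring changes any of those recorded values, the tests fail loudly, which is exactly the safety net you want while de-brillianting.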

De-brillianting the codebase is something that you’ll have to chip away at over the course of time.

De-Brilliant The Humans

I would include a blurb on how to find the humans, but that should be pretty straightforward — find the brilliant code and look at the commit history.  You might even be able to tell simply from behavior.  People that talk about using 4 design patterns on a feature or cramming 12 statements into a loop condition are prime candidates.
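
If you want to make that archaeology concrete, something along these lines works against any git repository (the file path here is hypothetical); it just tallies who has committed to the suspect file:

    import subprocess
    from collections import Counter

    # Hypothetical path to the "brilliant" module you identified earlier.
    SUSPECT_FILE = "src/billing/discount_engine.py"

    # Ask git for the author of every commit that touched the file.
    authors = subprocess.run(
        ["git", "log", "--follow", "--format=%an", "--", SUSPECT_FILE],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    # Whoever dominates this list is your leading candidate for de-brillianting.
    for author, commit_count in Counter(authors).most_common():
        print(f"{commit_count:4d}  {author}")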

The trick isn’t in finding these folks, but in convincing them to stop it.  And that is both simple to understand and hard to do.

During my undergrad CS major many years ago, I took an intro course in C++.  At one point, we had to do a series of pretty mundane review exercises that would be graded automatically by a program (easy things like “write a for loop”).  Not exactly the stuff dreams are made of, so some people got creative.  One kid removed literally every piece of whitespace from his program, and another made some kind of art with indentations.  When people are bored, they seek clever things to do, and the result is ‘brilliant’ code.

The key to de-brillianting thus lies in presenting them with the right challenge, often via constraints or restrictions of some kind.  They do it to themselves otherwise — “I’ll write this feature without using the if keyword anywhere!”

The Right Motivation

Like I said, a simple solution does not necessarily imply an easy solution.  How does one challenge others into writing the kind of straightforward code that is readable and maintainable?

Code review/pairing presents a possible solution.  Given the earlier “can others articulate what this does” metric, the team can challenge programmer-Einstein to channel that towering intellect toward this purpose.  That may work for some, but other brilliant programmers might not consider that to be a worthwhile or interesting challenge.

In that case, automated feedback through static analysis might do the trick.  FxCop, NDepend, SonarQube, and others can be installed and configured to steer things in the general direction of readability.  Writing code that complies with all warning thresholds of such tools actually presents quite a challenge, since so much of programming is about trade-offs.  Now, a sufficiently determined clever coder could still invent ways to write hard-to-read code, but that would be a much more difficult task when he’d get slapped by the tool for chaining 20 LINQ statements onto a single line or whatever.
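
Each of those tools has its own rules and configuration, so rather than pretend to document any of them here, here is a deliberately toy sketch in Python of the general idea: an automated check that nags about a couple of readability smells and fails the build when it finds them.  The thresholds and the smells themselves are purely illustrative.

    import sys

    # Illustrative thresholds; real tools like SonarQube expose these as configurable rules.
    MAX_LINE_LENGTH = 120
    MAX_CHAINED_CALLS = 5

    def check_readability(path):
        """Return (line_number, complaint) tuples for one source file."""
        complaints = []
        with open(path, encoding="utf-8") as source:
            for number, line in enumerate(source, start=1):
                if len(line.rstrip("\n")) > MAX_LINE_LENGTH:
                    complaints.append((number, "line is too long"))
                if line.count(").") >= MAX_CHAINED_CALLS:
                    complaints.append((number, "too many chained calls on one line"))
        return complaints

    if __name__ == "__main__":
        failed = False
        for path in sys.argv[1:]:
            for number, complaint in check_readability(path):
                failed = True
                print(f"{path}:{number}: {complaint}")
        # A non-zero exit code lets the build server fail the build on violations.
        sys.exit(1 if failed else 0)

Hook something like that (or, realistically, the real tools) into the build, and deliberately clever code stops being fun to write, because the feedback is immediate and impersonal.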

Of course, probably the best solution is to work with the sort of people who recognize that demonstrating their cleverness takes a backseat to being a professional.  They can do that in their spare time.

If you have a question you’d like to hear my opinion on, please feel free to submit.

How Collaboration Humanizes the Enterprise

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original, here, at their site.  While you’re there, take a look at their product offering and blog posts by their various authors.

I’ve spent enough time walking the halls of large-ish to massive organizations to have formed some opinions and made some observations. If I had to characterize the motor that drives these beasts, I’d say it consists of two main components: intense risk aversion and an obsession with waste elimination that sits somewhere on a spectrum between neurotic and quixotic.

Let me explain.

Large organizations all started as small organizations, and all of them survived organizational accretion, besting competitors, dodging bad breaks, and successfully scaling to the point where they have a whole lot to lose. It is this last concern that drives the risk aversion; upstarts are constantly trying to unseat them and rent-seekers trying to sue them because they’re sitting on a pretty nice pot of gold. It is the scaling that motivates the waste elimination. Nothing scales perfectly and few things scale well, so organizations have to become insanely efficient in a number of ways to combat the natural downturns in efficiency caused by scale.

Risk Minimizing and Process Efficiency

With risk minimizing and waste elimination as near-universal goals, organizations tend to do some fairly predictable and recognizable things. Risk elimination takes the form of checks and balances, with pockets of “need to know” being created to isolate problems. For instance, a “security compliance” group may be created to review work product independently from the people producing the work, and it may even go so far as to seek outside certification. This sort of orthogonality and redundancy makes it less likely that the organization will be sued or compromised.

Unfortunately, though, redundancy exacerbates the other struggle, which is to eliminate waste in the name of efficiency. It’s not exactly efficient to have two separate groups spend time going over the same product and doing the same things, but with slightly altered focus. The organization compensates for this with hyper-specialty and process.

Does GitHub Enhance the Need for Code Review?

Editorial Note: I originally wrote this post for the SmartBear blog.  You can check out the original here, at their site.  Take a look around while you’re there and check out their products and other posts.

In 1999, a man named Eric S. Raymond published a book called “The Cathedral and the Bazaar.”  In this book, he introduced a pithy phrase, “given enough eyeballs, all bugs are shallow,” that he named Linus’ Law after Linux creator Linus Torvalds.  Raymond was calling out a dichotomy that existed in the software world of the 1990s, and he was throwing his lot in with the heavy underdog at the time, the bazaar.  That dichotomy still exists today, after a fashion, but Raymond and his bazaar are no longer underdogs.  They are decisive victors, thanks in no small part to a website called GitHub.  And the only people still duking it out in this battle are those who have yet to look up and realize that it’s over and they have lost.

Cathedrals and Bazaars in the 1990s

Raymond’s cathedral was heavily planned, jealously guarded, proprietary software.  In the 1990s, this was virtually synonymous with Microsoft, but it certainly included large software companies, relational database vendors, shrink-wrap software makers, and just about anyone doing it for profit.  There was a centrally created architecture, and it was executed in top-down fashion by all of the developer cogs in the for-profit machine.  The software would ship maybe every year, and in the run-up to that time, the comparatively few developers with access to the source code would hunt down as many bugs as they could ahead of shipping.  Users would then find the rest, and they’d wait until the next yearly release to see fixes (or, maybe, they’d see a patch after some months).  The name “cathedral” refers to the irreducible nature of a medieval cathedral — everything is intricately crafted in all-or-nothing fashion before the public is admitted.

The bazaar, on the other hand, was open source software, represented largely at the time by Linux and Apache.  The source code for these projects was, obviously, free for anyone to look at and modify over the nascent internet.  Releases there happened frequently, and the work was crowd-sourced as much as possible.  When bugs were found following a release, the users could and did hunt them down, fix them, and push the fixes back to the main branch of the source code very quickly.  The cycle time between discovery and correction was much, much smaller.  This model was called the bazaar because of the comparatively bustling, freewheeling nature of the cooperation; it resembled a loud, spontaneously organized marketplace that was surprisingly effective at regulating commerce.
