DaedTech

Stories about Software


What Metrics Should the CIO See?

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, give NDepend a try — download it and see if your code falls in the dreaded Zone of Pain.

I’ve worked in the programming industry long enough to remember a less refined time.  During this time, the CIO (or CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, if they lacked better measures, they might use that one.

Today, you rarely, if ever, see that happen.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.

If you measure your organization’s test coverage and hold teams accountable for improving it, you can rest assured that they will improve test coverage.  Improved quality, however, remains largely an orthogonal concern.
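To make that divergence concrete, consider a minimal sketch in Java (the InvoiceCalculator class and its test are invented here purely for illustration).  The test below executes every line of the method under test, so a coverage tool dutifully reports the method as covered.  It also asserts nothing, so it can never fail and verifies no behavior.

import org.junit.jupiter.api.Test;

// Hypothetical class under test.
class InvoiceCalculator {
    double calculateTotal(double subtotal, double taxRate) {
        return subtotal + (subtotal * taxRate);
    }
}

class InvoiceCalculatorTest {
    // Every line of calculateTotal executes, so the coverage number
    // goes up.  With no assertions, though, this test can never fail,
    // and it verifies nothing about correctness.
    @Test
    void exercisesCalculateTotalWithoutVerifyingAnything() {
        new InvoiceCalculator().calculateTotal(100.0, 0.08);
    }
}

A team graded on the coverage number alone has every incentive to commit tests like this one.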

The CIO’s Leaky Abstraction

The issue here stems from what I might call a leaky organizational abstraction.  If the CIO came from a software development background, this gets even more thorny.

Consider that a CIO or high level manager generally concerns himself with organizational strategy.  He approves and monitors budgets, signs off on major initiatives, decides on the fate of applications in the application portfolio, etc.  The CIO, in other words, makes business decisions that have a technical flavor.  He deals in profits, losses, revenues, expenses, and organizational politics.

Through that lens, he might look at quality problems across the board as hits to the company’s reputation or drags on the bottom line.  “We’re losing subscribers due to these bugs that happen at each rollout.  We estimate that we lose an additional $10,000 in revenue each month.”  He would then pull the trigger on business solutions: hiring consultants to fix this problem, realigning his org chart, putting off milestones to focus on quality, etc.

But if he dives into the weeds, he sheds the business person’s hat and dons the techie’s.  “Move over, architects,” he says.  “I know how you can fix this at the line level.  I call it ‘automated test coverage,’ and I order you to start doing it.”  In a traditionally organized corporate structure, the CIO begins doing the job of folks in his organization at his peril.



The Relationship Between Team Size and Code Quality

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look at NDepend and see how your code stacks up.

Over the last few years, I’ve had occasion to observe lots of software teams.  These teams come in all shapes and sizes, as the saying goes.  And, not surprisingly, they produce output that covers the entire spectrum of software quality.

It would hardly make headline news to cite team members’ collective skill level and training as a prominent factor in determining quality level.  But what else affects it?  Does team size?  Recently, I found myself pondering this during a bit of downtime ahead of a meeting.

Does some team size optimize for quality?  If so, how many people belong on a team?  This line of thinking led me to consider my own experience for examples.

A Case Study in Large Team Dysfunction

Years and years ago, I spent some time with a large team.  For its methodology, this shop unambiguously chose waterfall, though I imagine that, like many such shops, they called it “SDLC” or something like that.  But any way you slice it, requirements and design documents preceded the implementation phase.

In addition to big up front planning, the codebase had little in the way of meaningful abstractions to partition it architecturally.  As a result, you had the perfect incubator for a massive team.  The big up front planning ensured the illusion of appropriate resource allocation.  And then the tangled codebase ensured that “illusion” was the right word for the notion that people could be assigned tasks without severely impacting one another.

On top of all that, the entire software organization had a code review mandate for compliance purposes.  This meant that someone needed to review each line of code committed.  And, absent a better scheme, this generally meant that the longest-tenured team members did the reviewing.  The same longest-tenured team members who had created an architecture with no meaningful partitioning abstractions.

This cauldron of circumstances boiled up a mess.  Team members bickered over minutiae as code sprawled, rotted, and tangled.  Based solely on this experience, I would conclude that less is more.



How Much Code Should My Developers Be Responsible For?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and see if your code manages to avoid the dreaded Zone of Pain.

As I work with more and more organizations, my compiled list of interesting questions grows.  Seriously – I have quite the backlog.  And I don’t mean interesting in the pejorative sense.  You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.

Rather, these questions interest me at a philosophical level.  They make me wonder about things I never might have pondered.  Today, I’ll pull one out and dust it off.  A client asked me this once, a while back.  They were wondering, “how much code should my developers be responsible for?”

Why ask about this?  Well, they had a laudable enough goal.  They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it.  “We know our codebase has X lines of code, so how many developers would an ideally staffed team comprise?”
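To make their arithmetic concrete with purely hypothetical numbers: if the codebase weighs in at 500,000 lines and you figure each developer can reasonably own 50,000 of them, then 500,000 / 50,000 = 10, suggesting a ten-person team.  Tidy, plausible-looking math.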

In a data-driven way, they asked a great question.  And yet, the reasoning falls apart on closer inspection.  I’ll speak today about why.  Here are some problems with this line of thinking.



Alternatives to Lines of Code

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try — see if your code lies in the Zone of Pain.

It amazes me that, in 2016, I still hear the occasional story of some software team manager measuring developer productivity by committed lines of code (LOC) per day.  In fact, this reminds me of hearing about measles outbreaks.  That it still happens shocks me and creates an intense sense of anachronism.

I don’t have an original source, but Bill Gates is reputed to have offered pithy insight on this topic.  “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”  This cuts right to the point that “more and faster” does not equal “fit for purpose.”  You can write an awful lot of code without any of it proving useful.
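As a purely hypothetical illustration of that point, the two Java methods below do identical work.  A manager counting committed lines would score the author of the first at roughly five times the "productivity" of the author of the second.

import java.util.Arrays;

class LineCountContrast {
    // Verbose version: roughly ten lines of code "produced."
    static int sumVerbose(int[] values) {
        int total = 0;
        int index = 0;
        while (index < values.length) {
            int current = values[index];
            total = total + current;
            index = index + 1;
        }
        return total;
    }

    // Concise version: identical behavior in a single statement.
    static int sumConcise(int[] values) {
        return Arrays.stream(values).sum();
    }
}

By weight, the first method wins.  By fitness for purpose, they tie, and the second costs far less to read and maintain.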

Before heading too far down the management criticism rabbit hole, let’s pull back a bit.  Let’s take a look at why LOC represents such an attractive nuisance for management.

For many managers, years have passed since their days of slinging code (if those days ever existed in the first place).  This puts them in the unenviable position of managing something relatively opaque to them.  And opacity runs afoul of the standard management playbook, wherein they take responsibility for evaluating performance, forecasting, and establishing metric-based incentives.

The Attraction of Lines of Code

Let’s consider a study in contrasts.  Imagine that you took a job managing a team of ditch diggers.  Each day you could stand there with your clipboard, evaluating visible progress and performance.  The diggers that moved the most dirt per hour would represent your superstars, and the ones that tired easily and took many breaks would represent the laggards.  You could forecast milestones by observing yards dug per day and then extrapolating that over the course of days, weeks, and months.  Your reports up to your superiors would practically write themselves.

But now let’s change the game a bit.  Imagine that all ditches were dug purely underground and that you had to remain on the surface at all times.  Suddenly accounts of progress, metrics, and performance all come indirectly.  You need to rely on anecdotes from your team about one another to understand performance.  And you only know whether or not you’ve hit a milestone on the day that water either starts draining or stays where it is.

If you found yourself in this position suddenly, wouldn’t you cling to any semblance of measurability as if it were a life preserver?  Even if you knew it was reductionist, wouldn’t you cling?  Even if you knew it might mislead you?  Such is the plight of the dev manager.

In their world of opacity, lines of code represents something concrete and tangible.  It offers the promise of making their job substantially more approachable.  And so in order to take it away, we need to offer them something else instead.



Striking the Standards Balance: Scale Up without the Bureaucracy

Editorial Note: I originally wrote this post for the Telerik blog.  You can check out the original here, at their site.  While you’re there, have a look around at their extensive product offering.

In a whitepaper I wrote recently, I talked about two hypothetical organizations.  I used them to offer a study in hyperbolic contrast.

The first had a team of five developers, none of whom used the same development practices. In fact, one of them used a completely different programming language.  They tracked defects using email, and they operated less as a group and more as a collection of ships passing in the night.  If a customer issue arose in the code of a person on vacation, well, then that customer just had to wait.

On the other side of the aisle sat a massive enterprise.  Here, the team not only used the same programming language, but the same everything.  The organization restricted access to the machines so that developers couldn’t install anything of their choosing.  Instead of leaving things to chance, an architectural center of excellence controlled design decisions.  And any deviation from any practice required forms in triplicate.

I used this hyperbole to draw contrast between teams that could benefit from standardization and teams crippled by it.  Predictably, scale plays an important role in the distinction.  To scale an enterprise, one must standardize some operational concerns.  But in doing so, the organization risks choking the life out of individual innovation.

How can you standardize while minimizing bureaucracy?  Today, I’d like to offer some tips for striking the balance.
