A Test Coverage Primer for Managers
Managing Blind
Let me see if I can get in your head a little bit. You manage a team of developers, and it goes well at times. But at other times, not so much. Deadlines slide past without a deliverable, weird things happen in production, and you sometimes get déjà vu when a bug comes in that you could swear the team fixed three months ago.
What do you do? It’s a maddening problem because, even though you may once have been technical, you can’t really dive into the code and see what’s going on. Are there systemic problems with the code base, or are you just experiencing normal growing pains? Are the developers in your group painting an accurate picture of what’s going on? Are they all writing good code? Are they writing decent code? Are any of them writing decent code? Can you rely on them to tell you?
As I said, it’s maddening. And it’s hard. It’s like coaching a sports team where you’re not allowed to watch the game. All you can do is rely on what the players tell you is happening out there.
A Light in the Darkness
And then, you light upon a piece of salvation: automated unit tests. They’re perfect because, as all the modern boilerplate will tell you, they’ll help you guard against regressions, prevent field defects, keep your code clean and modular, and plenty more. You’ve got to get your team to start writing tests and start writing them now.
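If it’s been a while since you were hands-on, a unit test is nothing exotic. Here’s a minimal sketch, using Python and its built-in unittest module purely as an illustration; the apply_discount function and its tests are hypothetical stand-ins for your team’s actual code.

    import unittest

    def apply_discount(price, percent):
        """Return the price after a percentage discount, rounded to cents."""
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTests(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(49.99, 0), 49.99)

    if __name__ == "__main__":
        unittest.main()  # runs every test and fails loudly if any regress

The appeal is that checks like these run automatically on every build, so a regression announces itself immediately instead of three months later.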
But you weren’t born yesterday. Just writing tests isn’t sufficient. The tests have to be good. And so you light upon another piece of salvation: measuring code coverage. This way, not only do you know that developers are writing tests, but you also know that you’re covered. Literally. If 95% of your code base is covered, that probably means that you’re, like, 95% good, right? Okay, you realize it’s maybe not quite that simple, but, still, it’s a nice, comforting figure.
Conversely, if you’re down at, say, 20% coverage, that’s an alarming figure. That means that 80% of your code is not covered. It’s the Wild West, and who knows what’s going on out there? Right? So the answer becomes clear. Task a team member with instrumenting your build to measure automated test coverage and then dump it to a readout somewhere that you can monitor.
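For concreteness, here’s a minimal sketch of what that instrumentation amounts to, using Python’s coverage.py library purely as an illustration; the classify function is a hypothetical stand-in, and whatever stack your team uses will have an equivalent tool (JaCoCo for Java, coverlet for .NET, and so on).

    import unittest
    import coverage

    cov = coverage.Coverage()  # could be scoped with source=["your_package"]
    cov.start()

    # --- code under test; normally this lives in your application modules ---
    def classify(n):
        if n < 0:
            return "negative"  # uncovered: no test below exercises this branch
        return "non-negative"

    class ClassifyTests(unittest.TestCase):
        def test_non_negative(self):
            self.assertEqual(classify(5), "non-negative")

    unittest.main(exit=False, argv=["coverage-demo"])

    cov.stop()
    percent = cov.report()  # prints a per-file table and returns the total
    print(f"Total coverage: {percent:.0f}%")

That returned percentage is the figure that ends up on your dashboard readout; here it comes in below 100% because nothing ever exercises the negative branch.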
Time to call the code quality issue solved (or at least addressed), and move on to other matters. Er, right?
I originally wrote this post for the NDepend blog. Click here to read the rest.