DaedTech

Stories about Software

Starting to Unit Test: Not as Hard as You’d Think

I happened to read a post by Dror Helper the other day, in which he said:

I believe TDD and “unit tests” has been done a great injustice by not giving it a cooler name – preferable one that doesn’t have the word “test” in it – because it’s a PR disaster!

Wow, that had never occurred to me, and yet he’s spot on. “Unit test” is a wretched name for anything. It seems somehow to combine all the worst elements of propagating uncertainty in arithmetic with lab measurements and double checking all of your answers on a scantron exam. I just bored myself half to death typing that sentence, so it’s little wonder that the concept of a “unit test” is often met with a visceral “ugh.” I mean, we could at the least call the test suite a “verification checklist” or something, implying that you’re marking progress as you complete things. It’s not exactly a skydiving trip, but it has to beat “unit test.” But I digress.

The lack of appeal of the name, the feeling of already being pressed for time without taking something else on, and the natural resistance to the unknown all create barriers to entry when it comes to unit testing. In order to get started yourself, or especially to convince those around you to do the same, it’s necessary to overcome those barriers. I have a good bit of experience with this in a variety of environments and from a variety of roles. Over the last year, I did a string of blog posts that were essentially my talking scripts for a series of PowerPoint/demo talks I gave. These were meant to be an introduction to unit testing.

The series actually became pretty long, and as I was finishing it, I decided to put it into E-Book format. So, with the help of my editor, we turned it into a fluid book, and with the help of my publisher, we published it in all major E-Book formats for a cost of $4.99. Here is the book on the publisher’s site, and here is a link directly to Amazon for Kindle readers (full disclosure: I’m experimenting with affiliate links, and that’s why I’m specifically linking Amazon directly).

The title of the book is “Starting to Unit Test: Not as Hard as You Think,” and I feel that the title really captures it. My goal was to trade practice purity for reducing the barriers to entry. In other words, the message was, “don’t feel like you have to start full bore with 100% coverage and TDD, treating anything less as a failure — if you just introduce a few tests into a few places in your code, consider that a win.” I also did something else that I haven’t seen others do as much, which is to explain that some types of frameworks and code present unit testing nightmares, and that newbie unit testers should avoid them until they reach a higher belt. What I’d like to see people take away from this book is a feeling of satisfaction from experiencing a sequence of small but real wins.

For you mythology buffs, this is Sisyphus actually making it to the top of the hill.

By and large, you could get this content by reading through my blog posts on the subject, but if you want it in a compact format, here it is. Not to mention, if you’re trying to sell your team on the merits of unit testing, a book is probably going to have more cachet than a series of blog posts, and the one link to the book is going to be easier to distribute than the giant collection of links for the individual posts/chapters. So get it if you’re interested, and encourage your team to get it if you’re trying to introduce the concept, especially since I close out the book by making a business case for unit testing (making the case for best practices is actually going to be a theme of mine in the future).

Enjoy, and thanks for reading (the blog and, hopefully, the book)!

Intro to Unit Testing 10: The Business Value of Unit Tests

Backstory

I worked for a consulting firm for a while. We didn’t make anything particularly exciting — line of business applications and the like was about the extent of it. The billing model for clients was dead simple and resembled the way that lawyers charge; consultants had an hourly rate and we kept diligent track of our time, to the nearest quarter hour. There was a certain feel-good element to this oversimplification of knowledge work in the same way that it’s pleasant to lean back, watch Superman defeat Lex Luthor and delight in a PG world where Good v Evil grudge matches always end with Good coming out the victor.

It’s pleasant to think that writing software has the predictable, low-thought cadence of an activity like chopping wood where each 15 minutes spent produces a fairly constant amount of value to the recipient of the labor. (Cue background song: Lou Reed, “Perfect Day.”) Chop for 15 minutes, collect $3, hand over X chopped logs. Chop for 1 hour, collect $12, hand over 4X chopped logs. Write software for 15 minutes, produce a working, 15 LOC application for $25. Write software for 1 hour, produce a working, 60 LOC application for $100. Oh, such a perfect day.

When I started at the company, I asked some people if they wrote unit tests. The answer was generally ‘no’ and the justification for this was that you’d have to run it by the client and the client most likely wouldn’t want to pay for you to write unit tests. What they meant by this was that since we billed in quarter hour increments and supplied invoices with detailed logs of all activity, it’d be sort of hard to sneak in 15 minutes of writing automated test code. Presumably, the fear was that the client would say, “what’s this ‘unit testing’ stuff and why did you do it when you didn’t say anything about it?” I say “presumably” because this wasn’t the reason people didn’t unit test at this company, just like whatever excuse they have at your company isn’t the real reason for not unit testing there. The real reason is usually not knowing how to do it.

Why did I start out with this anecdote and its centerpiece of the quarter hour billing and development cadence? Well, simply because software development is a creative exercise and far too spastic to flow along smoothly in a low viscosity stream of lines of code per minute. You may sit and stare blankly at a computer screen while contemplating design for half an hour, code for 4 minutes, stare blankly again for an hour, code for 20 minutes, and then finish the product. So, 24 minutes — is that a billable half hour or 15 minutes? Closer to 30, I suppose. Do you count the blank staring? On the one hand, this was the real work — the knowledge work — in a way that the typing certainly wasn’t. So should you bill 1.5 hours instead and just count the typing as a brainless exercise? Or should you bill 2 hours because the work is a gestalt? I personally think that the answer is obvious and the gestalt billing model cuts right to the notion that software development is a holistic exercise that involves delivering a working product, and the breakdown may include typing, thinking, white-boarding, searching Stack Overflow, debugging, squinting at a GUI, talking to another developer, going for a 5 minute walk for perspective, running a static analysis tool, tracking down a compiler warning, copying 422 files to a target directory, and yes, my friend, unit testing.

Those are all things that you do as part of writing good software. And, in a consulting paradigm, you wouldn’t cut one of them out and say, “the client wouldn’t want to pay for that,” because the client doesn’t know what it’s talking about when you’re under the software-writing hood — that’s why they’re paying you. They wouldn’t want to pay for “searching Stack Overflow” or “squinting at the GUI” either, but you don’t refuse to do those things when you’re writing software. And so refusing to unit test for this same reason is a cop-out. When a younger developer at that firm asked me why I wrote unit tests and how I accounted for them in my billing, this was essentially the argument I gave — I asked him how he accounted for the time he spent compiling, debugging, and running the application and, bright guy that he is, he understood what I was saying immediately.

Core Business Value

This may seem like a roundabout and long introduction to this chapter, but it really cuts to the core of the business value proposition for unit tests. During development, why do developers compile, run, and debug? Well, they do it to see if their code is doing what they think it should do. Write some code, then make sure it’s doing what you expect. So why write unit tests? To make sure your code is doing what you expect, and to make sure it keeps doing what you expect via automation. The core business value of unit tests is that they serve as progress markers, sign-posts, and guard rails on the road to an application that does what you expect.

Unit tested applications are more predictable and better documented than their non-unit tested counterparts (assuming the same amount of API documentation and commenting are done), and there is an enormous amount of business value in predictability and clarity of intent. With a good unit test suite, you’ll know in minutes if you’ve introduced a regression bug. Without that unit test suite, when will you know? When you run the application GUI? When QA runs it a week later? When the customer runs it a month later? When something randomly goes haywire a year later? Each one of these delays becomes exponentially more expensive.

That’s not the only value-add from a business perspective (and I’ll list some other ones next), but it’s the main one, as far as I’m concerned. It also explains why the notion that you need to carve out some extra time for unit tests and figure out whether the customer wants them or not is preposterous. Do you think the customer is going to get angry if you explain that part of your development process is to execute the code you just wrote to make sure it doesn’t crash? If the answer to that is “no, of course not, that’s ridiculous” then you also have the answer to whether or not a customer would care if you happened to automate that process.

Of course, one thing to bear in mind is that a customer may not want to pay for you to learn on the job to unit test, and that’s a fair point. But if the customer (or your company, internally, if you aren’t a consultant) doesn’t want to foot the bill for this, then you should strongly consider picking it up on your own and then switching customer/company if they don’t buy in to something as fundamental as automating predictability. Unit tests are the software equivalent of accountants practicing double entry bookkeeping, doctors washing their hands, electricians turning power back on before leaving and plumbers doing the same with the water. Imagine if your plumber sweat welded a joint for your new shower, sized it up and then said, “meh, I’m sure it’s fine” and left without ever running the water. That’s what tens of thousands of us do every day when we just assume some piece of code works because it worked a month ago and we don’t remember touching it since then. Ship it? Meh, sure, whatever — it’s probably fine. The business value of unit tests is a stronger assurance that we know what’s going on than “meh, sure, whatever.”

Ancillary Business Value

Here are some other ways in which unit tests add value to the business beyond confirming that the application behaves as expected.

First, unit tests tend to serve nicely as documentation. This may sound strange at first; you’re probably thinking, “how is a bunch of code documentation when we have a whole activity associated with documenting our code?” Well, the fact of the matter is that documentation in the form of writeups, code comments, instruction manuals, etc. tends to get out of date as the product ages. Unit tests, however, are never out of date because if they were, the build would fail (or at least you’d see red when you ran them) and you’d be forced to go back and “fix the documentation.” If you keep your unit test methods clean and give them good names as described in earlier chapters of this series, they’ll also read more like a book than like code, and they’ll document purpose and intended behavior of the system.

Unit tests also guard against regression. When you write the tests, you’re confirming that the software does what you expect it to at that moment. But what about later? Maybe later you forget what you intended in that moment or decide that you intended something different and you change the code. Will it still work? In a lot of legacy code bases, the answer to that question is, “yikes — who knows?” With a thoughtfully unit tested code base, you can rig it so that a test goes red if a design assumption that you made is no longer true. For instance, say you write some method with the intention that it never return null, and say that eventually you and your teammates build on this method and its assumed post-condition, grabbing values returned by the method and never checking for null before dereferencing them. If someone later modifies that method and adds a condition in which it returns null, the only thing standing between them and introducing a regression bug is a unit test that fails if that method returns null.
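
To make that guard rail concrete, here is a minimal sketch of such a test in the MSTest style used elsewhere in this book (the OrderService class and its GetOrders method are hypothetical names invented for the example, not code from the series):

[TestMethod]
public void GetOrders_Never_Returns_Null_For_Customer_With_No_Orders()
{
	var service = new OrderService();

	// Callers dereference this result without null checks, so "never returns null"
	// is a design assumption worth pinning down in an automated test.
	var orders = service.GetOrders("customer-with-no-orders");

	Assert.IsNotNull(orders);
}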

The practice of automated unit testing has after-the-fact benefits such as documentation and guards against regression bugs, but it also helps during the course of development by having a positive impact on your design. I’ve long been a fan of and have previously linked to this excellent talk by Michael Feathers called “The Deep Synergy Between Testability and Good Design.” The general idea is that writing code with the knowledge that you’re going to be writing tests for it (or practicing TDD) leads you to write small, factored classes and methods that are loosely coupled and that this practice, in turn, creates flexible and maintainable code. Or, consider the converse and think of how hard it is to write unit tests for giant, procedural methods and classes. Unit tests make it harder to do things that make your code awful.

Lastly, I’ll throw in a benefit that summarizes my take on this entire subject and really drives things home. It’s a huge bit of editorializing, but I feel somewhat entitled to do so in my own conclusion. I believe that a serious piece of value added by unit testing is that it lends you or your group legitimacy and credibility. In this day and age, the question, “should you unit test your code” is basically considered to be settled case law in the industry. So the question, to a large extent, boils down to whether you write tests or whether you have excuses, legitimate or otherwise (and there are legitimate ones, such as “I don’t know how, yet.”) Don’t be in the camp that has excuses.

Forget justifying what you or your organization has done up to this point, and imagine yourself as a customer of software development. You’ve got a budget, and you’re looking to have some software written that you don’t have time, yourself, to write. All other things being equal, which group do you hire? Do you hire a group that responds to “do you unit test” with “no, we don’t think our customers would want that?” How about a group that responds with “well, there’s this database and this GUI and sometimes there’s hardware, so we really can’t?” Or do you hire a group that responds with, “we sure do, would you like to see some samples?” I bet it’s the last one, if you’re honest with yourself.

So be that last group. Add value to your users and your business. Write good software and consider your design carefully, and, just as importantly, automate the process of ensuring your software does what you think it does. Your credibility and the credibility of your software are at stake.

Intro to Unit Testing 9: Tips and Tricks

This, I’ve decided, is going to be the penultimate post in this series. I had originally targeted a 9 part series, finishing up with a “tips and tricks” post, the way you often see with technical books. But I think I’m going to add a 10th post to the series where I make the business case for unit testing. That seems like a more compelling wrap up to the series. So, stay tuned for one more after this.

This post is going to be a collection of tips and tricks for new and veteran unit testers.

Structure the test names for readability beyond just descriptive names.

The first step in making your unit tests true representations of business statements is to give them nice, descriptive names. Some people do this with “given, when, then” verbiage and others are simply verbose and descriptive. I like mine to read like conversational statements, and this creates almost comically long names like “GetCustomerAddress_Returns_Null_When_Customer_Is_Partially_Registered.” I’ve been given flak for things like this during my career and you might get some as well, but, do you know what? No one is ever going to ask you what this test is checking. And if it fails, no one is ever going to say “I wonder why the unit test suite is failing.” They’re going to know that it’s failing because GetCustomerAddress is returning something other than null for a partially registered customer.
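
As a concrete illustration, a test carrying that name might look something like the sketch below (the Customer class, its IsRegistrationComplete flag, and the placement of GetCustomerAddress are hypothetical stand-ins; only the test name comes from the discussion above):

[TestMethod]
public void GetCustomerAddress_Returns_Null_When_Customer_Is_Partially_Registered()
{
	// A customer that has started but not completed registration
	var customer = new Customer() { IsRegistrationComplete = false };

	var address = customer.GetCustomerAddress();

	// Partially registered customers should have no address on file yet
	Assert.IsNull(address);
}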

When you’re writing unit tests, it’s easy to gloss over names of the tests the way you might with methods in your production code. And, while I would never advocate glossing over naming anywhere, you especially don’t want to do it with unit tests because unit test names are going to wind up in a report generated by your build machine or your IDE, whereas production methods won’t unless you’re using a static analysis tool with reporting, like NDepend.

But it goes beyond simply giving tests descriptive names. Come up with a good scheme for having your tests be as readable as possible in these reports. This is a scheme from Phil Haack that makes a lot of sense. I adopted a variant of it after reading the post, and have been using it to eliminate duplication in the names of my unit tests. This consideration of where the test names will be read, how, and by whom is important. I’m not being more specific here simply because how you do this exactly will depend on your language, IDE, testing framework, build technology etc. But the message is the same regardless: make sure that you name your tests in such a way to maximize the value for those who are reading reports of the test names and results.

Create an instance field or property called “Target”

This one took a while to grow on me, but it eventually did and it did big time. Take a look at the code below, originally from a series I did on TDD:

[TestClass]
public class BowlingTest
{
	[TestClass]
	public class Constructor
	{

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Initializes_Score_To_Zero()
		{
			var scoreCalculator = new BowlingScoreCalculator();

			Assert.AreEqual(0, scoreCalculator.Score);
		}
	}

	[TestClass]
	public class BowlFrame
	{
		private static BowlingScoreCalculator Target { get; set; }

		[TestInitialize()]
		public void BeforeEachTest()
		{
			Target = new BowlingScoreCalculator();
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void With_Throws_0_And_1_Results_In_Score_1()
		{
			var frame = new Frame(0, 1);
			Target.BowlFrame(frame);

			Assert.AreEqual(frame.FirstThrow + frame.SecondThrow, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void With_Throws_2_And_3_Results_In_Score_5()
		{
			var frame = new Frame(2, 3);
			Target.BowlFrame(frame);

			Assert.AreEqual(frame.FirstThrow + frame.SecondThrow, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Sets_Score_To_2_After_2_Frames_With_Score_Of_1_Each()
		{
			var frame = new Frame(1, 0);
			Target.BowlFrame(frame);
			Target.BowlFrame(frame);

			Assert.AreEqual(frame.Total + frame.Total, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Sets_Score_To_Twenty_After_Spare_Then_Five_Then_Zero()
		{
			var firstFrame = new Frame(9, 1);
			var secondFrame = new Frame(5, 0);

			Target.BowlFrame(firstFrame);
			Target.BowlFrame(secondFrame);

			Assert.AreEqual(20, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Sets_Score_To_30_After_Strike_Then_Six_And_Four()
		{
			var firstFrame = new Frame(10, 0);
			var secondFrame = new Frame(6, 4);

			Target.BowlFrame(firstFrame);
			Target.BowlFrame(secondFrame);

			Assert.AreEqual(30, Target.Score);
		}
	}

	public class BowlingScoreCalculator
	{
		private readonly Frame[] _frames = new Frame[10];

		private int _currentFrame;

		private Frame LastFrame { get { return _frames[_currentFrame - 1]; } }

		public int Score { get; private set; }

		public void BowlFrame(Frame frame)
		{
			AddMarkBonuses(frame);

			Score += frame.Total;
			_frames[_currentFrame++] = frame;
		}

		private void AddMarkBonuses(Frame frame)
		{
			if (WasLastFrameAStrike()) Score += frame.Total;
			else if (WasLastFrameASpare()) Score += frame.FirstThrow;
		}

		private bool WasLastFrameAStrike()
		{
			return _currentFrame > 0 && LastFrame.IsStrike;
		}
		private bool WasLastFrameASpare()
		{
			return _currentFrame > 0 && LastFrame.IsSpare;
		}
	}
}
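
The Frame class itself isn’t included in the listing above. A minimal sketch consistent with how the tests and the calculator use it might look like the following; the property logic is inferred from usage, so treat it as an assumption rather than the original implementation:

public class Frame
{
	private const int MarkValue = 10;

	public int FirstThrow { get; private set; }
	public int SecondThrow { get; private set; }

	// Total pins knocked down in the frame
	public int Total { get { return FirstThrow + SecondThrow; } }

	public bool IsStrike { get { return FirstThrow == MarkValue; } }
	public bool IsSpare { get { return !IsStrike && Total == MarkValue; } }

	public Frame(int firstThrow, int secondThrow)
	{
		FirstThrow = firstThrow;
		SecondThrow = secondThrow;
	}
}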

If you look at the nested test class corresponding to the BowlFrame method, you’ll notice that I have a class level property called Target and that I have a method called “BeforeEachTest” that runs at the start of each test and instantiates Target. I used to be more of a purist in wanting all unit test methods to be completely and utterly self-contained, but after a while, I couldn’t deny the readability of this approach.

Using “Target” cuts out at least one line of pointless (and repetitive) instantiation inside each test and it also unifies the naming of the thing you’re testing. In other words, throughout the entire test class, interaction with the class under test is extremely obvious. Another ancillary benefit to this approach is that if you need to change the instantiation logic by, say, adding a constructor parameter, you do it one place only and you don’t have to go limping all over your test class, doing it everywhere.

I highly recommend that you consider adopting this convention for your tests.

Use the test initialize (and tear-down) for intuitive naming and semantics.

Along these same lines, I recommend giving some consideration to test initialization and tear-down, if necessary. I name these methods BeforeEachTest and AfterEachTest for the sake of clarity. In the previous section, I talked about this for instantiating Target, but this is also a good place to instantiate other common dependencies such as mock objects or to build friendly instances that you pass to constructors and methods.

This approach also creates a unified and symmetric feel in your test classes and that kind of predictability tends to be invaluable. People often debug production code, but are far more likely to initially contemplate unit tests by inspecting them, so predictability here is as important as it is anywhere.
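
In MSTest terms, that naming convention looks something like the sketch below (CustomerService and FakeCustomerRepository are hypothetical names for a class under test and a test double; the pattern, not the particulars, is the point):

[TestClass]
public class CustomerServiceTest
{
	private static CustomerService Target { get; set; }

	[TestInitialize()]
	public void BeforeEachTest()
	{
		// Build common dependencies and the class under test in one place, before every test.
		Target = new CustomerService(new FakeCustomerRepository());
	}

	[TestCleanup()]
	public void AfterEachTest()
	{
		// Undo anything that might otherwise leak from one test into the next.
		Target = null;
	}
}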

Keep Your Mind on AAA

AAA. “Arrange, Act, Assert.” Think of your unit tests in these terms at all times and you’ll do well (I once referred to this as “setup, poke, verify.“). The basic anatomy of a unit test is that you setup the world that you’re testing to exist in some situation that matters to you, then you do something, then you verify that what you did produced the result you expect. A real world equivalent might be that you put a metal rod in the freezer for 2 hours (arrange), take it out and stick your tongue on it (act), and verify that you can’t remove your tongue from it (assert).

If you don’t think of your tests this way, they tend to meander a lot. You’ll do things like run through lists of arguments checking for exceptions or calling a rambling series of methods to make sure “nothing bad happens.” This is the unit test equivalent of babbling, and you don’t want to do that. Each test should have some specific, detailed arrangement, some easily describable action, and some clear assertion.
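
Applied to the BowlingScoreCalculator from the earlier listing, the anatomy reads like this (the test name and frame values are mine, but the classes are the ones shown above):

[TestMethod]
public void Score_Equals_Frame_Total_After_One_Ordinary_Frame()
{
	// Arrange: set up the world -- a calculator and a frame worth 7 pins
	var calculator = new BowlingScoreCalculator();
	var frame = new Frame(4, 3);

	// Act: do the one thing this test is about
	calculator.BowlFrame(frame);

	// Assert: verify the single expected outcome
	Assert.AreEqual(7, calculator.Score);
}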

Keep your instantiation logic in one place

In a previous section, I suggested using the test runner’s initialize method to do this, but the important thing is that you do it, somehow. I have lived the pain of having to do find and replace or other, more manual corrections when modifying constructors for classes that I was instantiating in every unit test for dozens or even hundreds of tests.

Your unit test code is no different than production code in that duplication is your enemy. If you’re instantiating your class under test again and again and again, you’re going to suffer when you need to change the instantiation logic or else you’re going to avoid changing it to avoid suffering, and altering your design to make copy and paste programming less painful is like treating an infected cut by drinking alcohol until you black out and forget about your infected cut.
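
If the initialize method doesn’t suit you for some reason, even a tiny factory method inside the test class accomplishes the same goal. A sketch, using the bowling example from earlier (the BuildTarget name is just a convention I’m inventing here):

// The single place that knows how to construct the class under test.
// If the constructor grows a parameter, only this method changes.
private static BowlingScoreCalculator BuildTarget()
{
	return new BowlingScoreCalculator();
}

[TestMethod]
public void Initializes_Score_To_Zero()
{
	var target = BuildTarget();

	Assert.AreEqual(0, target.Score);
}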

Don’t be exhaustive (you don’t need to test all inputs — just interesting ones)

One thing I’ve seen occasionally with people new to unit testing and especially getting their heads around TDD is a desire to start exhaustively unit testing for all inputs. For instance, let’s say you’re implementing a prime number finder as I described in my Pluralsight course on NCrunch and continuous testing. At what point have you written enough tests for prime finder? Is it when you’ve tested all inputs, 1 through 10? 1 through 100? All 32 bit integers?

I strongly advise against the desire to do any of these things or even to write some test that iterates through a series of values in a loop testing for them. Instead, write as many tests as you need to tease out the algorithm if you’re following TDD and, in the end, have as many tests as you need to cover interesting cases that you can think of. For me, off the top (TDD notwithstanding), I might pick a handful of primes to test and a handful of composite numbers. So, maybe one small prime and composite and one large one of each that I looked up on the internet somewhere. There are other interesting values as well, such as negative numbers, one, and zero. I’d make sure it behaved correctly for each of these cases and then move on.
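
For a hypothetical PrimeFinder class with an IsPrime method (placeholder names, not code from the course), that handful of interesting cases might look like this:

[TestMethod]
public void IsPrime_Returns_True_For_Small_Prime()
{
	Assert.IsTrue(PrimeFinder.IsPrime(7));
}

[TestMethod]
public void IsPrime_Returns_False_For_Small_Composite()
{
	Assert.IsFalse(PrimeFinder.IsPrime(9));
}

[TestMethod]
public void IsPrime_Returns_True_For_Large_Prime()
{
	// 7919 is the thousandth prime -- a "looked it up on the internet" value.
	Assert.IsTrue(PrimeFinder.IsPrime(7919));
}

[TestMethod]
public void IsPrime_Returns_False_For_One()
{
	Assert.IsFalse(PrimeFinder.IsPrime(1));
}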

It might take some practice to fight the feeling that this is insufficient coverage, but you have to think less in terms of the set of all possible inputs and more in terms of the set of paths through your code. Test out corner cases, oddball conditions, potential “off by one” situations, and maybe one or two standard sorts of inputs. And remember, if later some kind of bug or deficiency is discovered, you can always add more test cases to your test suite. Your test suite is an asset, but it’s also code that must be maintained. Don’t overdo it — test as much as necessary to make your intentions clear and guard against regressions, but not more than is needed.

Use a continuous testing tool like NCrunch

If you want to see just how powerful a continuous testing tool is, check out that Pluralsight video I did. Continuous testing is a game changer. If you aren’t familiar with continuous testing, you can read about it at the NCrunch website. The gist of it is that you get live, real-time feedback as to whether your unit tests are passing as you type.

Let that sink in for a minute: no running a unit test runner, no executing the tests in the IDE, and not even any building of the code. As you type, from one character to the next, the unit tests execute constantly and give you instantaneous feedback as to whether you’re breaking things or not. So, if you wander into your production code and delete a line, you should expect that you’ll suddenly see red on your screen because you’re breaking things (assuming you don’t have useless lines of code).

I cannot overstate how much this will improve your efficiency. You will never go back once you get used to this.

Unit Tests Instead of the Console

Use unit tests instead of the console or whatever else you might use to do experimentation (get comfortable with the format of the tests). Most developers have some quick way of doing experiments — scratchpads, if you will. If you make yours a unit test project, you’ll get used to having unit tests as your primary feedback mechanism.

In the simplest sense, this is practice with the unit test paradigm, and that never hurts. In a more philosophical sense, you’re starting to think of your code as a series of entry points that you can use for inspection and determining how things interact. And that’s the real, longer term value — an understanding that good design involves seams in the code and unit tests let you access those seams.
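
For example, instead of spinning up a console project to check how DateTime.AddMonths treats month-end dates, you can ask the question as a test and keep the answer around (the test name is mine; the behavior asserted is standard .NET behavior):

[TestMethod]
public void Scratchpad_AddMonths_Clamps_To_Last_Day_Of_Shorter_Month()
{
	var endOfJanuary = new DateTime(2013, 1, 31);

	var oneMonthLater = endOfJanuary.AddMonths(1);

	// 2013 is not a leap year, so January 31st plus one month lands on February 28th.
	Assert.AreEqual(new DateTime(2013, 2, 28), oneMonthLater);
}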

Get familiar with all of the keyboard shortcuts

Again, this is going to vary based on your environment, but make sure to learn whatever keyboard shortcuts and things your IDE offers to speed up test management and execution. The faster you are with the tests, the more frequently you’ll execute them, the more you’ll rely on them, and the more you’ll practice with them.

Your unit test suite should be a handy, well-worn tool and a comfortable safety blanket. It should feel right and be convenient and accessible. So anything you can do to wear it in, so to speak, will expedite this feeling. Take the time to learn these shortcuts and practice them — you won’t regret it. Even if you have a continuous testing tool, you can benefit from learning the most efficient way to use it. Improvement is always possible.

General Advice

Cliche as it may sound and be, the best tip I can give you overall is to practice, practice, practice. Testing will be annoying and awkward at first, but it will become increasingly second nature as you practice. I know that it can be hard to get into or easy to leave by the wayside when the chips are down and you’re staring down a deadline, but the practice will mitigate both considerations. You’ll grow less frustrated and watch the barriers to entry get lower and, as you get good, you won’t feel any inclination to stop unit testing when the stakes are high. In fact, with enough practice, that’s when you’ll feel it’s most important. You will get there with enough effort — I promise.

Intro to Unit Testing 8: Test Suite Management and Build Integration

It’s been over a month now since my last post in this series, and for that I sort of apologize. I think I’ve been channelling all of my instructive energy into my now-finished Pluralsight course, leaving the blog largely for opinions, screeds, and a random hiring announcement. So, let’s get back on track and wrap this thing up. I have this post and another one slated and then we can call it a day.

So far, I’ve talked quite a lot about how and when (and when not) to write unit tests. I’ve offered up some techniques for helping you isolate the classes that you want to test, including the use of test doubles. And finally, I offered some advice on how to get people to leave you alone and let you write tests. So now I’d like to turn and offer some advice beyond just writing the things. You need to live with them, manage them and leverage them over the course of time.

Managing the Suite

You’ve built them. So, now what? The question will sneak up on you at some point after you get started. For the first few or even few dozen classes you test, you’ll alternate between some exasperation at spending extra time doing something new and satisfaction at, well, doing something new. But then, at some point, you’ll be sitting around and notice that your test suite has like 400 tests and think, “wow, that’s a lot of code… do I really want all this?”

That feeling will hit you even harder when you go to change something under a tight deadline and your real quick change makes a test go red. You’re pretty sure the test is broken because it was testing the old way of doing things, so you really just want to comment out the test and you wonder why it’s such a pain to change the code. Why do you have to waste so much time to change one line of code?

The answer to these questions lies in practice but also effective test suite management. If you let the unit test suite become a boat anchor, it will drag you down. Your frustration will be real and reasonable, rather than just a temporary product of you being in a hurry and unfamiliar with working in a code base under test. You need to take care to prevent this from happening, and I’m going to tell you how in this section.

Name Your Tests Clearly and Be Wordy

When you’re writing a unit test, you’re looking at code. But when you’re running your test suite, you aren’t most of the time, and when you’re trying to understand why a run or a build failed, you’re never looking at code. When the test suite is failing, you don’t want to waste time figuring out why. And having to open the IDE, navigate to the test, read the code and figure out the problem is a waste of time.

Don’t give your test methods names like “Test24” or “CustomerTest” or something. Instead, give them names like “Customer_IsValid_Returns_False_When_Customer_SocialSecurityNumber_Is_Empty”. That method name may seem ridiculous, especially if you’re used to giving methods short names, but trust me, you’ll be thankful for it. When your build is failing, which of these method names would you rather see an X next to? Would you rather be saying “looks like test 24 is failing,” or would you rather be saying, “oh, I wonder why someone made it so that an empty SSN is now considered valid?” If you say the first one, you’re lying.

This may seem unimportant in the scheme of things, but it’s the difference between associating frustration and confusion with your test suite and viewing it as a warning system for potentially undesirable changes. The test suite needs to be communicating clearly to you what’s wrong. Descriptive test names help do that and they help you identify whether it’s your code or the test itself that needs to be changed in the face of changing requirements.

Make Your Test Suite Fast

Ruthlessly delete and cull out slow tests. I can’t say it more plainly than that. A good test suite runs in seconds, max. If yours starts to take minutes, or God forbid, hours, then it’s rotting and becoming useless to you. Think of it this way — if it takes several minutes to run the test suite, how often are you going to do it? Every time you make a change, or just when you check in? If it takes hours, will you ever run it voluntarily?

If your test suite takes a long time to run, nobody will run it. Short feedback loops are of paramount importance to developers, and we optimize for efficiency. If the unit test suite is inefficient, we’ll find other ways to get feedback. As such, it is incredibly important to ensure that your test suite always runs quickly. Treat it as if the rest of your team were waiting for any legitimate excuse not to use the test suite, and don’t let inefficiency be that excuse.

Test Code is First Class Code

A common mistake that I see among those relatively new to testing is test code that’s something of a mess. The code will be brittle, heavily duplicated, weird, and hard to read. In short, your tests and test classes will contain code that you wouldn’t be caught dead putting into production.

Don’t do that. Treat your test code as if it were any other code. Eliminate duplication. Factor common functionality out into methods. Be descriptive with naming and with the flow of the method. Keep that code clean. I get that there’s a desire when it comes to testing to make as much of a mess as possible in the “bug bash” sense of throwing chaos at the situation and proving that your code can handle it, but the chaos needs to be controlled, and you can control it by keeping your test code clean and maintainable. If the tests are clean and easy to maintain, people won’t mind going in periodically to make an adjustment. If they’re unruly, people will get annoyed and comment them out or stop running them.

Have a Single Assertion per Test

This is a subtle one, but it also goes toward maintainability. If you start writing tests that have 20 asserts in them, you may feel good that you’re exercising a whole section of the code, but really you’re making things hard for yourself later. If all 20 asserts pass (or at least the first 19), then all will be executed. But if the first one fails, none of the rest get executed. This means that in test methods with lots of asserts, it’s not always clear where they’re failing, which means it’s not always clear what’s going wrong.

In order for your test suite to be an asset, it has to be a clear indicator of what’s going wrong. Which would you find more useful in your car: a series of many different lights with helpful diagrams that lit up to indicate a problem, or one unlabeled red light that came on whenever anything at all was wrong? If you had that latter light and it could mean anything from your gas being low to you being out of wiper fluid to imminent destruction of your transmission, I bet you’d just start ignoring it after a while.
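
As a quick sketch of the difference (the Customer class and its properties are hypothetical), compare a grab-bag test to the same checks split out:

// Harder to diagnose: if the first assert fails, you learn nothing about the others.
[TestMethod]
public void Constructor_Initializes_Customer()
{
	var customer = new Customer();

	Assert.IsNotNull(customer.Address);
	Assert.AreEqual(string.Empty, customer.FirstName);
	Assert.AreEqual(string.Empty, customer.LastName);
}

// Clearer: each test is one labeled light on the dashboard.
[TestMethod]
public void Constructor_Initializes_Address_To_Non_Null()
{
	var customer = new Customer();

	Assert.IsNotNull(customer.Address);
}

[TestMethod]
public void Constructor_Initializes_FirstName_To_Empty()
{
	var customer = new Customer();

	Assert.AreEqual(string.Empty, customer.FirstName);
}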

Don’t Share State Between Your Tests

There is no more surefire way to drive yourself insane at some future date than by storing some kind of application state among unit tests being executed. What I mean is if you have some test A that sets a global counter variable to 1, and then you have another test B that depends on the global counter being set to 1 in order for it to succeed, you are in for a world of hurt.

The problem is that there is no guarantee that the unit test runner will execute the tests in any particular order. What’s likely to happen is that your tests get executed in a particular order whenever you run them on your machine, so everything goes fine. But when the build machine runs them they fail. Weird. So you check them on your friend Bob’s machine, and they pass there. But on Alice’s machine, they fail. If you didn’t already know why this was happening because I just told you, can you imagine how much of your hair you’d pull out? You’d probably be checking the IDE version on those machines, compiler information, OS settings, and God only knows what else. It’d be a wild goose chase.

And imagine if it worked on everyone’s machine initially and then six months later started failing occasionally on the build machine. Machine isn’t the only failing dimension — there’s also time. So please, whatever you do, do not have your unit tests depend on the execution of a previous test. This practice, more than any other, is likely to lead to a rage-quitting of unit testing as a practice where you simply take all of them out of the build.
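
Boiled down to a hypothetical example, the anti-pattern looks like this; whether these two tests pass depends entirely on the order the runner happens to pick:

private static int _counter; // shared, mutable state -- the root of the problem

[TestMethod]
public void Test_A_Increments_Counter_To_1()
{
	_counter++;

	Assert.AreEqual(1, _counter);
}

[TestMethod]
public void Test_B_Assumes_Test_A_Already_Ran()
{
	// Passes only if Test_A happened to execute first -- execution order is not guaranteed.
	Assert.AreEqual(1, _counter);
}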

Encourage Others and Keep Them Invested

This sounds like a strange one to round out the section, but it’s important. If you’re the only one fighting the good fight with unit tests, it becomes daunting and exasperating. Everyone else’s reaction to failing tests is annoyance and they’re waiting for excuses just to stop altogether. You wind up feeling that you’re in an adversarial relationship with the team (I speak from experience here). But if you get others to buy in, you’re not shouldering the burden alone and you have help keeping the suite healthy and helpful.

Build Integration

When you first start out unit testing, the tests will be sort of disorganized and haphazard. You’ll write a few to get the hang of it and then maybe discard them. After a bit of that, you’ll start checking them into your solution (unless you’re an incorrigible weirdo or a liar). You do that, and the suite grows and, ideally, everyone is running it locally to keep things clean and be notified of potential breaking changes.

But you have to take it beyond that at some point if you want to realize the full value of the unit tests. They can’t just be a thing everyone remembers to do locally on pain of nagging emails or because someone will buy the team donuts or some other peer-pressure-oriented demerit system. Failing unit tests have to have real (read: automated) consequences. And the best way to do this is to make it so that failing unit tests mean a failing build.

If you’re in a shop that’s not as formal, this may be difficult at first. One handicap may be that you’re reading this and saying “what do you mean by ‘the build?'” If what you do is write code and take some kind of executable out of your project’s output directory on your machine and push it to a server or to your users, you’ve got some work to do before you think about integrating unit tests. You need a build. A build is an automated process by which your source code is turned into a production-ready, deployable package. And it’s automated in the sense that it doesn’t involve you hitting Ctrl-Shift-B or Ctrl-F6 or whatever you do manually in your IDE to build. The Build, with a capital B, is a process that checks your code out of source control, builds it, runs checks and whatever else is necessary, perhaps increments the versioning of the executables, etc., and then spits out the final product that will be pushed to a server or burned onto a DVD or whatever. If you want to read more about build tools, you can google around about TeamCity, CruiseControl, TFS, FinalBuilder, Jenkins etc. And you don’t have to use a product like that — you can create your own using shell scripts or code if you choose.

Because of all the different options when it comes to programming languages, unit test technologies and build tools, I’m not going to offer a tutorial on how to integrate unit tests into your build. To be comprehensive, I’d need to give dozens of such tutorials. But what I will say is that your integration is going to take the same basic format no matter what tools you’re using. The build is a series of steps that passes if everything goes smoothly and the deliverables are ultimately generated. If a step in the build fails, then the build itself fails. What you need to do is add a step that involves running the unit tests. With this in place, you’re creating a situation where any failing unit test means that the entire build fails.

Conceptually, this is pretty straightforward. Unit test runners can be run in command line fashion and they’ll generate a return value of some kind. So the build tool needs to examine the test runner’s output for an error code. If it finds one, it puts the brakes on the whole operation.
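
As a rough sketch of the concept (not a recipe for any particular build server), a build step only needs to launch the runner and propagate its exit code. The runner executable and arguments below are assumptions you would swap for your own toolchain:

using System.Diagnostics;

public static class RunUnitTestsStep
{
	public static int Main()
	{
		// Launch the command-line test runner against the test assembly.
		var runner = Process.Start("vstest.console.exe", "MyProduct.Tests.dll");
		runner.WaitForExit();

		// A non-zero exit code means failing tests, so returning it fails the whole build.
		return runner.ExitCode;
	}
}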

It may seem extreme at first to torpedo the whole build because of a failing unit test, but when you think about it, what else should possibly happen? Why would you want a process that allowed you to ship code knowing that it was defective in a way that it didn’t use to be? That’s amateur hour. And what’s more, if your team starts understanding that failed unit tests mean a failed build, they’ll be sure to run the tests before check-in so that they don’t fail. It will become a natural part of your process, and the quality of your software will be dramatically improved for it.

Intro to Unit Testing 7: Overcoming Inertia and Objections

In this series so far, I’ve introduced the concept of unit testing, talked about how to avoid early failures and frustrations, and then dived deeper into the subject. But now that you’re familiar with the topic and reasonably well versed in it (particularly if you’ve been practicing over the course of this series), I’d like to use that familiarity to discuss why unit testing makes sense in terms that you now better understand. And beyond that, I’d like to discuss how you can use this sense to overcome inertia and even objections that others may have.

The Case for Unit Tests

Alright, so this is the part where I offer you a laundry list, right? This is where I say that unit tests improve code quality, document your code, exercise your API, promote (or at least correlate with) good design, help you break problems into manageable chunks, expose problems earlier when finding them is cheaper, and probably some other things that I’m forgetting right now. And I believe that all of these things are true. But after a number of years of faithfully writing unit tests and practicing test-driven development (TDD), I think that I can offer those as appetizers or footnotes in the face of the real benefit: they allow you to refactor your code without fear or hesitation.

First to Contend With: Inertia

Inertia could probably be categorized as an objection, but I’m going to treat it separately since it manifests in a different way. Objections to doing something are essentially counterarguments to it. They may be lame counterarguments or excellent counterarguments, but either way, they take an active position. Inertia doesn’t. It’s either ambivalence or passive-aggressive resistance. To illustrate the difference, consider the following exchanges:

Alice: We should start trying to get our code under test.
Bob: Unit testing is stupid and a complete waste of time.

versus

Alice: We should start trying to get our code under test.
Bob: Yeah, that’d be nice. We should do that at some point.

The former is a strident (and obtuse) counterargument while the latter is an example of inactivity by inertia. In the second exchange, Bob either thinks unit testing is a good idea — but just not now — or he’s blowing sunshine at Alice while subtly saying, “not now,” so that she’ll leave him alone and stop stumping for change (i.e. the passive-aggressive approach).

In either case, the best way to overcome inertia is to counteract it with activity of your own. Inertia is the byproduct of the developer (and human in general) tendency to get stuck in a rut of doing the comfortable and familiar, so overcoming it within your group is usually just a matter of creating a new rut for them. This isn’t necessarily an easy thing to do. You’ll have to write the tests, make sure they stay up to date, demonstrate the benefits to anyone who will listen, and probably spend some time teaching others how to do it. But if you persevere and your only obstacle is inertia, sooner or later test writing will become the new normal and you’ll get there.

Red Herrings and Stupid Objections

Around a year ago, I blogged about a guy who made a silly claim that he wrote a lot of unit tests but didn’t check them in. The reasoning behind this, as detailed in the post, was completely fatuous. But that’s to be expected since the purpose of this claim wasn’t to explain a calculated approach but rather to cover a lack of knowledge — the knowledge of how to write unit tests.

This sort of posturing is the province of threatened Expert Beginners. It’s the kind of thing that happens when the guy in charge understands that unit testing is widely considered to be table stakes for software development professionalism but has no idea how it works. As such, he believes that he has to come up with a rationale for why he’s never bothered to learn how to do it, but he’s handicapped in inventing an explanation that makes sense by virtue of the fact that he has no idea what he’s talking about. This results in statements like the following:

  • Unit tests prevent you from adapting to new requirements.
  • Testing takes too much time.
  • It wouldn’t work with our style of writing code.
  • You’re not going to catch every bug, so why bother?
  • Writing all of your tests before writing any code is dumb.
  • Management/customers wouldn’t like it and wouldn’t want to pay for it.

I obviously can’t cover all such possible statements, but use the smell test. If it sounds incredible or stupid, it probably is, and you’re likely dealing with someone who is interested in preserving his alpha status more than creating good work product. To be frank, if you’re in an environment like that, you’re probably best off practicing your craft on the sly. You can write tests and keep quiet about it or even keep them in your own personal source control (I have done both at times in my career when in this situation) to prevent people from impeding your own career development. But the best longer term strategy is to keep your eyes and ears open for other projects to work on where you have more latitude to set policies. Do an open source project in your spare time, grab an opportunity to develop a one-off tool for your group, or maybe even consider looking for a job at an organization a little more up to speed with current software development practices. You can stay and fight the good fight, but I don’t recommend it in the long run. It’ll wear you down, and you’re unlikely to win many arguments with people that don’t let a lack of knowledge stop them from claiming expertise on a subject.

I Don’t Know How to Unit Test

With inertia and silliness set aside, let’s move on to legitimate objections to the practice. At first blush, you may be inclined to scoff at the objection, “I don’t understand it,” particularly in an industry often dominated by people unwilling to concede that their knowledge is lacking in the slightest bit in any way whatsoever. But don’t scoff — this is a perfectly reasonable objection. It’s hard and often unwise simply to start doing something with professional stakes when you don’t know what you’re doing.

If the people around you admit to not knowing how to do it, this earnest assessment often indicates at least some willingness to learn. This is great news and something you can work with. Start teaching yourself how to do it so that you can help others. Watch Pluralsight videos and show them to your coworkers as well. If your group is amenable to it, you can even set aside some time to practice as a group or to bring in consultants to get you off to a good start. This is an objection that can easily be turned into an asset.

It Doesn’t Work With the Way I Code

I originally specced out this post in the series because of a comment in the very first post, and this section is the one that addresses that comment. Arguments of this form are ones I’ve heard quite frequently over the years, and the particulars of the coding style in question vary, but the common thread is the idea that unit testing creates friction with an established routine. This isn’t the same as “I don’t know how to unit test” because the people who say this generally do know how to write tests — they must or else they wouldn’t know anything about the subject and would make up Expert-Beginner-style stupid excuses. It also isn’t the same as the inertia objection because they’re saying, “I was willing to try, but I find that this impedes me,” rather than, “meh, I dunno, I like my routine.”

My short answer to someone who has this objection is, to put it bluntly, “change the way you code.” Whatever the specifics of your approach, when you’re done, you don’t wind up with a fast-executing safety net of tests that you trust — tests that document your intentions, keep your code flexible, help prevent regressions, and force your design to be easy to use and decoupled. People who code differently than you do, in that they unit test, do wind up with those things. So figure out a way to be one of those people.

On a deeper level, though, I understand this objection because it hits closer to home. I was never the type to bluster like an Expert Beginner, nor am I prone in the slightest to inertia. (I am pretty much the opposite, frequently biting off more than I can chew.) The other objections never really applied to me, but this one did both prior to starting to write tests and prior to adopting TDD as my exclusive approach to developing. You can read that latter perspective from my very early days of blogging. Years ago, I chafed at the prospect of unit testing because spending the extra time took me out of the ‘flow’ of my coding, and I balked at TDD because I thought “why would I start writing unit tests for this code when it might be refactored completely later?” In other words, neither one of these worked with my approach.

But in both cases, I relented eventually and changed the way I coded. I didn’t just one day say, “well, I guess I’ll just start writing code differently from now on.” What happened instead was that I realized that a lot of really bright people and prominent names in the industry had coding and design styles that were compatible with writing tests so it was at least worth trying things their way. It wasn’t a matter of doing something because the cool kids were doing it or resolving to change my ways. Rather, I thought to myself, “I’ll see what all the fuss is about. Then, I’ll either like it or go back to my way of doing things armed with much better arguments as to why my way is better.” So I poured myself into a different way of doing things, forced myself to keep at it even though it was slow and awkward, and, wouldn’t you know it, I eventually came to prefer it.

Convincing people in your group to follow that lead is not going to be easy, but it is doable. The best way to do it is to earn their respect and show them results. If your code is cleaner and freer of bugs and your past projects are easy to adapt and modify, people are going to notice and ask you what your secret is, and you’ll be in a position to show them. It may seem improbable to you now, but you can go from being a person quietly teaching yourself to unit test on the side to one of the key leaders in a software department relatively quickly. You just have to commit yourself to continuous improvement and moving toward proven, effective techniques. Unit testing is just one such technique, but it is a powerful and important one.