How You’re Probably Misunderstanding TDD
Editorial note: I originally wrote this post for the TechTown blog. You can check out the original here, at their site. While you’re there, take a look around at their training courses.
Let’s get something out of the way right up front. You might have extensive experience with test driven development (TDD). You might even practice it routinely and wear the red-green-refactor cadence like a comfortable work glove. For all I know, you could be a bona fide TDD expert.
If any of that describes you, then you probably don’t actually misunderstand TDD. Not surprisingly, people who become adept with it and live and breathe it tend to get it. But if that introductory paragraph doesn’t describe you, then you probably have some misconceptions.
I earn my living doing a combination of training and consulting. This affords me the opportunity to visit a lot of shops and talk to a lot of people. And during the course of these visits and talks, I’ve noticed an interesting phenomenon. Ask people why they choose not to use TDD, and you rarely hear a frank, “I haven’t learned how.”
Instead, you tend to hear dismissals of the practice. And these dismissals generally arise not from practiced familiarity, but from misunderstanding TDD. While I can’t discount the possibility that such a person exists, I can say that I’ve never personally witnessed someone demonstrate an expert understanding of the practice while also dismissing its value. Rather, they base the dismissal on misconception.
So if you’ve decided up-front that TDD isn’t for you, first be sure you’re not falling victim to one of these misunderstandings.
People Do TDD To Avoid Architecture and Design
I’ll start with what usually constitutes a borderline willful misunderstanding. In other words, I think people who espouse this idea often present it as a strawman argument more than a genuine misunderstanding. They say that TDD means you can’t reason up-front about software architecture or design. Then they say they need to reason up front about these things, so they don’t do TDD.
To the extent that people genuinely misunderstand this, I suspect any confusion arises from the occasional use of “test driven design” as a synonym for test driven development. It might also come from people who practice TDD offering the YAGNI acronym as guidance.
But make no mistake. In terms of design and code, TDD only requires that you create a failing test before writing production code. How much planning you do before writing that code (and the preceding failing test) remains your business. And for non-trivial projects, you should definitely do some planning.
You Write All of Your Tests Before Writing Any Code
Going in order of ease of misunderstanding, I’ll move on to one less likely to happen willfully. I’m talking about misunderstanding TDD by believing it requires you to write all of the tests for your class or module before writing any code.
Usually, I encounter this after discussing the practice with someone for a bit. They’ll acknowledge the potential benefits and talk about maybe someday making time to learn. And then they’ll talk skeptically about the potential waste of writing a bunch of tests that might not wind up being needed. I’ll press at this point, and realize they think TDD involves writing out every possible test case the way QA might write out a test plan.
At this point, I surprise them by agreeing with them. Anyone doing that would be wasting time! Luckily, TDD doesn’t call for this. You write one test at a time — only the test that expresses what you want your code to do, but that it doesn’t yet do.
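To make that concrete, here is a minimal sketch of "one test at a time" in C# with NUnit. The class and member names are hypothetical; the point is that, at this instant in the cycle, this single failing test is the only test that exists, because it expresses the one thing the production code doesn’t yet do (it won’t even compile until the class exists, and compilation failures count as failures).

```csharp
using NUnit.Framework;

[TestFixture]
public class PriceCalculatorTests
{
    // The one and only test right now: it expresses the next thing I want
    // the (not-yet-written) PriceCalculator to do.
    [Test]
    public void Total_Is_Zero_For_An_Empty_Cart()
    {
        var calculator = new PriceCalculator();

        Assert.That(calculator.Total(new string[0]), Is.EqualTo(0m));
    }

    // No other tests yet. The next one gets written only after this one
    // passes and I see the next shortcoming in the production code.
}
```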
TDD Replaces QA
Now we arrive at forms of misunderstanding TDD that I understand completely. In fact, I even see some relatively novice practitioners of TDD hold these misconceptions at times. Take, for instance, the idea that a development team can practice TDD and thus replace the need for QA personnel.
When I encounter this belief in a team or department, it usually creates needless tension around TDD. Management misguidedly eyes it as a potential cost savings measure, while QA understandably views it as an existential threat. Calm down, everyone. TDD does not, in any way, replace the need for QA work.
Using TDD to develop helps steer a good modular design, and it leaves a safety net of regression tests in its wake. But it (frequently) doesn’t provide the kinds of user-oriented, system-wide tests that a QA group would execute. And, it obviously can’t perform exploratory testing. Everyone’s job is safe.
TDD Provides an Exhaustive Set of Unit Tests
The last section dovetails nicely into this one. Just as misunderstanding TDD can lead teams to believe it replaces QA, it can also lead them to believe it produces a bullet-proof test suite. But it does not.
I frequently encounter this misunderstanding when teams have begun to adopt the practice. They test drive their code for a while, building out a nice test suite. And then, it strikes — some kind of bug in a higher environment. “Why didn’t the TDD tests catch this?! I knew this was a waste of time!”
While I completely understand the frustration, it’s misplaced when directed at TDD. Driving your development with tests means that you write the tests needed to finish your development. TDD does not call for writing tests for every conceivable boundary case or for every conceivable input. It certainly does not involve things like load tests and smoke tests.
This sort of testing is important, and you do need it. But it represents an activity separate from TDD, in which you write tests not intended to drive changes in production code.
TDD Is Primarily a Testing Activity
This last TDD misunderstanding might spark some spirited discussion. I suspect that even some experienced TDD practitioners might disagree with my take here. But I’ll offer it nonetheless. I submit that TDD is not, primarily, a testing activity.
To understand my reasoning, consider the name itself: test-driven development. Notice that they don’t call it something like development-focused testing. The actual name, and the process that flows from it, involves using tests as a driving aid for software development. With test-driven development, you follow a software development methodology that happens to produce a nice unit test suite as a byproduct. You could toss all of the tests after writing them, keep only the production code, and still realize a benefit in terms of design flexibility. Obviously, I don’t recommend this approach since the tests have value.
But whether you take issue with what I say here or not, bear it in mind as you practice TDD or socialize it with others. It helps clear up the other misunderstandings, which expect TDD to address all of the department’s testing needs in one fell swoop.
Avoid Misunderstanding TDD: What It Really Involves
I’ve spent a good bit of time now talking about what TDD isn’t. So, I’ll close by talking about what it actually is. Taken from the link at the start of the post, consider Uncle Bob’s “three laws” of TDD.
- You are not allowed to write any production code unless it is to make a failing unit test pass.
- You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
- You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
Adhering to these rules defines a process abbreviated as “red-green-refactor.” You see a way in which you want the production code to change — a shortcoming. Before you can address the shortcoming, you invoke law (1) and express the shortcoming with a failing test — the simplest test you can write (law 2). This defines the “red.”
With that out of the way, you move on to green. Invoking law (3), you do the simplest thing you can to get your erstwhile failing test to pass while keeping all of your other tests passing. This defines "green."
And, finally, when all tests pass, you can refactor your code as long as all tests stay green, defining the "refactor" part. (Refactoring constitutes a different activity from the "write production code" of law (1), so you refactor against your existing test suite.)
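Here is a minimal red-green-refactor sketch in C# with NUnit, purely for illustration; the BoundedStack class and its members are hypothetical.

```csharp
using NUnit.Framework;

// RED (laws 1 and 2): write just enough of a test to fail. The shortcoming
// here is that nothing reports whether a new stack is empty; until
// BoundedStack exists, the failure is a compilation failure.
[TestFixture]
public class BoundedStackTests
{
    [Test]
    public void NewStack_IsEmpty()
    {
        Assert.That(new BoundedStack().IsEmpty, Is.True);
    }
}

// GREEN (law 3): the simplest production code that makes the failing test
// pass while keeping every other test passing.
public class BoundedStack
{
    public bool IsEmpty => true; // hard-coded for now; later tests will force real behavior
}

// REFACTOR: with everything green, rename, extract, and restructure as you
// please, re-running the suite after each small change to confirm it stays green.
```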
That’s it. As you can see, TDD makes no sweeping proclamations about QA strategy and it offers no opinion on issues like architecture. It just gives you a simple protocol for writing and cleaning your code.
But with that simple protocol comes a great deal of power and effectiveness. So it’s worth working through misunderstandings to get there.
When trying to do TDD by the book with some junior developers several years ago, something we wrestled with was the tests becoming a boat anchor as the architecture evolved. Some context: we were working our way through a code kata, trying to emulate a typical dev cycle in which we don’t peek at all the requirements in the kata, because requirements change. We ended up doing the red/green/refactor cycle, only building up as much code/API as was needed for the requirements we had — no gold-plating allowed. Then, as we read the next requirement, our API needed to change…
I’m putting this into the backlog of reader questions because, if I’m understanding correctly, I could probably spend a whole post on this. To paraphrase, the generalized idea is that you made a series of design decisions for the moment and then found the unit tests to be a source of friction when you needed to evolve the design? I think that what you did represents a good exercise for trying to break people of the BDUF habit. I also think that designing APIs in such a way as to minimize disruption from future change is a hard-won skill…
That’s an interesting observation, Geoff. I can think of three reasons this might happen:
1) The signature of a request or response object changed, resulting in compile errors in your test. This is where refactoring tools are helpful. Sometimes you just have to make a bunch of cookie-cutter changes, but you can also refactor a method signature to add, change, or remove parameters.
2) SRP Violations. The code has gotten too complex and needs to be refactored into smaller pieces, each with a single responsibility.
3) Over-testing — This could be the result of #2 (SRP), or something else. It’s good to try to design your tests so only one test breaks if there is a requirement change. This is not always easy or even practical. You want your tests to be against isolated behavior, so they act as a specification for the code’s behavior. You want to avoid "white box" testing, where you are throwing multiple combinations and permutations of behavior across multiple tests. For example, if you are persisting a data object where any field may be null, it’s best to write two…
I love the concept of TDD, and every time I actually consciously sit down and say, “OK, I’m gonna write the tests first”, it’s gloriously awesome.
Sadly, those times are rare. I’m just horrible at remembering to write them, dammit. Until I’m halfway through whatever fix I’m implementing, and at that point I usually shrug, say to myself, “eh, next time” and keep going. Momentum, and all that.
You are not alone. I think that making a habit of starting any project by writing down the features in the Gherkin format may help.
Hrm. Good idea. I saw a webinar on BDD a few weeks back, and that also mentioned Gherkin. We are a C# shop, and I don’t know if we can translate Gherkin directly, but just the act of writing it down might serve to nudge me in the right direction. Thanks for the tip!
I’ve used Gherkin in a fair few C# shops I’ve worked in. If you want a tool that will translate your Gherkin specs into executable code (like Cucumber does), then have a look at SpecFlow (http://specflow.org/). Of course, the Given, When and Then in Gherkin translates nicely into the Arrange, Act, Assert pattern a lot of people use when writing xUnit-style tests. Jotting down a few tests before you start is a great habit to get into. I find that when I do this I’m just expressing what I’m thinking subconsciously when I start coding, which is ‘What do I…
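As a rough sketch of that mapping (the Cart class and its members are hypothetical, and the assertions are NUnit-style), a Gherkin-flavored scenario translates almost line for line into an Arrange, Act, Assert test:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// A tiny, hypothetical Cart so the example compiles on its own.
public class Cart
{
    private readonly List<decimal> _prices = new List<decimal>();
    private decimal _discount;

    public void Add(string name, decimal price) => _prices.Add(price);
    public void ApplyCoupon(int percentOff) => _discount = percentOff / 100m;

    public decimal Total
    {
        get
        {
            decimal sum = 0m;
            foreach (var price in _prices) sum += price;
            return sum * (1 - _discount);
        }
    }
}

[TestFixture]
public class CheckoutTests
{
    // Scenario: applying a coupon reduces the total
    [Test]
    public void Applying_A_Coupon_Reduces_The_Total()
    {
        // Given (Arrange): a cart holding one item priced at 10.00
        var cart = new Cart();
        cart.Add("book", 10.00m);

        // When (Act): a 10% coupon is applied
        cart.ApplyCoupon(percentOff: 10);

        // Then (Assert): the total drops to 9.00
        Assert.That(cart.Total, Is.EqualTo(9.00m));
    }
}
```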
I write my unit tests more or less the same way I write non-test code. I start with a bunch of // TODO comments describing what I want to do, and then I write code and remove the TODO, leaving the comment.
With unit tests, I can do that, or use Assert.Inconclusive (I use NUnit primarily and love it) and then replace those with actual useful asserts.
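For anyone unfamiliar with that trick, here is a small sketch of the workflow. The test names are hypothetical; Assert.Inconclusive is NUnit’s way of marking a test as neither passing nor failing.

```csharp
using NUnit.Framework;

[TestFixture]
public class ReportGeneratorTests
{
    [Test]
    public void Generates_A_Header_Row()
    {
        // Placeholder jotted down up front; the runner flags it as
        // inconclusive rather than passing or failing.
        Assert.Inconclusive("TODO: drive header generation out of the production code");
    }

    [Test]
    public void Escapes_Commas_In_Cell_Values()
    {
        Assert.Inconclusive("TODO: not yet driven out");
    }
}
```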
My experience on the training side of TDD is that there’s a bit that flips at some point. Until the flipping of said bit, people do TDD when they have time or when they think of it, but when the pressure is on and they have to deliver, they revert to what they’re used to. The bit flips when there’s no "revert" and it never occurs to you to write code without test driving the way it (hopefully) never occurs to you to write code for hours without seeing if it compiles. It’s just a question of forcing your own…
I hear ya. I got to that point with writing non-test code where I was always thinking, "OK, what do I need to mock here? What do I need to abstract here? If I’m dealing with external file operations, do I need an IFileService? If I’m working with threads, do I need an IThreadService?" And so on. Basically, I’m always thinking about what I need to abstract, what I need to inject, and so on. It took me a couple of years to get past the whole "DI/IoC? Oo, ick!" mentality, but now I can’t imagine NOT doing it. EVER…
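For readers who haven’t seen that kind of abstraction in action, here is a minimal sketch of the IFileService idea from the comment above. Everything except the IFileService name is hypothetical.

```csharp
// Hide file I/O behind an interface so the consuming class can be
// test driven with a fake instead of the real file system.
public interface IFileService
{
    string ReadAllText(string path);
}

public class FileService : IFileService
{
    public string ReadAllText(string path) => System.IO.File.ReadAllText(path);
}

public class SettingsLoader
{
    private readonly IFileService _files;

    // The dependency is injected (constructor injection), so a test can
    // pass in a stub that returns canned file contents.
    public SettingsLoader(IFileService files) => _files = files;

    public bool IsFeatureEnabled(string path) =>
        _files.ReadAllText(path).Contains("featureX=on");
}
```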
Wow, it must have been tough to hand a codebase you had responsibility for over to someone else, while continuing to work around it (manage it). I don’t recall ever having to do that myself. I always bounced around too much 🙂
It was weird. It was really, really weird. This is probably the best work I’ve ever done, and while far from perfect, it’s the closest thing to anything I can call "my baby". I actually inherited it from another person a year ago, and I spent the next 6 months Enterprise-ifying it, so I got pretty attached. Speaking of which, I just finished reviewing a pull request for the guy on my team that took it over from me. A lot of the changes are not the way I’d do things, but it gets the job done and so far…
I’ll vouch for BDD and SpecFlow; they’ve greatly improved how I write tests. What gets me is when I’m the only one who does TDD on a team of other coders who don’t and maybe won’t write testable code. When I’m writing new code, I use TDD. When I have to write on top of other code which is non-testable, the time and risk to refactor usually takes precedence. The tests become integration tests, which are much much harder to maintain since a simple change breaks all the tests. I’m a Mockist all the way!!! Generally, it’s easier and more…
Your approach sounds pretty similar to mine on all fronts. And, good turn of phrase there on QA — “why would you want to take it all on yourself?”