Intro to Unit Testing 5: Invading Legacy Code in the Name of Testability
If, in the movie Braveheart, the Scots had been battling a nasty legacy code base instead of the English under Edward Longshanks, the conversation after the battle at Stirling between Wallace and minor Scottish noble MacClannough might have gone like this:
Wallace: We have prevented new bugs in the code base by adding new unit tests for all new code, but bugs will still happen.
MacClannough: What will you do?
Wallace: I will invade the legacy code, and defeat the bugs on their own ground.
MacClannough (snorts in disbelief): Invade? That’s impossible.
Wallace: Why? Why is that impossible? You’re so concerned with squabbling over the best process for handling endless defects that you’ve missed your God-given right to something better.
Goofy as the introduction to this chapter of the series may be, there’s a point here: while unit testing brand new classes that you add to the code base is a victory and brings benefit, to reap the real game-changing rewards you have to be a bit of a rabble-rouser. You can’t just leave that festering mass of legacy code as it is, or it will generate defects even without you touching it. Others may scoff or even outright oppose your efforts, but you’ve got to get that legacy code under test at some point or it will dominate your project and give you unending headaches.
So far in this series, I’ve covered the basics of unit testing, when to do it, and when it might be too daunting. Most recently, I talked about how to design new code to make it testable. This time, I’m going to talk about how to wrangle your existing mess to start making it testable.
Easy Does It
A quick word of caution before going any further: don’t try to do too much all at once. Your first task after reading the rest of this post should be to select something small in your code base to try these techniques on. If you want to target production code, get the effort approved by an architect or lead first, if that’s required. Another option is to create a throwaway playpen version of your codebase, which earns you a bit more latitude. Either way, I’d advise small, manageable stabs before really bearing down. What specifically you try is up to you, but I think it’s worth proceeding slowly and steadily; I’m all about incremental improvement in the things that I do.
Also, at the end of this post I’ll offer some further reading that I highly recommend. And, in fact, I recommend reading it before or as you get started working your legacy code toward testability. These books will be a great help and will delve much further into the subjects that I’ll cover here.
Test What You Can
Perhaps this goes without saying, but let’s just say it anyway to be thorough. There will be stuff in the legacy code base you can test. You’ll find the odd class with few dependencies or a method dangling off somewhere that, for a refreshing change, doesn’t reference some giant singleton. So your first task there is writing tests for that code.
But there’s a way to do this and a way not to do this. The way to do it is to write what’s known as characterization tests that simply document the behavior of the existing system. The way not to do this is to introduce ‘corrections’ and cleanup as you go. The linked post goes into more detail, but suffice it to say that modifying untested legacy code is like playing Jenga — you never really know ahead of time which brick removal is going to cause an avalanche of problems. That’s why legacy code is so hard to change and so unpleasant to work with. Adding tests is like adding little warnings that say, “dude, not that brick!!!” So while the tower may be faulty and leaning and of shoddy construction, it is standing and you don’t want to go changing things without putting your warning system in place.
So, long story short, don’t modify — just write tests. Even if a method tells you that it adds two integers and what it really does is divide one by the other, just write a passing test for it. Do not ‘fix’ it (that’ll come later when your tests help you understand the system and renaming the method is a more attractive option). Iterate through your code base and do it everywhere you can. If you can instantiate the class to get to the method you want to test and then write asserts about it (bearing in mind the testability problems I’ve covered like GUI, static state, threading, etc), do it. Move on to the next step once you’ve done the easy stuff everywhere. After all, this is easy practice and practice helps.
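For example, a characterization test for that lying addition method might look like the sketch below (MSTest syntax, as later in this post; the LegacyCalculator class is a hypothetical stand-in for whatever class you’re pinning down). Note that the assertion documents the division that actually happens, not the addition the name promises:

[TestMethod]
public void AddTwoNumbers_Returns_First_Argument_Divided_By_Second()
{
    var calculator = new LegacyCalculator();

    // Characterization test: record what the code actually does today.
    // 10 "plus" 5 is 2 in this code base, and that's what we pin down.
    Assert.AreEqual(2, calculator.AddTwoNumbers(10, 5));
}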
Go searching for extractable code
Now that you have a pretty good handle on writing testable code as you add it to the code base and on getting untested-but-testable code under test, it’s time to start chipping away at the rest. One of the easiest ways to do this is to hunt down methods in your code base that you can’t test, but not because of their contents. Here are two examples that come to mind:
public class Untestable1
{
    public Untestable1()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
    }

    public int AddTwoNumbers(int x, int y)
    {
        return x + y;
    }
}
public class Untestable2
{
    public void PerformSomeBusinessLogic(CustomerOrder order)
    {
        Console.WriteLine("Total is " + AddTwoNumbers(order.Subtotal, order.Tax));
    }

    private int AddTwoNumbers(int x, int y)
    {
        return x + y;
    }
}
The first class is untestable because you can’t instantiate it without kicking off global state modification and who knows what else. But the AddTwoNumbers method is eminently testable, if only you could remove that roadblock. In the second example, the AddTwoNumbers method is once again testable in theory, but with a different roadblock: it’s not public.
In both cases, we have a simple solution: move the method somewhere else. Let’s put it into a class called “BasicArithmeticPerformer” as shown below. I do realize that there are other solutions to make these methods testable, and we’ll talk about them later. But first, let me address what I consider a terrible solution to the second testability issue: making the private method public, or rigging up your test runner with gimmicks that allow testing of private methods. That creates an observer effect in your testing — you’re altering the way the code looks just so that you can test it. Don’t compromise your encapsulation design to make things testable. If you find yourself wanting to test what’s going on in private methods, that’s a strong, strong indicator that you’re trying to test the wrong thing or that you have a design flaw.
public class BasicArithmeticPerformer
{
    public int AddTwoNumbers(int x, int y)
    {
        return x + y;
    }
}
Now that’s a testable class. So what do the other classes now look like?
public class Untestable1
{
    public Untestable1()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
    }

    public int AddTwoNumbers(int x, int y)
    {
        return new BasicArithmeticPerformer().AddTwoNumbers(x, y);
    }
}
public class Untestable2
{
    public void PerformSomeBusinessLogic(CustomerOrder order)
    {
        Console.WriteLine("Total is " + AddTwoNumbers(order.Subtotal, order.Tax));
    }

    private int AddTwoNumbers(int x, int y)
    {
        return new BasicArithmeticPerformer().AddTwoNumbers(x, y);
    }
}
Yep, it’s that simple. In fact, it has to be that simple. Modifying this untestable legacy code is like walking a high-wire without a safety net, so you have to change as little as possible. Extracting a method to another class is about as low-risk as refactorings get, since the most likely problem (particularly if you use an automated tool) is code that doesn’t compile. There’s always some risk, but getting legacy code under test is lower risk in the long run than letting it continue to rot, and the risk of this particular approach is minimal.
On the other side of things, is this a significant win? I would say so. Even ignoring the eliminated duplication, you have now gone from zero test coverage to 50% in these classes. Test coverage is not a goal in and of itself, but you can now rest a little easier knowing that you have a change warning system in place for half of your code. If someone comes along later and says, “oh, I’ll just change that plus to a minus so that I can ‘reuse’ this method for my purposes,” you’ll have something in place that will throw up a big red X and say, “hey, you’re breaking things!” And besides, Rome wasn’t built in a day — you’re going to go through your code base building up a test suite one action like this at a time.
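If it helps to picture that warning system, here is the sort of test (a minimal sketch) that would start failing the moment someone changed the plus to a minus:

[TestMethod]
public void AddTwoNumbers_Returns_Sum_Of_Its_Arguments()
{
    var performer = new BasicArithmeticPerformer();

    // The "hey, you're breaking things!" alarm: 3 + 4 must be 7.
    // A plus-to-minus change would return -1 and fail this test.
    Assert.AreEqual(7, performer.AddTwoNumbers(3, 4));
}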
Code that refers to no class fields is easy when it comes to extracting functionality to a safe, testable location. But what if there is instance-level state in the mix? For example…
public class Untestable3
{
    int _someField;

    public Untestable3()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
        _someField = TestabilityKiller.Instance.GetSomeGlobalVariableValue();
    }

    public int AddToGlobal(int x)
    {
        return x + _someField;
    }
}
That’s a little tougher because we can’t just pull _someField into a new, testable class. But what if we made a quick change that got us onto more familiar ground? Such as…
public class Untestable3
{
    int _someField;

    public Untestable3()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
        _someField = TestabilityKiller.Instance.GetSomeGlobalVariableValue();
    }

    public int AddToGlobal(int x)
    {
        return AddTwoNumbers(x, _someField);
    }

    private int AddTwoNumbers(int x, int y)
    {
        return x + y;
    }
}
Aha! This looks familiar, and I think we know how to get a testable method out of this thing now. In general, when you have class fields or local variables, those are going to become arguments to methods and/or constructors of the new, testable class that you’re creating and instantiating. Understand going in that the more local variables and class fields you have to deal with, the more of a testing headache the thing you’re extracting is going to be. As you go, you’ll learn to look for code in legacy classes that refers to comparatively few local variables and, especially, few fields of the current class as a refactoring target, but this is an acquired knack.
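To make that concrete, here is one way Untestable3 could finish the journey, with the extracted method delegating to the BasicArithmeticPerformer class from earlier. The field’s value simply rides along as a method argument:

public class Untestable3
{
    int _someField;

    public Untestable3()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
        _someField = TestabilityKiller.Instance.GetSomeGlobalVariableValue();
    }

    public int AddToGlobal(int x)
    {
        // The addition logic now lives in a testable class; the untestable
        // global state never leaves this one.
        return new BasicArithmeticPerformer().AddTwoNumbers(x, _someField);
    }
}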
The reason this is not especially trivial is that we’re nibbling here at an idea in static analysis of object-oriented programs called “cohesion.” Cohesion, explained informally, is the idea that units of code that you find together belong together. For example, a Car class with an instance field called Engine and three methods, StartEngine(), StopEngine() and RestartEngine(), is highly cohesive. All of its methods operate on its field. A class called Car that has an Engine field and a Dishwasher field and two methods, StartEngine() and EmptyDishwasher(), is not cohesive. When you go sniping for testable code that you can move to other classes, what you’re really looking for is low-cohesion additions to existing classes. Perhaps some class has a method that refers to no instance variables, meaning you could really put it anywhere. Or perhaps you find a class with three methods that refer to a single instance variable that none of the other 40 methods in the class touch, because they all use other fields. Those three methods and the field they use could definitely go in another class that you could make testable.
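In code, the non-cohesive Car might look like the sketch below (with trivial stand-in Engine and Dishwasher types, since only the shape matters here):

public class Engine { public void Start() { } }
public class Dishwasher { public void Empty() { } }

public class Car
{
    private Engine _engine = new Engine();
    private Dishwasher _dishwasher = new Dishwasher();

    // This belongs in a Car...
    public void StartEngine() { _engine.Start(); }

    // ...while this method and _dishwasher form an island that could be
    // pulled out into its own (testable) class without disturbing the rest.
    public void EmptyDishwasher() { _dishwasher.Empty(); }
}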
When refactoring toward testability, non-cohesive code is the low-hanging fruit that you’re looking for. If it seems strange that poorly designed code (and non-cohesive code is a characteristic of poor design) offers ripe refactoring opportunities, we’re just making lemonade out of lemons. The fact that someone slammed unrelated pieces of code together to create a franken-class just means that you’re going to have that much easier of a time pulling them apart where they belong.
Realize that Giant Methods are Begging to be Classes
It’s getting less and less common these days, but do you ever see object-oriented code where you can tell the author meandered over from writing C back in the one-pass compiler days? If you don’t know what I mean, it’s code that has this sort of form:
public void PerformSomeBusinessLogic(CustomerOrder order)
{
    int x, y, z;
    double a, b, c;
    int counter;
    CustomerOrder tempOrder;
    int secondLoopCounter;
    string output;
    string firstTimeInput;

    //Alright, now let's get started because this is going to be looooonnnng method...
    ...
}
C programmers wrote code like this because in old standards of C it was necessary to declare variables right after the opening brace of a scope before you started doing things like assignment and control flow statements. They’ve carried it forward over the years because, well, old habits die hard. Interestingly, they’re actually doing you a favor. Here’s why.
When looking at a method like this, you know you’re in for a doozy. If it has this many local variables, it’s going to be long, convoluted, and painful. In the C# world, it probably has regions in it that divide up the different responsibilities of the method. This is also a problem, but a lemons-to-lemonade opportunity for us. The reason is that these C-style programmers are actually telling you how to turn their giant, unwieldy method into a class. All of those variables at the top? Those are your class fields. All of those regions (or comments, in languages that don’t support regioning)? Method names.
In one of the resources I’ll recommend, “Uncle” Bob Martin said something along the lines of “large methods are where classes go to hide.” What this means is that when you encounter some gigantic method that spans dozens or hundreds of lines, what you really have is something that should be a class. It’s functionality that has grown too big for a method. So what do you do? Well, you create a new class with its local variables as fields, its region names/comments as method titles, and class fields as dependencies, and you delegate the responsibility.
public class Untestable4
{
    public void PerformSomeBusinessLogic(CustomerOrder order)
    {
        var extractedClass = new MaybeTestable();
        extractedClass.Region1Title();
        extractedClass.Region2Title();
        extractedClass.Region3Title();
    }
}
public class MaybeTestable
{
    int x, y, z;
    double a, b, c;
    int counter;
    CustomerOrder tempOrder;
    int secondLoopCounter;
    string output;
    string firstTimeInput;

    public void Region1Title()
    {
        ...
In this example, there are no fields in the untestable class that the method is using, but if there were, one way to handle this is to pass them into the constructor of the extracted class and have them as fields there as well. So, assuming this extraction goes smoothly (and it might not be that easy if the giant method has a lot of temporal coupling, resulting from, say, recycled variables), what is gained here? Well, first of all, you’ve slain a giant method, which will inevitably be good from a design perspective. But what about testability?
In this case, it’s possible that you still won’t have testable methods, but it’s likely that you will. The original gigantic method wasn’t testable. They never are. There’s really way too much going on in them for meaningful testing to occur — too many control flow statements, loops, global variables, file I/O, etc. Giant methods are giant because they do a lot of things, and if you do enough things in code, you’re going to run up against the bounds of testability. But the new methods will be split up and more focused, and there’s a good chance that at least one of them will be testable in a meaningful way. Plus, with the extracted class, you control the new constructor that you’re creating, which you didn’t with the legacy class, so you can ensure that the class can at least be instantiated. At the end of the day, you’re improving the design and introducing a seam that you can get at for testing.
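As a sketch of the constructor hand-off mentioned above (assuming a hypothetical _sharedOrder field that the regions had been using), the delegation might look like this:

public class Untestable4
{
    private CustomerOrder _sharedOrder;

    public void PerformSomeBusinessLogic(CustomerOrder order)
    {
        // The legacy class's field rides along into the extracted class.
        var extractedClass = new MaybeTestable(_sharedOrder);
        extractedClass.Region1Title();
        extractedClass.Region2Title();
        extractedClass.Region3Title();
    }
}

public class MaybeTestable
{
    private CustomerOrder _sharedOrder;

    public MaybeTestable(CustomerOrder sharedOrder)
    {
        _sharedOrder = sharedOrder;
    }

    // ...regions-turned-methods, as before...
}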
Ask for your dependencies — don’t declare them
Another change you can make that may be relatively straightforward is to move dependencies out of the scope of your class — especially icky dependencies. Take a look at the original version of Untestable3 again.
public class Untestable3
{
    int _someField;

    public Untestable3()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
        _someField = TestabilityKiller.Instance.GetSomeGlobalVariableValue();
    }

    public int AddToGlobal(int x)
    {
        return x + _someField;
    }
}
When instantiated, this class goes and rattles some global state cages, doing God-knows-what (icky), and then retrieves something from global state (icky). We want to get a test around the AddToGlobal method, but we can’t instantiate this class. For all we know, to get the value of _someField, the singleton gets the British Prime Minister on the phone and asks him for a random number between 1 and 1000 — and we can’t automate that in a test suite. Now, the earlier option of extracting code is, of course, viable, but we also have the option of punting the offending code out of this class. (This may or may not be practical depending on where and how this class is used, but let’s assume it is.) Say there’s only one client of this code:
public class Untestable3Client
{
    public void SomeMethod()
    {
        var untestable = new Untestable3();
        untestable.AddToGlobal(12);
    }
}
All we really want out of the constructor is a value for “_someField”. All of that stuff with the singleton is just noise. Because of the nature of global variables, we can do the stuff Untestable3’s constructor was doing anywhere. So what about this as an alternative?
public class Untestable3Client
{
    public void SomeMethod()
    {
        TestabilityKiller.Instance.DoSomethingHorribleWithGlobalVariables();
        var someField = TestabilityKiller.Instance.GetSomeGlobalVariableValue();
        var untestable = new Untestable3(someField);
        untestable.AddToGlobal(12);
    }
}

public class Untestable3
{
    int _someField;

    public Untestable3(int someField)
    {
        _someField = someField;
    }

    public int AddToGlobal(int x)
    {
        return x + _someField;
    }
}
This new code does the same thing as the old code, but with one important difference: Untestable3 is now a liar. Its name is a lie because it’s now testable. There’s nothing about global state in there at all. It just takes an integer and stores it, which is no problem to test. You’re an old pro by now at unit testing that’s this easy.
When it comes to testability, the new operator and global state are your enemies. If you have code that makes use of these things, you need to punt. Punt those things out of your code by doing what we did here: execute the side-effecting calls before your constructors and methods run, and take the values that used to come from global state or new as parameters instead. This is another pretty low-impact way of altering a given class to make it testable, particularly when the only problem is that the class instantiates untestable classes or reaches out into global state.
Ruthlessly Eliminate Law of Demeter Violations
If you’re not familiar with the idea, the Law of Demeter, or Principle of Least Knowledge, basically demands that methods refer to as few object instances as possible in order to do their work. You can look at the link for more specifics on what exactly this “law” says and what exactly is and is not a violation, but the most common form you’ll see is strings of dots (or arrows in C++) where you’re walking an object graph: Property.NestedProperty.NestedNestedProperty.You.Get.The.Idea. (It is worth mentioning that the existence of multiple dots is not always a violation of the Law of Demeter — fluent interfaces in general, and Linq in the C# world specifically, are counterexamples.) A violation is when you’re given some object instance and you go picking through its innards to find what you’re looking for.
One of the most immediately memorable ways of thinking about why this is problematic is to consider what happens when you’re at the grocery store buying groceries. When the clerk tells you that the total is $86.28, you swipe your Visa. What you don’t do is wordlessly hand him your wallet. What you definitely don’t do is take off your pants and hand those over so that he can find your wallet. Consider the following code, bearing in mind that example:
public class HardToTest
{
    public string PrepareSsnMessage(CustomerOrder order)
    {
        return "Social Security number is " + order.Customer.PersonalInfo.Ssn;
    }
}
The method in this class just prepends an explanatory string to a social security number. So why on earth do I need something called a customer order? That’s crazy — as crazy as handing the store clerk your pants. And from a testing perspective, this is a real headache. In order to test this method, I have to create an order, then create a customer and hand it to the order, then create a personal info object and hand it to the order’s customer, and then create an SSN and hand it to the order’s customer’s personal info. And that’s if everything goes well. What if one of those classes — say, Customer — invokes a singleton in its constructor? Well, now I can’t test PrepareSsnMessage in HardToTest because the Customer class uses a singleton. That’s absolutely insane.
Let’s try this instead:
public class HardToTest
{
    public string PrepareSsnMessage(string ssn)
    {
        return "Social Security number is " + ssn;
    }
}
Ah, now that’s easy to test. And we can test it even if the Customer class is doing weird, untestable things, because those things aren’t our problem. What about clients, though? They’re used to passing customer orders in, not SSNs. Well, tough — we’re making this class testable. They know about the customer order, and the customer order knows its SSN, so let them incur the Law of Demeter violation and figure out how to clean it up. You can only make your code testable one class at a time. That class and its Law of Demeter violation is tomorrow’s project.
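For contrast, here is the entire amount of setup the testable version needs, in a quick sketch:

[TestMethod]
public void PrepareSsnMessage_Prepends_Explanatory_Text()
{
    var formatter = new HardToTest();

    // No orders, customers, or singletons required: a string goes in,
    // a string comes out.
    Assert.AreEqual("Social Security number is 123-45-6789",
        formatter.PrepareSsnMessage("123-45-6789"));
}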
When it comes to testing, the more stuff your code knows about, the more setup and potential problems you have. If you don’t test your code, it’s easy to write train wrecks like the “before” method in this section without really considering the ramifications of what you’re doing. The unit tests force you to think about it — “man, this method is a huge hassle to test because problems in classes I don’t even care about are preventing me from testing!” Guess what. That’s a design smell. Problems in weird classes you don’t care about aren’t just impacting your tests — they’re also impacting your class under test, in production, when things go wrong and when you’re trying to debug.
Understand the significance of polymorphism for testing
I’ll leave off with a segue into the next chapter in the series, which is going to be about a concept called “test doubles.” I will explain that concept then and address a significant barrier that you’re probably starting to bump into in your testing travels. But that isn’t my purpose here. For now I’ll just say that you should understand the attraction of using polymorphic code for testing.
Consider the following code:
public class Customer
{
    public string FirstName
    {
        get { return TestabilityKiller.Instance.GoGetCustomerFirstNameFromTheDatabase(); }
    }
}

public class CustomerPropertyFormatter
{
    public string PrepareFirstNameMessage(Customer customer)
    {
        return "Customer first name is " + customer.FirstName;
    }
}
Here you have a class, CustomerPropertyFormatter, that should be pretty easy to test. I mean, it just takes a customer and accesses some string property on it for formatting purposes. But when you actually write a test for this, everything goes wrong. You create a customer to give to your method and your test blows up because of singletons and databases and whatnot. You can write a test with a null argument and amend this code to handle null gracefully, but that’s about it.
But, never fear — polymorphism to the rescue. If you make a relatively small modification to the Customer class, you set yourself up nicely. All you have to do is make the FirstName property virtual:
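public class Customer
{
    // Making the property virtual is the only change; it opens a seam that
    // lets a subclass substitute benign behavior for the database call.
    public virtual string FirstName
    {
        get { return TestabilityKiller.Instance.GoGetCustomerFirstNameFromTheDatabase(); }
    }
}

Once you’ve done that, here’s a unit test that you can write: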
public class DummyCustomer : Customer
{
    private string _firstName;

    public override string FirstName { get { return _firstName; } }

    /// <summary>
    /// Initializes a new instance of the DummyCustomer class.
    /// </summary>
    public DummyCustomer(string firstName)
    {
        _firstName = firstName;
    }
}

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Adds_Text_To_FirstName()
{
    string firstName = "Erik";
    var customer = new DummyCustomer(firstName);
    var formatter = new CustomerPropertyFormatter();

    Assert.IsTrue(formatter.PrepareFirstNameMessage(customer).Contains(firstName));
}
Notice that there is a class, DummyCustomer, declared inside of the test class, that inherits from the Customer class. DummyCustomer is an example of a test double. You’ll notice that I’ve created a scenario here where I define a version of FirstName that I can control — a benign version, if you will. I effectively bypass that database-singleton thing and create a version of the class that exists only in the test project, which allows me to substitute a simple, friendly value that I can test against.
As I said, I’ll dive much more into test doubles next time, but for the time being, understand the power of polymorphism for testability. If the legacy code has methods in it that are hard to use, you can create much more testable situations by the use of interface implementation, inheritance, and the virtual keyword. Conversely, you can make testing a nightmare by using keywords like final and sealed (Java and C# respectively). There are valid reasons to use these, but if you want a testable code base, you should favor liberal support of inheritance and interface implementation.
A Note of Caution
In the sections above, I’ve talked about refactorings that you can perform on legacy code bases and mentioned that there is some risk associated with doing so. It is up to you to assess the level of risk of touching your legacy code, but know that any change you make to legacy code without first instrumenting unit tests can be a breaking change, even a small one guided by an automated refactoring tool. There are ways to ‘cheat’, and there are tips and techniques for getting a method under test before you refactor it, such as temporarily making private fields public, or turning local variables into public fields. The Michael Feathers book below talks extensively about these techniques and how to truly minimize the risk.
The techniques that I’m suggesting here would be ones that I’d typically undertake when requirements changes or bugs were forcing me to make a bunch of changes to the legacy code anyway, and the business understood and was willing to undertake the risk of changing it. I tend to refactor opportunistically like that. What you do is really up to your discretion, but I don’t want to be responsible for you doing some rogue refactoring and torpedoing your production code because you thought it was safe. Changing untested legacy code is never safe, and it’s important for you to understand the risks.
More Information
As mentioned earlier, here are some excellent resources for more information on working with and testing legacy code bases:
- Working Effectively with Legacy Code by Michael Feathers
- Clean Code by Robert (“Uncle Bob”) Martin
- Clean Coders video series, by Robert Martin
- The Art of Unit Testing by Roy Osherove (I have not personally read this, but I respect his work that I’m familiar with and have seen it recommended)
And, of course, you can check out my book about unit testing: Starting to Unit Test, Not as Hard as You Think.
The AddTwoNumbers method in the refactored Untestable2 should still be private, right?
P.S. Nice Braveheart picture. We have a similar one in the office subtitled “For agile!”
Good catch, thanks. I’ve fixed it. And I dig the Braveheart agile poster.
Thank you for this series. I’ve spent six-plus years coding in your bowling alley, or very near to it. In the meantime, I’ve been evangelized about unit testing and TDD by people outside those shops. I’m on board, and want to get started, but I lacked something like this series to help me get my feet wet. I own the Feathers book, and I’m convinced that, for someone who has experience, that book is a great reference and resource. But getting from the tools Feathers discusses to working code is a pretty big leap for my level of experience. Thanks…
I think you’re spot on about the Michael Feathers book, for what it’s worth. I own it and read it when already proficient in unit testing. I found it quite valuable and an interesting read, but I don’t imagine that the target audience is people with no testing experience, since I don’t seem to recall that it walks you through the basics or anything like that. I’m glad if you’re finding the series helpful. I suspect that some of the least frequently covered material is “okay, you know how to write an Assert statement, but how do we actually make…
Great post as usual 🙂
I just want to underline the importance of instrumenting/profiling your code before and after it has been refactored. A simple refactoring in a CPU-intense part of the code can have a large and unwanted impact.
Some tips:
1. Write tests
2. Profile it
3. Refactor it
4. Profile it again.
5. Compare
Did the CPU load and/or memory usage increase? If so, check what optimization flags are enabled. You might have missed something, or it might be that the price of “cleaner” code right here isn’t worth the cost.
BR
// Pierre
That’s a good point. I hadn’t thought about concerns beyond correctness in the vein of this series of posts (aimed at unit test novices), but general runtime metrics for performance certainly make a lot of sense.
This post changes my perspective. I hate working with ugly, untestable legacy code. I feel like every day I spend on it is a day I don’t get to spend doing something better. But the world is full of legacy code. Being able to improve it is a valuable skill. I’d rather not work on it if there’s no intent to improve it. But making legacy code testable is valuable and could even be rewarding. This was the biggest eye-opener, maybe even a game-changer, for me: I can move functionality from a private method into a new class and depend…
Thanks for the feedback! I’m glad if you found it helpful. And, for what it’s worth, I actually find it pretty satisfying to fix up and wrangle legacy code. But you’re absolutely right in that having the freedom to change it makes all the difference. I wouldn’t sign on for a project where my instructions were to keep it working and do as little with it as possible.