
Addicted to Unit Testing

Something interesting occurred to me the other day when I posted sample code for a DXCore plugin that I created. In the code that I uploaded, I added a unit test project with a few unit tests as a matter of course. Apparently, the process of unit testing has become so ingrained in me that I didn’t think anything of it until later. This caused me to reflect a bit on my relationship, as a developer, to unit testing.

I’ve worked in settings where unit tests have been everything from mandated to tolerated to scoffed at and discouraged. And I’ve found that I do some form of unit testing in all of these environments. In environments that require unit testing, I simply include them the way everyone else does, conforming to standards. In environments that discourage unit testing, I implement them and keep them to myself. In that case, the tests aren’t always conventional or pretty, but I always find some way to automate verification. To me, this is in my blood as a programmer. I want to automate everything and I don’t see why verifying my code’s functionality should be any different.

But I realize that it’s now something beyond just a desire to automate, tinker, and take pride in my work. The title of this post is tongue-in-cheek, but also appropriate. When I write code and don’t do some verification of it, I start to feel edgy and off of my game. It’s hard for me to understand how people can function without unit testing their code. How do they know it works? How do they know they’ve handled edge cases and bad input to methods? How do they feel comfortable building new classes that depend on the correct functioning of the ones that came first? And, most importantly, how do they know they’re not running in place when adding functionality — breaking an existing requirement for each new one they satisfy?

I exhibit some of the classic signs of an addict. I become paranoid and discombobulated without the thing upon which I depend. I’ll go to various lengths to test my code, even if it’s not factored into the implementation time (work longer hours, create external structure, find free tools, etc.). I reach for it without really thinking about it, as evidenced by the unit tests in my uploaded code.

But I suppose the metaphor ends there. Because unlike the vices about which recovering addicts might speak this way — drugs, alcohol, gambling, etc. — I believe this ‘addiction’ makes me better as a software engineer. I think most people would agree that tested code is likely to be better code, and strong unit test proponents would probably argue that the habit makes you write better code whether or not you unit test a given class at a given time. For example, when I create a dependency-injected class, the first code I tend to write, automatically and out of habit, is an “if” statement in the constructor that checks the injected references for null and throws an exception if they are. I write this because the first unit test that I write is one to check how the class behaves when injected with null.
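
To illustrate (with hypothetical class and interface names, in MS Test syntax), that habit looks something like this:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public interface IOrderRepository { }

public class OrderProcessor
{
    private readonly IOrderRepository _repository;

    // The guard clause that gets written out of habit, because the
    // first unit test is going to inject null.
    public OrderProcessor(IOrderRepository repository)
    {
        if (repository == null)
            throw new ArgumentNullException("repository");
        _repository = repository;
    }
}

[TestClass]
public class OrderProcessorTest
{
    [TestMethod, ExpectedException(typeof(ArgumentNullException))]
    public void Constructor_Throws_On_Null_Repository()
    {
        new OrderProcessor(null);
    }
}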

And, to me, that’s really the core benefit of unit testing. Sure, it makes refactoring easier, it enumerates your requirements better than a requirements analysis document could, it inspires confidence in the class behavior, and it has all of the other classic properties of unit testing as a process stalwart. But I think that, at the core, it changes the way you think about classes and how you write code. You write code knowing that it should behave in a certain way with certain inputs and that it should provide certain outputs. You think in terms of what properties your classes have and what they should initialize to in different instantiation scenarios. You think of your classes as units, and isn’t that the ultimate goal — neat, decoupled code?

One common theme that I see in code and that I’ve come to think of as highly indicative of a non-unit-testing mentality is a collection of classes that distribute their functionality in ad-hoc fashion. That is, some new requirement comes in and somebody writes a class to fulfill it — call it “Foo.” Then a few more requirement riders trickle in as follow-ups, and since “Foo” is the thing that satisfies the original, it should also satisfy the new ones. It’s perfectly fine to call it “Foo” because it represents a series of user requests and not a conceptual object. Some time later, more requirements come in, and suddenly the developer needs two different Foos to handle two different scenarios. Since Foo is getting kind of large, and large classes are bad, the solution is to create a “FooManager” that knows all about the internals of “Foo” and to spread functionality across both. If FooManager needs internal Foo field “_bar”, “_bar” is made into a property “Bar” (C#) or an accessor “GetBar()” (Java/C++), and the logic proceeds. Foo does some things to its former private member “Bar” and then “FooManager” also does some things to Foo’s Bar, and before you know it, you have a Gordian knot of functional and temporal coupling.
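
In condensed form (all names and members hypothetical, extrapolated from the description above), the knot looks something like this:

using System.Collections.Generic;

public class Foo
{
    // Formerly the private field "_bar", promoted to a public
    // property so that FooManager could get at it.
    public List<string> Bar { get; set; }

    public Foo()
    {
        Bar = new List<string>();
    }

    public void Initialize()
    {
        Bar.Add("seed");
    }
}

public class FooManager
{
    // Only behaves correctly if foo.Initialize() has already run:
    // functional coupling to Foo's internals, temporal coupling to
    // Foo's call order.
    public void Process(Foo foo)
    {
        foo.Bar.RemoveAt(0);
        foo.Bar.Add("processed");
    }
}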

I don’t think this kind of code would ever exist in a world populated only by unit testing addicts. Unit testing addiction forces one to consider upfront a class’s reason for existing and carefully deliberate its public interface. The unit testing addict would not find himself in this precarious situation. His addiction would save him from this “rock bottom” of software development.


Incorporating MS Test unit tests without TFS

My impression (and somebody please correct me if I’m wrong) is that MS Test is really designed to operate in conjunction with Microsoft’s Team Foundation Server (TFS) and MS Build. That is, opting to use MS Test for unit testing when you’re using something else for version control and builds is sort of like purchasing one iProduct when the rest of your computer-related electronics are PC or Linux oriented: you can do what you like with it, provided you do enough tinkering, but in general, the experience is not optimal.

As such, I’m posting here the tinkering that I’ve been able to parlay into an effective build process. I am operating in an environment where the unit test framework, version control, and build technology are already in place, and my task is only to create a policy that makes them work together. So, feedback along the lines of “you should use NUnit” is appreciated, but only because I appreciate anyone taking the time to read the blog: it won’t actually be helpful or necessary in this circumstance. MS Test is neither my choice, nor do I particularly like it, but it gets the job done and it isn’t going anywhere at this time.

So, onto the helpful part. Since I’m not using MS Build and I’m not using TFS, I’m more or less restricted to running the unit tests in two modes: through Visual Studio or from the command line using MSTest.exe. If there is a way to have a non-MS Build tool use Visual Studio’s IDE, I am unaware of it, and, if it did exist, I’d probably be somewhat skeptical of it (but then again, I’m a dyed-in-the-wool command line junkie, so I’m not exactly objective).

As such, I figured that the command line was the best way to go and looked up the command line options for MS Test. Of particular relevance to the process I’m laying out here are the testcontainer, category, and resultsfile switches. I also use the nologo switch, but that seems something of a given, since there’s really no reason for a headless build machine to be advertising for Microsoft.

Testcontainer allows specification of a test project DLL to use. Resultsfile allows specification of a file to which the results will be dumped in XML format (so my advice is to append .xml to the end of the file name). And the most interesting one, category, allows you to filter the tests based on metadata defined in the attribute header of the test itself. In my instance, I’m using three possible categories to describe tests: proven, unit, and integration.
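
Putting those switches together, a single hand-run invocation looks something like this (the DLL name here is hypothetical, and the path to MSTest.exe will vary with your Visual Studio version):

"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /nologo /testcontainer:CustomControlsTest.dll /category:"Unit&Proven" /resultsfile:CustomControlsTestResults.xml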

The default when you create a test in MS Test using, say, the Visual Studio code snippet “testc” (type “testc” and then hit tab) is the following:

[TestMethod]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

Excusing the horrendous practice of testing whether true is true, you’ll notice that the attribute contains nothing beyond TestMethod. This is what we want, because this test has not yet been promoted to be included with the build. The first thing that I’ll do is add an “Owner” tag to it, because I believe that it’s good practice to sign unit tests: that way, everyone can see who owns a failing unit test and who to contact when investigating why it’s broken. This is done as follows:

[TestMethod, Owner("SomeoneOtherThanErik")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

I’ve signed this one as somebody else because I’m not putting my name on it. But, when you’re not kidding around or sandbagging someone for your badly written test, you probably want to include your actual name.

The next step is the important one in which you assign the test a category or multiple categories, as applicable. In my scenario, we can pick from “unit,” “integration,” and “proven.” “Unit” is assigned to tests that exercise only the class under test. “Integration” is assigned to tests that test the interaction between two or more classes. “Proven” means that you’re confident that if the test is broken, it’s because the system under test is broken, and not just that the test is poorly written. So, I might have the following:

[TestMethod, Owner("SomeoneOtherThanErik")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

[TestMethod, Owner("Erik"), TestCategory("Proven"), TestCategory("Unit")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    var myCut = new Foo();
    Assert.IsTrue(myFoo.DoesSomething());
}

[TestMethod, Owner("Erik"), TestCategory("Integration")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Dictionary myDictionary = new Dictionary();
    var myBar = new Bar(myDictionary);
    var myFoo = new Foo(myDictionary);

    myBar.AddElementsToDict();
    myFoo.AddElementsToDict();

    Assert.IsTrue(myDictionary.Count > 1);
}

[TestMethod, Owner("Erik"), Category("Integration")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Foo myFoo = new Foo();
    myFoo.SetGlobalVariables(SomeSingleton.Instance);
    Assert.IsTrue(myFoo.IsGlobalSet);
}

Now, looking at this set of tests, you’ll notice a couple of proven tests (one unit, one integration), an unpromoted test with no categories at all, and a test that is missing the label “Proven.” With the last test, we leave off the label because the class under test has global state and is thus going to be unpredictable and hard to test. Also, with that one, I’ve labeled it integration instead of unit because I consider anything referencing global state to be integration by definition. (As an aside, I would not personally introduce global, static state into a system, nor would I prefer to test classes in which it exists, but as anyone knows, not all of the code that we have to deal with reflects our design choices or is our own creation.)

Now, for the build process itself, I’ve created the following batch script:

@echo off
rem Execute unit tests from the command line
rem By Erik


rem Change this if you want the result stashed in a directory besides ./
set resultsdir=TestResults

rem Change this if you want to test different categories
set "testset=Unit&Proven"

rem Change this if you want to change the output test name extension
rem (default is Results.xml -- i.e. CustomControlsTestResults.xml)
set resultsextensions=Results.xml

rem If results dir does not exist, create it
if exist %resultsdir% goto direxists
mkdir %resultsdir%
:direxists

rem This allows using the 'file' variable in the for loop
setlocal EnableDelayedExpansion

rem This is the actual execution of the test run. Delete old results
rem files, and then execute the MSTest exe
for /f %%f in ('dir /b *Test.dll') do (
    set testfile=%%f
    set resultsfile=!resultsdir!\!testfile:~0,-4!%resultsextensions%
    echo Testing !testfile! -- output to !resultsfile!
    if exist !resultsfile! del !resultsfile!
    "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /nologo /testcontainer:!testfile! /category:"%testset%" /resultsfile:!resultsfile!
)

echo Unit test run complete.

What this does is iterate through the current directory, looking for files that end in Test.dll. As such, it should either be modified or placed in the directory to which all unit test projects are deployed as part of the build. For each test project that it finds, it runs MS Test, applies the category filter from the top, and dumps the results in a file named after the test DLL with a Results.xml suffix (CustomControlsTestResults.xml for CustomControlsTest.dll, for example). In this case, it will run all tests categorized as both “Unit” and “Proven.” However, this can easily be modified by changing the testset variable at the top, per the MSTest.exe specifications for the /category command line parameter (which supports and, or, and not operations).
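
For example, changing the testset variable at the top of the script to the following would run anything categorized as either Unit or Integration. (The quoted form of set keeps the batch interpreter from treating the special character as a command separator. MSTest also documents a “!” (not) operator, but since unpaired exclamation points get eaten once delayed expansion is enabled, I’d stick to “&” and “|” in this particular script.)

rem Anything categorized as either Unit or Integration
set "testset=Unit|Integration"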

So, from here, incorporating the unit tests into the build will depend to some degree on the nature of your build technology, but it will probably be as simple as parsing the command output from the batch script, parsing the results XML files, or simply checking the return code from the MS Test executable. Some tools may even implicitly know how to handle this.
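
As a sketch of the simplest option (this assumes, as I believe is the case, that MSTest.exe returns a nonzero exit code when any test in the run fails), the batch script above can track failures and propagate them as its own exit code:

rem Inside the for loop, immediately after the MSTest.exe line:
if errorlevel 1 set anyfailures=1

rem And at the bottom of the script, after the final echo:
if defined anyfailures exit /b 1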

As I see it, this offers some nice perks. It is possible to allow unit tests to remain in the build even if they have not been perfected yet, and they need not be marked as “Inconclusive” or commented out. In addition, it is possible to have more nuanced build steps where, say, the unit tests are run daily by the build machine, but unproven tests only weekly. And in the event of some refactoring or changes, unit tests that are broken because of requirements changes can be “demoted” from the build until such time as they can be repaired.
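
Demoting a test is then just a matter of deleting its “Proven” category until someone can circle back to repair it. Using the earlier (hypothetical) example:

// Demoted: TestCategory("Proven") removed pending repair of the test.
[TestMethod, Owner("Erik"), TestCategory("Unit")]
public void Foo_DoesSomething_Returns_True()
{
    var myFoo = new Foo();
    Assert.IsTrue(myFoo.DoesSomething());
}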

I’m sure that some inventive souls can take this further and do even cooler things with it. As I refine this process, I may revisit it in subsequent posts as well.