Incorporating MS Test unit tests without TFS

My impression (and somebody please correct me if I’m wrong) is that MS Test is really designed to operate in conjunction with Microsoft’s Team Foundation Server (TFS) and MS Build. That is, opting to use MS Test for unit testing when you’re using something else for version control and builds is sort of like purchasing one iProduct when the rest of your computer-related electronics are PC or Linux oriented: you can do what you like with it, provided you do enough tinkering, but in general, the experience is not optimal.

As such, I’m posting here the sum of the tinkering that I think I’ve been able to parlay into an effective build process. I am operating in an environment where the unit test framework, version control, and build technology are already in place, and my task is only to create a policy that makes them work together. So, feedback along the lines of “you should use NUnit” is appreciated, but only because I appreciate anyone taking the time to read the blog: it won’t actually be helpful or necessary in this circumstance. MS Test is not my choice, nor do I particularly like it, but it gets the job done and it isn’t going anywhere at this time.

So, on to the helpful part. Since I’m not using MS Build and I’m not using TFS, I’m more or less restricted to running the unit tests in two modes: through Visual Studio or from the command line using MSTest.exe. If there is a way to have a non-MS Build tool run tests through the Visual Studio IDE, I am unaware of it, and, if it did exist, I’d probably be somewhat skeptical of it (but then again, I’m a dyed-in-the-wool command line junkie, so I’m not exactly objective).

As such, I figured that the command line was the best way to go and looked up the command line options for MS Test. Of particular relevance to the process I’m laying out here are the testcontainer, category, and resultsfile switches. I also use the nologo switch, but that seems something of a given, since there’s really no reason for a headless build machine to be advertising for Microsoft.

Testcontainer allows specification of a test project DLL to run against. Resultsfile allows specification of a file to which the results are dumped in XML format (so my advice is to append .xml to the name). And the most interesting one, category, allows you to filter the tests based on metadata defined in the attribute header of the test itself. In my instance, I’m using three possible categories to describe tests: proven, unit, and integration.

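For instance, a single run against a hypothetical test project DLL might look like this (I’m using the default Visual Studio 2010 install path; the DLL and results file names are placeholders to adjust for your own project):

"C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /nologo /testcontainer:MyProject.Test.dll /category:"Unit" /resultsfile:TestResults\MyProject.TestResults.xml
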
The default when you create a test in MS Test using, say, the Visual Studio code snippet “testc” (type “testc” and then hit tab) is the following:

[TestMethod]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

Excusing the horrendous practice of testing whether true is true, you’ll notice that the attribute tags are empty. This is what we want, because this test has not yet been promoted to be included with the build. The first thing that I’ll do is add a tag to it for “Owner,” because I believe that it’s good practice to sign unit tests: it lets everyone see who owns a failing unit test and gives a contact point for investigating why the test is broken. This is done as follows:

[TestMethod, Owner("SomeoneOtherThanErik")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

I’ve signed this one as somebody else because I’m not putting my name on it. But, when you’re not kidding around or sandbagging someone for your badly written test, you probably want to include your actual name.

The next step is the important one, in which you assign the test a category or multiple categories, as applicable. In my scenario, we can pick from “unit,” “integration,” and “proven.” “Unit” is assigned to tests that exercise only the class under test. “Integration” is assigned to tests that exercise the interaction between two or more classes. “Proven” means that you’re confident that if the test is broken, it’s because the class under test is broken and not just that the test is poorly written. So, I might have the following:

[TestMethod, Owner("SomeoneOtherThanErik")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

[TestMethod, Owner("Erik"), TestCategory("Proven"), TestCategory("Unit")]
public void Foo_DoesSomething_Returns_True()
{
    var myFoo = new Foo();
    Assert.IsTrue(myFoo.DoesSomething());
}

[TestMethod, Owner("Erik"), TestCategory("Proven"), TestCategory("Integration")]
public void Foo_And_Bar_Both_Add_Elements_To_Shared_Dictionary()
{
    var myDictionary = new Dictionary<string, string>();
    var myBar = new Bar(myDictionary);
    var myFoo = new Foo(myDictionary);

    myBar.AddElementsToDict();
    myFoo.AddElementsToDict();

    Assert.IsTrue(myDictionary.Count > 1);
}

[TestMethod, Owner("Erik"), TestCategory("Integration")]
public void Foo_Sets_Global_State_From_Singleton()
{
    var myFoo = new Foo();
    myFoo.SetGlobalVariables(SomeSingleton.Instance);
    Assert.IsTrue(myFoo.IsGlobalSet);
}

Now, looking at this set of tests, you’ll notice the original, still-uncategorized test, a couple of proven tests (one unit, one integration), and a final test that is missing the label “Proven.” With the last test, we leave off the label proven because the class under test has global state and is thus going to be unpredictable and hard to test. Also, with that one, I’ve labeled it integration instead of unit because I consider anything referencing global state to be integration by definition. (As an aside, I would not personally introduce global, static state into a system, nor would I prefer to test classes in which it exists, but, as anyone knows, not all of the code that we have to deal with reflects our design choices or is our own creation.)

Now, for the build process itself, I’ve created the following batch script:

@echo off
rem Execute unit tests from the command line
rem By Erik


rem Change this if you want the result stashed in a directory besides ./
set resultsdir=TestResults

rem Change this if you want to test different categories
rem (the quotes keep the & from being treated as a command separator)
set "testset=Unit&Proven"

rem Change this if you want to change the output test name extension (default is Results.xml -- i.e. CustomControlsTestResults.xml)
set resultsextensions=Results.xml

rem If results dir does not exist, create it
if exist %resultsdir% goto direxists
mkdir %resultsdir%
:direxists

rem This allows using the 'file' variable in the for loop
setlocal EnableDelayedExpansion

rem This is the actual execution of the test run. Delete old results files, and then execute the MSTest exe
for /f %%f in ('dir /b *Test.dll') do (
    set testfile=%%f
    set resultsfile=!resultsdir!\!testfile:~0,-4!%resultsextensions%
    echo Testing !testfile! -- output to !resultsfile!
    del !resultsfile!
    "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /nologo /testcontainer:!testfile! /category:"%testset%" /resultsfile:!resultsfile!
)

echo Unit test run complete.

What this does is iterate through the current directory, looking for files that end in Test.dll. As such, it should either be modified or placed in the directory to which all unit test projects are deployed as part of the build. For each test project that it finds, it runs MS Test, applies the category filter from the top, and dumps the results in a file named after the project DLL with “Results.xml” appended (e.g., CustomControlsTestResults.xml). In this case, it will run all tests categorized as both “Unit” and “Proven.” However, this can easily be modified by changing the filter at the top of the script, per the MSTest.exe specification for the /category command line parameter (the filter supports and/or/not logic).

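For reference, here are a few filter expressions of the kind I mean, using the same hypothetical test DLL as before and assuming MSTest.exe is on the path (as I understand the /category switch, & means “and,” | means “or,” and ! means “not”):

rem Run only tests categorized as both Unit and Proven
MSTest.exe /testcontainer:MyProject.Test.dll /category:"Unit&Proven"

rem Run tests categorized as either Unit or Integration
MSTest.exe /testcontainer:MyProject.Test.dll /category:"Unit|Integration"

rem Run Proven tests, excluding anything also marked Integration
MSTest.exe /testcontainer:MyProject.Test.dll /category:"Proven&!Integration"
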
So, from here, incorporating the unit tests into the build will depend to some degree on the nature of your build technology, but it will probably be as simple as parsing the command output from the batch script, parsing the results .xml files, or simply checking the return code from the MSTest executable. Some tools may even implicitly know how to handle this.

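As a minimal sketch of that last option, the batch script above could track MSTest.exe’s exit code, which is nonzero when a test fails or the run cannot execute, and then report an overall verdict for the build tool to pick up. The anyfailures variable here is my own addition, not anything MS Test provides:

rem Before the for loop: start with no failures recorded
set anyfailures=

rem Inside the for loop, immediately after the MSTest.exe line:
if errorlevel 1 set anyfailures=1

rem After the loop: replace the final echo with an overall verdict
if defined anyfailures (
    echo One or more test runs failed.
    exit /b 1
)
echo Unit test run complete.
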
As I see it, this offers some nice perks. It is possible to allow unit tests to remain in the build even if they have not been perfected yet, and they need not be marked as “Inconclusive” or commented out. In addition, it is possible to have more nuanced build steps where, say, the proven tests are run daily by the build machine but unproven tests only weekly. And in the event of refactoring or requirements changes, unit tests that are broken as a result can be “demoted” from the build until such time as they can be repaired.

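To make the “demotion” idea concrete, pulling a test out of the “Unit&Proven” run is just a matter of deleting its Proven category. Here is the earlier hypothetical unit test after such a demotion:

// Demoted: no longer picked up by the "Unit&Proven" build run, but still
// compiled and runnable locally until the requirements change is sorted out.
[TestMethod, Owner("Erik"), TestCategory("Unit")]
public void Foo_DoesSomething_Returns_True()
{
    var myFoo = new Foo();
    Assert.IsTrue(myFoo.DoesSomething());
}
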
I’m sure that some inventive souls can take this further and do even cooler things with it. As I refine this process, I may revisit it in subsequent posts as well.