DaedTech

Stories about Software


TDD Even with DateTime.Now

Recently, I posted my incredulity at the idea that someone would go to the effort of writing unit tests and not source control them as a matter of course. I find that claim as absurd today as I did then, but I did happen on an interesting edge case where I purposely discarded a unit test I wrote during TDD that I would otherwise have kept.

I was writing a method that would take a year in integer form and populate a drop down list with all of the years starting with that one up through the current year. In this project, I don’t have access to an isolation framework like Moles or Typemock, so I have no way of making DateTime.Now return some canned value (check out this SO post for an involved discussion of the sensitivity of unit tests involving DateTime.Now).
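For what it's worth, when an isolation framework is off the table, one common alternative is to hide DateTime.Now behind a seam you control. Here's a minimal sketch of that idea; the IClock, SystemClock, and FixedClock names are my own invention for illustration, not anything from this project:

```csharp
using System;

// A hypothetical seam for the current time, so tests can control "Now".
public interface IClock
{
    DateTime Now { get; }
}

// Production implementation: defers to the real system clock.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// Test implementation: always returns the time it was constructed with.
public class FixedClock : IClock
{
    private readonly DateTime _now;
    public FixedClock(DateTime now) { _now = now; }
    public DateTime Now { get { return _now; } }
}
```

A class that takes an IClock in its constructor can then be pinned to any date a test likes, which is exactly what the tests below can't do with DateTime.Now directly.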

So, as I thought about what I wanted this method to do, and how to get there, I did something interesting. I wrote the following test:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Adds_Item_When_Passed_2012()
{
    var myFiller = new CalendarDropdownFiller(new DateTimeFormatInfo());
    var myList = new DropDownList();
    myFiller.SeedWithYearsSince(myList, 2012);

    Assert.AreEqual(1, myList.Items.Count);
}

To get this to pass, I changed SeedWithYearsSince() to add an arbitrary item to the list. The next test I wrote was:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Adds_Item_With_Text_2012_When_Passed_2012()
{
    var myFiller = new CalendarDropdownFiller(new DateTimeFormatInfo());
    var myList = new DropDownList();
    myFiller.SeedWithYearsSince(myList, 2012);

    Assert.AreEqual("2012", myList.Items[0].Value);
}

Now I had to actually add “2012” in the method, but the implementation was still pretty obtuse. To get serious, I wrote the following test:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Adds_Two_Items_When_Passed_2011()
{
    var myFiller = new CalendarDropdownFiller(new DateTimeFormatInfo());
    var myList = new DropDownList();
    myFiller.SeedWithYearsSince(myList, 2011);

    Assert.AreEqual(2, myList.Items.Count);
}

Now the method had to do something smart, so I wrote:

public virtual void SeedWithYearsSince(DropDownList list, int year)
{
    for (int index = year; index <= DateTime.Now.Year; index++)
        list.Items.Add(new ListItem(index.ToString()));
}

And, via TDD, I arrived at the gist of my method. (I would later write tests that passed in a null list and a negative year and verified that descriptive exceptions were thrown, but this is more or less the finished product.) But now, let's think about these unit tests vis-à-vis source control.

Of the three tests I've written, the first two should always pass unless I get around to finishing the time machine that I started building a few years back. We might consolidate those into a single test that's a little more meaningful, perhaps by dropping the first one. We might also tease out a few more cases here to guard against regressions, say proving that calling it with 2010 adds 2010, 2011 and 2012 or something. While I don't generally feel good about checking in tests that exercise code dependent on external state (like "Now"), we can feel pretty good about these given the nature of "Now".
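As a sketch of what a durable regression guard might look like, suppose the pure year-range computation were extracted away from the DropDownList so that "now" becomes an explicit parameter. The YearRange helper below is hypothetical -- nothing like it exists in the project above -- but it shows how the 2010-adds-three-years case could be pinned down without ever going stale:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical extraction: the year-range logic, separated from the control,
// takes the current year as an argument instead of consulting DateTime.Now.
public static class YearRange
{
    public static IEnumerable<string> Since(int startYear, int currentYear)
    {
        for (int year = startYear; year <= currentYear; year++)
            yield return year.ToString();
    }
}
```

A test could then assert that Since(2010, 2012) yields exactly "2010", "2011", and "2012", and that assertion holds on New Year's morning and every morning after.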

But that last test about 2 items when passed 2011 is only good for the remainder of 2012. When you wake up bright and early on New Year's morning and run to the office and kick off a test run, this test will fail. Clearly we don't want to check that test in, so all things being equal, we'll discard it. That's a bummer, but it's okay. The point of the unit tests written here was a design strategy -- test driven development. If we can't keep the artifacts of that because, say, we don't have access to an isolation framework or permission to use one, it's unfortunate, but c'est la vie. We'll check in the tests that we can and call it a day.

This same reasoning applies within the context of whatever restrictions are placed on you. Say you are assigned to a legacy codebase (using the Michael Feathers definition of "legacy" as code without unit tests) and do not have rights to add a test project, for whatever reason. Well, then write them to help you work, keep them around as best you can to help for as long as you can, and discard them when you have to. If you have a test project but not Moles or Typemock, you do what we did here. If you have code that you have to use that lacks seams, contains things like singletons/static methods, or otherwise presents testability problems, take the same approach. Better to test during TDD and discard than not to test at all, since you can at least guard against regressions and get fast feedback during initial development.

I've often heard people emphasize that TDD is a development methodology first and that the unit tests for source control are a nice ancillary benefit. I think the example of DateTime.Now really drives home that point. The fact that DateTime.Now (or legacy code, GUI code, threaded code, etc.) is fickle and hard to test need not block you from doing TDD. Clearly I think we should strive to write only meaningful tests and to keep them all around, but this isn't an all or nothing proposition. Make sure you're verifying your code first and foremost, preserve what you can, and seek to improve through increasingly decoupled code, better tooling, and more practice writing good tests.


Mock.Of() and Mock.Get() in Moq

Today, I’d like to highlight a couple of features of Moq that I didn’t know about until relatively recently (thanks to a recent Google+ hangout with Moq author, Daniel Cazzulino). Since learning about these features, I’ve been getting a lot of mileage out of them. But, in order to explain these two features and the different paradigm they represent, let me reference my normal use of Moq.

Let’s say we have some class PencilSharpener that takes an IPencil and sharpens it, and we want to verify that this is accomplished by setting the Pencil’s length and sharpness properties:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Sharpen_Sets_IsSharp_To_True()
{
    var myPencilDouble = new Mock<IPencil>();
    myPencilDouble.SetupProperty(pencil => pencil.IsSharp);
    myPencilDouble.Object.IsSharp = false;
    myPencilDouble.SetupProperty(pencil => pencil.Length);
    myPencilDouble.Object.Length = 12;

    var mySharpener = new PencilSharpener();
    mySharpener.Sharpen(myPencilDouble.Object);

    Assert.IsTrue(myPencilDouble.Object.IsSharp);
}

So, I create a test double for the pencil, and I do some setup on it, and then I pass it into my sharpener, after which I verify that the sharpener mutates it in an expected way. Fairly straightforward. I create the double and then I manipulate its setup before passing its object into my class under test. (Incidentally, I realize that I could call “SetupAllProperties()”, but I’m not doing that for illustrative purposes.)

But, sometimes I’d rather not think of the test double as a double, but just as some object that I’m passing in. That is, perhaps I don’t need to invoke any setup on it, and I just want to reason about the actual proxy implementation rather than some stub’s .Object property. Well, that’s where Mock.Of<>() comes in:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Sharpen_Sets_IsSharp_To_True()
{
    var myPencil = Mock.Of<IPencil>();
    myPencil.IsSharp = false;

    var mySharpener = new PencilSharpener();
    mySharpener.Sharpen(myPencil);

    Assert.IsTrue(myPencil.IsSharp);
}

Much cleaner, eh? I never knew I could do this, and I love it. In many tests now, I can reason about the object not as a Mock, but as a T, which is an enormous boost to readability when extensive setup is not required.

Ah, but Erik, what if you get buyer’s remorse? What if you have some test that starts off simple, and then, over time and some production cycles, you find that you need to verify it or do some setup? What if we have the test above, but the Sharpen() method of PencilSharpener suddenly makes a call to a new CanBeSharpened() method on IPencil that must suddenly return true? Do we need to scrap this approach and go back to the old way? Well, no, as it turns out:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Sharpen_Sets_IsSharp_To_True()
{
    var myPencil = Mock.Of<IPencil>();
    myPencil.IsSharp = false;
    Mock.Get(myPencil).Setup(pencil => pencil.CanBeSharpened()).Returns(true);

    var mySharpener = new PencilSharpener();
    mySharpener.Sharpen(myPencil);

    Assert.IsTrue(myPencil.IsSharp);
}

Notice the third line in this test. Mock.Get() takes some T and grabs the Mock<T> containing it for you, if applicable (you’ll get runtime exceptions if you try this on something that isn’t a Mock’s object). So, if you want to stay in the context of creating a T but you need to “cheat”, this gives you that ability.

The reason I find this so helpful is that I tend to pick one of these modes of thinking and stick with it for the duration of the test. If I’m creating a true mock with the framework — an elaborate test double with lots of customized returns and callbacks and events — I prefer to instantiate a new Mock(). If, on the other hand, the test double is relatively lightweight, I prefer to think of it simply as a T, even if I do need to “cheat” and make the odd setup or verify call on it. I find that this distinction aids a lot in readability, and I’m taking full advantage. I realize that one could simply retain a reference to the Mock and another to the T, but I’m not really much of a fan (though I’m sure I do it now and again). The problem with that, as I see it, is that you’re maintaining two levels of abstraction simultaneously, which is awkward and tends to be confusing for maintainers (or you, later).

Anyway, I hope that some of you will find this as useful as I did.

By the way, if you liked this post and you're new here, check out this page as a good place to start for more content that you might enjoy.


Using Moles with the System Assembly

Short but sweet post tonight, and I’m mostly putting this here for reference. It’s annoyed me a few times, and each time I have to Google around until I find something like this Stack Overflow post.

If you add a moles assembly for System, unlike with other framework assemblies, building your test project will blow up spectacularly with too many compiler errors to count. I don’t know all the whys and wherefores, but I do know that it’s easily remedied as hveiras says. You simply change:

    <Assembly Name="System" />

to

    <Assembly Name="System" ReflectionOnly="true" />

in the System.moles file that’s generated. That’s it.

Philosophically, it seems like a bit of an oversight that this would be necessary, as this never fails to fail out of the box. There may be something I’m missing, but it seems like it’d be nice if this attribute were added by default for the System assembly. In any case, I (and you) have it here for posterity now, though probably the act of explicitly typing it out has rendered it moot for me, as documenting things usually makes me remember them and not need my documentation... which reminds me -- I’m probably about due for a periodic re-reading of one of my all time favorite books.


MS Test Report Generator v1.1

In the comments of this post, Kym pointed out that I had been remiss in providing instructions, so I’ve now corrected that state of affairs. I’ve uploaded v1.1 to SourceForge, and the zip file includes an MSI installer as well as PDF and DOCX formats of instructions.

As I alluded to in the comments of my previous post, I got a little carried away once I started on the instructions. I made a few minor modifications to the logic for generating reports, and added TRX as an additional, acceptable extension. That is the default extension for MS Test results files, but during the course of my build process, the files get renamed to .xml, so I didn’t have the .trx whitelisted in my validation routines. But now, I do.

The biggest change, though, is the addition of an interactive GUI, pictured here from SourceForge:

MS Test Report Generator

As I was adding instructions, it occurred to me that most people wouldn’t be following my build process, so I added a WPF project to my existing solution and set about making a GUI for this over the last week in my spare time. My over-engineered, layered implementation actually came in handy because this required very little in the way of changes to the underlying architecture. I just had to add logic to it for finding solutions and test files on disk, and the rest was sufficient.

The GUI isn’t perfect, but it’s pretty functional. If people like it, I’ll be happy to add to it (or you’re welcome to snag the source and contribute yourself if you want -- it’s still available from the Subversion repository). The gist is that you pick a solution and the tool will show you test runs for that solution. You then choose a test run and generate a report for it.

As I became my first user, I decided that it’d be interesting to have this open while developing, and that prompted me to add a “refresh” button so that I could do a test run, refresh, and generate a report for my latest run. In this fashion, I actually spent a few days developing where I’d view all my test results with this tool instead of the Visual Studio “Test Results” window.

There is plenty of room for extension, which I’ll do if people are interested. I could poll the filesystem for new tests instead of a manual refresh, for instance. But incremental baby steps. This is one of a bunch of pet projects that I have, so it’s not exactly as if I’m flush with time to get crazy with it. I’ll let the response level guide how much work I put in.

In that vein, if you like it, or even if you don’t like it but want me to do something to make you like it, send an email or post a comment here or on SourceForge! Aside from monitoring the SourceForge download count (and who knows what that really says about the project), that’s my only way of getting feedback.


MS Test Report Generator

I’ve been working on a side project for a while, and today I uploaded it to Source Forge. The project is a tool that takes XML results generated by MS Test runs and turns them into HTML-based reports. I believe that TFS will do this for you if you have fully Microsoft-integrated everything, but, in the environment I’m currently working in, I don’t.

So I created this guy.

Back Story

I was working on a build process with a number of constraints that predated my tenure on the process itself. My task was to integrate unit tests into the build and have the build fail if the unit tests were not all in a passing state. The unit test harness was MS Test and the build tool was Final Builder.

During the course of this process, some unit tests had been in a failing state for some period of time. Many of these were written by people making a good faith effort but nibbling at an unfortunate amount of global state and singletons. These tests fail erratically, and some of the developers that wrote them fell out of the habit of running the tests, presumably due to frustration. I created a scheme where the build would only run tests decorated with the “TestCategory” attribute value “Proven”. In this fashion, people could write unit tests, ‘promote’ them to be part of the build, and have it as an opt-in situation. My reasoning here is that I didn’t want to deter people who were on the fence about testing by having them be responsible for failing the build because they didn’t know exactly what they were doing.

After poking around some, I saw that there was no native Final Builder action that accomplished what I wanted -- to execute a certain subset of tests and display a report. So I created my own batch scripts (not included in the project) that would execute the MS Test command line executable with filtering parameters. This produces XML-based output. I scan that output for the run result and, if it isn’t equal to “Passed”, I fail the build. From there, I generate a report using my custom utility so that people can see stats about the tests.
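A minimal sketch of that scan might look like the following. The ResultSummary element and outcome attribute are my assumption about the shape of the results XML, so check them against your actual output before relying on this:

```csharp
using System.Xml.Linq;

// Hedged sketch of the build gate: parse the MS Test results XML and report
// whether the overall run result equals "Passed".
public static class BuildGate
{
    public static bool RunPassed(string resultsXml)
    {
        var doc = XDocument.Parse(resultsXml);
        // Search by local name to sidestep XML namespace differences.
        foreach (var element in doc.Descendants())
        {
            if (element.Name.LocalName == "ResultSummary")
            {
                var outcome = (string)element.Attribute("outcome");
                return outcome == "Passed";
            }
        }
        return false; // no summary found: treat as a failed run
    }
}
```

The build script would call something like this and fail the build on a false return.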

Report Screenshot

Design

In my spare time, I decided to play around with some architectural concepts and new goodies added to C# in the .NET 4.0 release. Specifically, I added some default parameters and experimented with covariance and contravariance. In terms of architecture, I experimented with adding the IRepository pattern to an existing tiered architecture.

On the whole, the design is extensible, flexible, and highly modular. It has about 99.7% unit test coverage and, consequently, is complete overkill for a little command line utility designed to generate an HTML report. However, I was thinking bigger. Early on, I decided I wanted to put this on SourceForge. The reason I designed it the way I did was to allow for expansion of the utility into a GUI-driven application that can jack into a server database and maintain aggregate unit testing statistics on a project. Over the course of time, you can track and get reports on things like test passing rate, test addition rate, which developers write tests, etc. For that, the architecture of the application is very well suited.

The various testing domain objects are read into memory, and the XML test file and HTML output file are treated as just another kind of persistence. So in order to adapt the application in its current incarnation to, say, write the run results to a MySQL or SQL server database, it would only be necessary to add a few classes and modify main to persist the results.
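To make that concrete, here's a rough sketch of the shape such a repository seam might take. The type and member names are invented for illustration and don't reflect the actual classes in the project:

```csharp
using System.Collections.Generic;

// Hypothetical domain object: the result of one test run, independent of
// where it was read from or where it will be written.
public class TestRun
{
    public string Outcome { get; set; }
    public int TotalTests { get; set; }
}

// The persistence seam: a TRX/XML reader, an HTML writer, or a database
// writer would each be just another implementation of this interface.
public interface ITestRunRepository
{
    void Save(TestRun run);
    IList<TestRun> GetAll();
}

// In-memory stand-in; a MySQL or SQL Server version would implement the
// same interface, leaving the rest of the application untouched.
public class InMemoryTestRunRepository : ITestRunRepository
{
    private readonly List<TestRun> _runs = new List<TestRun>();
    public void Save(TestRun run) { _runs.Add(run); }
    public IList<TestRun> GetAll() { return _runs; }
}
```

With a design like this, swapping the persistence target really is just adding a class and changing what main wires up.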

Whether I actually do this or not is still up in the air and may depend upon how much, if any, interest it generates on SourceForge. If people would find it useful, both in general and specifically on projects I work on, I’m almost certain to do it. If no one cares, I have a lot of other projects to work on.

How to Access

You can download a zipped MSI installer from SourceForge.

If you want to look at the source code, you can do a checkout/export on the following SVN address: https://mstestreportgen.svn.sourceforge.net/svnroot/mstestreportgen

That will provide read-only access to the source.