DaedTech

Stories about Software


MS Test Report Generator

I’ve been working on a side project for a while, and today I uploaded it to SourceForge. The project is a tool that takes the XML results generated by MS Test runs and turns them into HTML-based reports. I believe that TFS will do this for you if you have fully Microsoft-integrated everything, but in the environment I’m currently working in, I don’t.

So I created this guy.

Back Story

I was working on a build process with a number of constraints that predated my tenure on the process itself. My task was to integrate unit tests into the build and have the build fail if the unit tests were not all in a passing state. The unit test harness was MS Test and the build tool was Final Builder.

During the course of this process, some unit tests had been in a failing state for some period of time. Many of these were written by people making a good faith effort but running up against an unfortunate amount of global state and singletons. These tests failed erratically, and some of the developers who wrote them fell out of the habit of running them, presumably out of frustration. I created a scheme where the build would only run tests decorated with the “TestType” attribute “Proven”. In this fashion, people could write unit tests, ‘promote’ them to be part of the build, and treat the whole thing as opt-in. My reasoning here was that I didn’t want to deter people who were on the fence about testing by making them responsible for failing the build before they knew exactly what they were doing.
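To give a rough idea of what a ‘promoted’ test looks like, here is a minimal sketch. I’m using the TestCategory attribute as a stand-in for the marker the build filters on (the actual scheme keys off a “TestType” value of “Proven”), so treat the specifics as illustrative rather than a verbatim copy of the build’s convention.

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PromotedTests
{
    //Only tests carrying the "Proven" marker get picked up by the build's test run;
    //undecorated tests stay out of the build until their author promotes them
    [TestMethod]
    [TestCategory("Proven")]
    public void Promoted_Test_Runs_During_The_Build()
    {
        Assert.IsTrue(true); //Real assertion elided
    }
}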

After poking around some, I saw that there was no native Final Builder action that accomplished what I wanted: to execute a certain subset of tests and display a report. So I created my own batch scripts (not included in the project) that execute the MS Test command line executable with filtering parameters. This produces XML output. I scan that output for the run result and, if it isn’t equal to “Passed”, I fail the build. From there, I generate a report using my custom utility so that people can see stats about the tests.
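For anyone curious about the “scan that output” step, here is a minimal sketch of the idea. The element and attribute names are from memory of the MS Test results format and may not match your output exactly; the point is simply to pull the overall outcome out of the XML and translate it into an exit code the build tool can act on.

using System;
using System.Linq;
using System.Xml.Linq;

public class RunResultChecker
{
    public static int Main(string[] args)
    {
        var results = XDocument.Load(args[0]);

        //The results file uses an XML namespace, so match on local name to sidestep it
        var summary = results.Descendants().FirstOrDefault(element => element.Name.LocalName == "ResultSummary");
        var outcome = summary == null ? "Unknown" : (string)summary.Attribute("outcome");

        Console.WriteLine("Run outcome: " + outcome);

        //A non-zero exit code is what tells the build step to fail
        return outcome == "Passed" ? 0 : 1;
    }
}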

Report Screenshot

Design

In my spare time, I decided to play around with some architectural concepts and new goodies added to C# in the .NET 4.0 release. Specifically, I added some default parameters and experimented with covariance and contravariance. In terms of architecture, I experimented with adding the IRepository pattern to an existing tiered architecture.

On the whole, the design is extensible, flexible, and highly modular. It has roughly 99.7% unit test coverage and, consequently, is complete overkill for a little command line utility designed to generate an HTML report. However, I was thinking bigger. Early on, I decided I wanted to put this on SourceForge. The reason I designed it the way I did was to allow for expansion of the utility into a GUI-driven application that can jack into a server database and maintain aggregate unit testing statistics on a project. Over time, you could track and get reports on things like test passing rate, test addition rate, which developers write tests, etc. For that, the architecture of the application is very well suited.

The various testing domain objects are read into memory, and the XML test file and HTML output file are treated as just another kind of persistence. So, to adapt the application in its current incarnation to write the run results to, say, a MySQL or SQL Server database, it would only be necessary to add a few classes and modify Main to persist the results.
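As a sketch of what that adaptation might look like, consider the following. The names here are hypothetical (they aren’t lifted from the actual source); the point is that the HTML report and a database would just be two implementations of the same persistence seam.

//Hypothetical names for illustration only
public class TestRun { /* in-memory domain object holding the run's results */ }

public interface ITestRunRepository
{
    void Save(TestRun run);
}

//Roughly what the utility does today: render the run as an HTML report
public class HtmlReportRepository : ITestRunRepository
{
    public void Save(TestRun run) { /* HTML rendering elided */ }
}

//The hypothetical addition: persist the same object graph to a database
public class SqlServerTestRunRepository : ITestRunRepository
{
    public void Save(TestRun run) { /* inserts for the run, its tests, and their outcomes elided */ }
}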

Whether I actually do this or not is still up in the air and may depend on how much interest, if any, it generates on SourceForge. If people would find it useful, both in general and specifically on projects I work on, I’m almost certain to do it. If no one cares, I have plenty of other projects to work on.

How to Access

You can download a zipped MSI installer from SourceForge.

If you want to look at the source code, you can do a checkout/export on the following SVN address: https://mstestreportgen.svn.sourceforge.net/svnroot/mstestreportgen

That will provide read-only access to the source.


Adapter

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Relatively simple

Up Front Definitions

There are no special definitions that I’ll use here not defined inline.

The Problem

In keeping with the vehicle theme from the previous post, let’s say that we have an application that models vehicle operation. Among other things, we have an abstract base vehicle class and some concrete vehicles, “car” and “boat.” The base class defines an abstract “Start()” method, and the inheritors override this to define their own starting logic. For the sake of discussion, let’s say the purpose of this application is to graphically model wear and tear on vehicles as they make their way around. That is, we need to be able to move the vehicles around and simulate how their internal components hold up over time. A “Start()” method is vital to this.

Here is a very simple view of what this might look like:

public abstract class Vehicle
{
    public abstract void Start();
}
public class Car : Vehicle
{
    public override void Start()
    {

        //Car start logic elided (probably involves firing the starter object)
    }
}
public class Boat : Vehicle
{
    public override void Start()
    {
        //Boat start logic elided (maybe we're a motor boat and this involves pulling a rip-cord)
    }
}

As you can see, the details of everything but this inheritance relationship are omitted. Now, let’s say that this code was written a while ago and that all of the internal vehicle logic resides in an assembly that compiles to Vehicles.dll. Your company builds this and deploys it, and hundreds of users in the field are using it.

The client code for this is in another assembly or assemblies, and it contains various code and classes that do things like this:

public class VehicleSimulator
{
    private List<Vehicle> _fleet;

    public VehicleSimulator(List<Vehicle> fleet)
    {
        _fleet = fleet;
    }

    public void DeployFleet()
    {
        _fleet.ForEach(fl => fl.Start()); //Start the vehicles
        //Logic for sending them on their merry way elided
    }

}

In other words, the client code is using the vehicle code polymorphically. Let’s also impose some political constraints here. Let’s say that the client code that you deploy is written by Group A in your department and the vehicle code is written by Group B. Group B is in high demand and can’t be bothered to change Vehicles.DLL unless it’s absolutely necessary and unavoidable. Let’s say that we’re part of Group A, and a new requirement comes in.

The bigwigs at the company have just purchased a smaller company that models tanks for the military in the same sort of way that we do with other vehicles. We need to incorporate their Tank implementation into our vehicle modeling software. The now-defunct company has Tank.DLL, and Tank.DLL contains the following code:

public class Tank
{
    public void FireUpTheTank()
    {
        //Clearly, you do something more hardcore than "start" a tank
    }
}

Let’s also say that the bigwigs laid off everybody in the now acquired company, so there’s no real reliable access to their source code and build in the short term. In other words, because of the logistics of the acquisition, we’re stuck with the Tank.cs class as-is, and because of internal politics, we’re stuck with the Vehicle class as-is. We have to use those classes, and the only thing we can change is our client code. (This may be a little contrived, but it’s surprising how often things work out exactly this way in real world coding environments–the arbitrary and silly is often a fact of life.)

So we need to deploy our fleet, which now includes a tank. Let’s see how that looks:

public class VehicleSimulator
{
    private List<Vehicle> _fleet;
    private List<Tank> _tanks;

    public VehicleSimulator(List<Vehicle> fleet, List<Tank> tanks)
    {
        _fleet = fleet;
        _tanks = tanks;
    }

    public void DeployFleet()
    {
        _fleet.ForEach(fl => fl.Start()); //Start the vehicles
        _tanks.ForEach(tank => tank.FireUpTheTank());
        //Logic for sending them on their merry way elided
    }

}

Well, there you have it. We’ve doubled our code and gotten the job done. Now, every time we add new functionality, we’ll just have to add it in parallel for vehicle and tank. So, let’s build and ship this thing, and… wait a second! That’s absolutely ridiculous. We’re rampantly duplicating logic and creating a maintenance nightmare. We’re completely defeating the purpose of the polymorphism of vehicle. There has to be a better way to do this.

So, What to Do?

This is where the Adapter pattern comes in. What we really want here is a way to trick the existing API into thinking that Tank is just another vehicle. That shouldn’t be too hard, since Tank is just another vehicle.

So let’s try something. We’ll just create our own Tank called “TankAdapter” and include it in our client code assembly. We have access to the public members of Vehicles.DLL, so we’ll just define our own inheritor:

public class TankAdapter : Vehicle
{
    public override void Start()
    {
        //All well and good, but we need to start an actual tank at some point
    }
}

Now we’ve partially solved the problem. We can create a vehicle simulator as follows:

var myFleet = new List<Vehicle>() { new Car(), new Boat(), new TankAdapter() };
var mySimulator = new VehicleSimulator(myFleet);

This keeps our polymorphism intact and restores the Simulator class’s internals to sanity. With TankAdapter, we have no need to create and maintain the special case for Tank.cs. However, it doesn’t actually do anything yet.

Pulling back and looking at the adapter metaphor in general, adapters are used to hook up two pipes or cords that don’t fit together in and of themselves. So you get a thingamajig with two sides: one that fits with your first pipe and one that fits with your second. What we’ve done here is craft a TankAdapter part that fits onto the Vehicle “pipe.” We now need to make it fit onto the Tank “pipe.”

To do that, we simply give the TankAdapter a Tank object and map the Vehicle methods to the appropriate Tank methods. In this case, it would look like this:

public class TankAdapter : Vehicle
{
    private Tank _wrappedTank;

    public TankAdapter(Tank wrappedTank)
    {
        _wrappedTank = wrappedTank;
    }

    public override void Start()
    {
        _wrappedTank.FireUpTheTank();
    }
}

And, there you have it. The adapter fits both “pipes,” and we had no need to change either one to make this happen. You’ve met your external requirements without compromising the design of the code that you do control. Now you can ship it.

A More Official Explanation

An implementation of the Adapter pattern is going to consist of a few basic entities. The Client is a class or piece of code that is using some Target. A desire to add to the functionality of the Target emerges, but the new class to be used, the Adaptee, is not compatible with the Client. The Client expects a different interface from the one that the Adaptee provides. So an Adapter is introduced that understands what the Client wants and what the Adaptee provides, and it maps Client desires to Adaptee functions.

In our example, VehicleSimulator is the Client, Vehicle is the Target, Tank is the Adaptee, and TankAdapter is the Adapter. VehicleSimulator wants to use Tank, but it can’t, given Tank’s interface. We explored the possibility of changing VehicleSimulator, but decided that the needed changes were simply too burdensome if there were another way. So, not wanting to change the Client, and not being able to change the Target or Adaptee, we introduced the Adapter.

Other Quick Examples

  1. One place you may see this a lot is code that deals with a GUI. For instance, let’s say that you have a bunch of controls that you hide when something happens behind the scenes. Let’s also say you’ve inherited some kind of custom control that you now want to hide, but it doesn’t expose a “Visibility” property. A very common solution here would be to define your own custom control that wraps the existing one and figures out how to hide it.
  2. We’ve talked about this in the context of inheritance, but this also applies to interface polymorphism. Let’s say you have a series of classes that implement IDisposable, and that’s how you clean up after yourself in some presentation layer code: aggregate the controls and invoke their “Dispose()” methods. If you’re then offered a one-off that doesn’t implement IDisposable, you can define a class that does and wrap your one-off in it to keep with the pattern (see the sketch after this list).
  3. The null object pattern is a very specific example of the Adapter pattern.
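
To make the second example concrete, here is a minimal sketch. The LegacyControl class is hypothetical; it stands in for any one-off that does its cleanup through some method of its own choosing rather than through IDisposable.

using System;

//Hypothetical one-off that cleans up through its own method instead of Dispose()
public class LegacyControl
{
    public void ReleaseResources() { /* cleanup elided */ }
}

//Adapter that lets the one-off participate in "aggregate and Dispose()" client code
public class LegacyControlAdapter : IDisposable
{
    private LegacyControl _wrappedControl;

    public LegacyControlAdapter(LegacyControl wrappedControl)
    {
        _wrappedControl = wrappedControl;
    }

    public void Dispose()
    {
        _wrappedControl.ReleaseResources();
    }
}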

A Good Fit – When to Use

The Adapter pattern makes sense when you have two components that need to interact with one another and you’re limited or even prohibited in terms of your ability to change either component. This limitation may be the result of things that we’ve discussed, such as requirements, politics, and good design practices.

Square Peg, Round Hole – When Not to Use

The Adapter Pattern is a hack. Make no mistake about that. It’s a pretty hack, and it’s in the GOF book, but it’s a hack nonetheless. I would advise against using it when you have the ability to change the components that you’re contemplating adapting.

For example, if in our example we had the ability to modify Tank.cs, it would have made a lot more sense to have it inherit from Vehicle. Even if legacy code depended on its method names, we could simply have defined a “Start()” method in Tank.cs that called “FireUpTheTank()”, as sketched below. Alternatively, if we had access to Vehicles.DLL, it might have made sense to define our own Tank class there and port the functionality from Tank.cs.
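
For illustration, that first alternative might look something like the following sketch, assuming we could actually touch Tank.cs:

public class Tank : Vehicle
{
    public override void Start()
    {
        FireUpTheTank();
    }

    //Kept around for any legacy code that depends on the old method name
    public void FireUpTheTank()
    {
        //Clearly, you do something more hardcore than "start" a tank
    }
}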

In my estimation, either of those options is better than using the Adapter pattern. This mimics adapters in the pipe analogy too–if you bring in a plumber and he’s rebuilding a whole piping system from scratch, why bother cobbling a bunch of old pipes together when they aren’t designed for that? Better just to get the right pipe in the first place.

So What? Why is this Better?

The Adapter Pattern is a better alternative to littering some client code with a bunch of conditional logic. As you saw in the example above, introducing a one-off can create havoc in client code. In our example that maintained a parallel set of Vehicles and Tanks, imagine the maintenance on that class if Vehicle had many methods and many clients. You would have to do that doubling of effort each and every place any method or property on Vehicle was invoked.


Abstract Factory

Introduction

This is going to be the first post in a new series that I’m doing on design patterns. Now, I’m certainly not the first person to post a taxonomy of design patterns and explain them, but I would like to put a different spin on the usual. Normally, I see postings that contain a UML diagram and then verbiage like “encapsulates indirection in a creational paradigm between concrete implementers and….” To put it another way, I see definitions that are useful if and only if you already know the definition. So, my idea here is to relate the design patterns in the context of a story…


Precondition and Invariant Checking

The Problem

If you find yourself immersed in a code base that plays fast and loose with APIs and internal contracts between components, you might get into the habit of writing a lot of defensive code. This applies to pretty much any programming language, but I’ll give examples here in C#, since that’s what I code in most of the time these days. For example, this is probably familiar:


public class MyClass
{

  public Baz ClassBaz { get; set; }

  public void DoSomething(Foo foo, Bar bar)
  {
     if(foo == null) 
     { 
        throw new ArgumentNullException("foo"); 
     }
     if(bar == null)
     {
        throw new ArgumentNullException("bar");
     }
     if(ClassBaz == null)
     {
         throw new InvalidOperationException("Object invariant violation - baz cannot be null");
     }

     //Whew - finally, get down to business using foo, bar, and baz
  }
}

That’s an awful lot of boilerplate for what might be a very simple method. Repeating this all over the place may be necessary at times, but that doesn’t make it pretty or desirable. In an earlier post, I wrote about Microsoft’s Code Contracts as an excellent way to make this pattern a lot more expressive and manageable in terms of lines of code. However, for those times that you don’t have Code Contracts or the equivalent in a different language, I have created the pattern I’m posting about here.

Poor Man’s Code Contracts

As I got tired of writing the code above on a project that I’m currently working on where we do not bring in the Code Contracts DLL, I started thinking about ways to make this sort of checking less verbose. The first thing I did was to start changing my classes on an individual basis to look like this:


public class MyClass
{

  public Baz ClassBaz { get; set; }

  public void DoSomething(Foo foo, Bar bar)
  {
     VerifyArgumentOrThrow(foo);
     VerifyArgumentOrThrow(bar);
     VerifyOrThrow();
     //Get down to business using foo, bar, and baz
  }

  // This guy validates class invariants
  private void VerifyOrThrow()
  {
      VerifyArgumentOrThrow(ClassBaz);
  }

   private void VerifyArgumentOrThrow<T>(T arg) where T : class
  {
     if(arg == null)
     {
        throw new ArgumentNullException("MyClass precondition/invariant");
     }
  }
}

If you’re not familiar with the concept of generics, you can check out this link: generics. Basically, the idea is similar to templating in C++. You define a class or a method that takes a type to-be-defined-later, and it performs some operations on that type. Whenever you have List<Foo>, you’re using the List class, whose signature is actually List<T>. The “where” clause on the method just puts some constraints on what types the compiler will allow: here I’m restricting it to reference (i.e. class) types, since value types are never null and this check wouldn’t be appropriate for them. If you try to pass an int to that method, the compiler will complain.

Anyway, that implementation doesn’t save a lot of code here, but imagine if MyClass had ten methods. And imagine if there were 400 MyClass-like classes. Then the savings in terms of LOC really add up. I was pretty happy with this for a few classes, and then my code smell sense started telling me that I shouldn’t be writing that generic method over and over again.

In the interest of DRY (Don’t Repeat Yourself)

So, what to do? For problem solving, I usually try to think of the simplest thing that will work and then find its faults and ways to improve on it. In this case, the simplest thing that would work would be to make VerifyArgumentOrThrow() a static method. Then I could call it from anywhere and it would only be written once.
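
That simplest thing would look something like the sketch below (the class name is mine, picked just to have somewhere to hang the method):

using System;

public static class ArgumentChecker
{
    public static void VerifyArgumentOrThrow<T>(T arg) where T : class
    {
        if (arg == null)
        {
            throw new ArgumentNullException("precondition/invariant");
        }
    }
}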

Generally, I try to avoid static methods as much as possible because of their negative effect on testability and the fact that they’re procedural in nature. Sometimes they’re unavoidable, but I’m not a big fan. However, I generally try not to let dogma guide me, so instead I tried to think of the actual downside of making this method static.

What if I want a behavior where, sometimes, depending on context, DoSomething() doesn’t throw an exception? I still want to verify/validate the parameters, but I just don’t want an exception (maybe I want to log the problem instead, or maybe I have some kind of backup strategy). Well, with a static method, I would have to define another static method and then have the class determine which method to use. It would have to be passed some kind of flag or else read some kind of global state to see which handling strategy was desired. As more verification strategies emerged, this would get increasingly rigid and ugly. That was really all of the justification that I needed–I decided to implement an instance class.

The ‘Finished’ Product

This solution is what I’ve been using lately. I’m hesitant to call it finished because like anything else, I’m always open to improvements. But this has been serving me well.


public class ArgumentValidator
{
   public virtual void VerifyNonNull<T>(T arg) where T : class
   {
      VerifyNonNull(arg, "Invalid argument or member.");
   }

   public virtual void VerifyNonNull<T>(T arg, string message) where T : class
   {
      if(arg == null)
      {
        throw new ArgumentNullException(message);
      }
   }
}

public class MyClass
{

  private ArgumentValidator _validator = new ArgumentValidator();
  public ArgumentValidator Validator { get { return _validator; } set { _validator = value ?? _validator; } }

  public Baz ClassBaz { get; set; }

  public void DoSomething(Foo foo, Bar bar)
  {
     _validator.VerifyNonNull(foo);
     _validator.VerifyNonNull(bar);
     VerifyOrThrow();
     //Get down to business using foo, bar, and baz
  }

  // This guy validates class invariants
  private void VerifyOrThrow()
  {
      _validator.VerifyNonNull(ClassBaz, "ClassBaz cannot be null!");
  }
}

With this scheme, you supply some kind of default validation behavior (and give clients the option to specify an exception message). Here, the default is to throw ArgumentNullException. But you can inherit from ArgumentValidator and do whatever you want. You could have a LoggingValidator that overrides the methods and writes to a log file. You could have an inheritor that throws a different kind of exception or one that supplies some kind of default (maybe creating a new T using Activator). And you can swap all of these in for one another on the fly (or allow clients to do the same, the way MyClass does), allowing context-dependent validation and mocking for test purposes.
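
As an example of the kind of inheritor I mean, here is a hypothetical LoggingValidator; the logging mechanism is just a stand-in, and the class isn’t part of the code above:

using System;

public class LoggingValidator : ArgumentValidator
{
    public override void VerifyNonNull<T>(T arg, string message)
    {
        if (arg == null)
        {
            Console.Error.WriteLine("Validation failure: " + message); //Or write to your log file of choice
        }
    }
}

Swapping it in is then just a matter of setting myObject.Validator = new LoggingValidator() on whatever instance needs the alternate behavior.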

Other Ideas

Another thought would be to have IArgumentValidator and a series of implementers. I didn’t choose to go this route, but I see nothing wrong with it whatsoever. I chose the inheritance model because I wanted to be sure there was some default behavior (without resorting to extension methods or other kinds of compiler trickery), but this could also be accomplished with an IArgumentValidator and a default implementer. It just puts a bit more burden on the clients, as they’ll need to know about the interface as well as know which implementation is the default (or, potentially, supply their own).
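
That interface-based flavor might be sketched like so (again, hypothetical names, not code from the post above):

using System;

public interface IArgumentValidator
{
    void VerifyNonNull<T>(T arg) where T : class;
    void VerifyNonNull<T>(T arg, string message) where T : class;
}

//A default implementer that preserves the throw-on-null behavior
public class ThrowingArgumentValidator : IArgumentValidator
{
    public void VerifyNonNull<T>(T arg) where T : class
    {
        VerifyNonNull(arg, "Invalid argument or member.");
    }

    public void VerifyNonNull<T>(T arg, string message) where T : class
    {
        if (arg == null)
        {
            throw new ArgumentNullException(message);
        }
    }
}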

I’ve also, at times, added more methods to the validator. For instance, in one implementation, I have something called ISelfValidating for entity objects that allows them to express that they have a valid/invalid internal state, through an IsValid property. With that, I added a VerifyValid(ISelfValidating) method signature and threw exceptions on IsValid being false. I’ve also added something that verifies that strings are not null or empty. My only word of caution here would be to avoid making this validator do too much. If it comes down to it, you can group validation responsibilities into multiple kinds of validators with good internal cohesion.
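
Here is a rough sketch of the self-validation piece, with member names approximating what I described rather than copied from the actual implementation; whether this lives on the base validator or an inheritor is a matter of taste, and I show an inheritor here.

using System;

public interface ISelfValidating
{
    bool IsValid { get; }
}

public class EntityValidator : ArgumentValidator
{
    public virtual void VerifyValid(ISelfValidating entity)
    {
        VerifyNonNull(entity);
        if (!entity.IsValid)
        {
            throw new InvalidOperationException("Entity is in an invalid internal state.");
        }
    }
}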


Microsoft Pex

I’ve recently started to play around with Microsoft Pex, and I’m discovering (1) that there is a non-trivial learning curve; and (2) that it seems like a pretty powerful tool. As best I can tell at this early stage (and admittedly, I’m certainly a neophyte), there seem to be two major components. The first is the notion of parameterized unit testing (PUT) and the second is automated test generation.

For anyone not familiar with parameterized unit testing, the idea is that, rather than writing the first bit of code below, you can write something that conceptually mirrors the second bit, except with better syntax. In the .NET world, I believe that MbUnit supplies this syntax natively (in the form of the “row test”) and that users are left to reconstruct it themselves with NUnit and MS Test. I’m too far removed at the moment from unit testing suites in Java or C++ to comment with any authority on those languages and PUTs.

public void SingleValueTest()
{
    var myFoo = new Foo();
    Assert.AreNotEqual(1, myFoo.FooIntProperty);
}

public void ParameterizedTest(int rangeStart, int rangeStop)
{
    var myFoo = new Foo();
    for(int index = rangeStart; index < rangeStop; index++)
        Assert.AreNotEqual(index, myFoo.FooIntProperty);
}

This second example is likely something that we’ve all done, with or without some supporting syntax from the test library. Obviously, unit test methods don’t take parameters, so you might have coded up some standard helper like the method above and called it from actual, decorated test methods (or, if you’re like me, you created an assert extension class that grows as you go). Pex supplies a series of method-decorating attributes, and some of them appear oriented toward generating tests with these semantics: they pepper your SUT (system under test) with ranges of values and see how it behaves.
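
For flavor, a hand-written Pex parameterized test looks roughly like the following. I’m still early in the learning curve, so treat the attribute usage as approximate; Foo here is the same made-up class from the snippets above.

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[PexClass(typeof(Foo))]
public class FooExplorationTests
{
    //Pex chooses values for 'candidate' by exploring Foo, rather than my supplying a range by hand
    [PexMethod]
    public void FooIntPropertyNeverMatches(int candidate)
    {
        var myFoo = new Foo();
        Assert.AreNotEqual(candidate, myFoo.FooIntProperty);
    }
}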

But the really snazzy part of Pex that you can’t mimic with helper methods is that it analyzes your code, looking for interesting values. That is, you don’t supply “rangeStart” and “rangeStop” to Pex–it deduces them on its own, based on static and dynamic analysis of your code, and tries them out. The first time I ran Pex, it found all sorts of exception-generating conditions in code that I had thought to be fairly bulletproof, given that I had added a whole slew of contracts using Code Contracts to check preconditions and invariants and enforce post conditions. While somewhat humbling, that is extremely helpful. It’s better than the best possible manual code review (for seeing potential exceptions, at least), and better than the manual tests you generate. It’s automatically executing your code and guiding it towards exception-generating conditions.

Making things even better is the fact that Pex presents the results and gives you the option to actually generate unit tests (in your framework of choice) based on the exceptions it finds. It refers to this as promoting the tests. So the process flow is that you run a “Pex Exploration,” which does dynamic analysis (mixed, I believe, with some forms of static analysis) to find exceptional conditions. It then shows you a list of the results of its parameterized runs and whether they passed, failed, or were inconclusive (there is a configurable timeout for each exploration) and allows you to pick and choose which results you want to promote to unit tests saved in files in your codebase.

One thing that I find a bit off-putting is the fact that the tests it generates are strewn across a number of “.g.cs” partial classes, at least with MS Test. That is really just a matter of personal taste, and I can probably get used to it. It seems to generate the partials based on the method in your code that it’s exploring, handle the setup for the call there, and then invoke the main class to actually call the method in question. This is a little confusing, as the setup occurs in a more specific class and the actual test assert is somewhere else. However, when you open the test from the test result window (which is what I’ve been doing), you can set a breakpoint and step through the heavily attribute-decorated Pex boilerplate, winding up in your own code and seeing the exceptions. And that’s really what matters.

As I play more with this apparently slick and powerful tool, I’ll have more to say about it. I’m playing with it in conjunction with Code Contracts, and there seems to be a natural synergy between these products. I’ll have more to post on both subjects in this series of posts.