Introduction to C# Lambda Expressions

Today, I’m going to post an excerpt of a tutorial I created for the use of Moq on an internal company wiki for training purposes. I presented today on mocking for unit tests and was revisiting some of my old documentation, which included an introductory, unofficial explanation of lambda expressions that doesn’t involve any kind of mathematical lingo about lambda calculus or the term “first class functions”. In short, I created this to help someone who’d never or rarely seen a lambda expression understand what they are.

Lambdas


Since Moq makes heavy use of lambda expressions, I’ll explain a bit about how those work here. Lambdas as a concept are inherently highly mathematical, but I’ll try to focus more on the practical aspects of the construct as they relate to C#.

In general, lambda expressions in the language are of the form (parameters) => (expression). You can think of this as a mapping and we might say that the semantics of this expression is “parameters maps to expression”. So, if we have x => 2*x, we would say “x maps to two times x.” (or, more generally, number maps to number).

In this sense, a lambda expression may be thought of in its most practical programming sense as a procedure. Above, our procedure is taking x and transforming it into 2*x. The “maps to” semantics might more colloquially be called “becomes”. So, the lambda expression x => 2*x translates to “x becomes 2 times x.”
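To make that concrete, a lambda can be assigned to a delegate type like Func and invoked like any other method. Here's a minimal sketch (the variable names are mine):

Func<int, int> doubler = x => 2 * x; //"x maps to two times x"
int myResult = doubler(5); //5 becomes 10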

Great. So, why do this? Well, it lets you do some powerful things in C#. Consider the following:

public List<int> FilterOut(List<int> list, int target)
{
  var myList = new List<int>();
  foreach(int myValue in list)
  {
    if(myValue != target)
    {
       myList.Add(myValue);
    }
  }
  return myList;
}

public void SomeMethod(List<int> values)
{
  var myNewList = FilterOut(values, 6);  //Get me a list without the sixes.
}

Here, we have a function that will filter an int value out of a list. It’s a pretty handy function, since it lets you pick the filter value instead of, say, filtering out a hard-coded value. But, let’s say some evil stakeholder comes along and says, “well, that’s great, but I want to be able to filter out two values.” So, you add a method overload that takes two filters, and duplicate your code. They then come along and say that they want to be able to filter out three values, and they also want to be able to filter out values that are less than or greater than a specified value. At this point, you take a week off because you know your code is about to get really ugly.

Except, lambda expressions to the rescue! What if we changed the game a little and told the stakeholder, “hey, pass in whatever you want for criteria.”

public List<int> FilterOut(List<int> list, Func<int, bool> filterCriteria)
{
  var myList = new List<int>();
  foreach(int myValue in list)
  {
    if(!filterCriteria(myValue))
    {
      myList.Add(myValue);
    }
  }
  return myList;
}

public void SomeMethod(List<int> values)
{
  var myNewList = FilterOut(values, x => x == 6);  //Get me a list without the sixes.
  myNewList = FilterOut(values, x => x == 6 || x == 7); //Get me a list without six or seven.
  myNewList = FilterOut(values, x => x > 6 && x < 12); //Get me a list with no values between 7 and 11.
 //... and anything else you can think of that takes an integer and returns a boolean
}

Now, you don’t have to change your filter code at all, no matter what the stakeholder asks for. You can go back and say to him, “hey, do whatever you want to that integer — I don’t care”. Instead of having him pass you an integer, you’re having him pass you something that says, “integer maps to bool” or “integer becomes bool”. And, you’re taking that mapping and applying it to each element of the list that he’s passing you. For elements being filtered out, the semantics is “integer becomes true” and for elements making the cut, “integer becomes false”. He’s passing in the mapping, and you’re doing him the service of applying it to the elements of the list he’s giving you.

In essence, lambda expressions and the mappings/procedures that they represent allow you to create algorithms on the fly, a la the strategy design pattern. This is perfect for writing code where you don’t know exactly how clients of your code want to go about mapping things — only that they do.
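Incidentally, the framework itself ships this exact idea: LINQ’s Where() extension method takes a Func<int, bool> and keeps the elements for which the predicate returns true. A rough equivalent of the first call above (note the negated criteria, since Where() keeps matches rather than filtering them out):

//Requires System.Linq
var myNewList = values.Where(x => x != 6).ToList(); //Get me a list without the sixes.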

As it relates to Moq, here’s a sneak peek. Moq features expressions like myStub.Setup(mockedType => mockedType.GetMeAnInt()).Returns(6);
What this is saying, behind the scenes, is “Setup my mock so that anyone who takes my mocked type and maps it to its GetMeAnInt() method gets a 6 back from it” or “Setup my mock so that the procedure MockedType.GetMeAnInt() returns 6.”
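In context, that setup lives in test code along these lines (IFoo and GetMeAnInt() are hypothetical stand-ins for whatever interface you’re mocking):

var myStub = new Mock<IFoo>(); //Moq's wrapper around the mocked type
myStub.Setup(mockedType => mockedType.GetMeAnInt()).Returns(6); //The lambda picks out the member being set up
int myValue = myStub.Object.GetMeAnInt(); //Returns 6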




Composite

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy

Up Front Definitions

  1. Component: This is the abstract object that represents any and all members of the pattern
  2. Composite: Derives from component and provides the definition for the class that contains other components
  3. Leaf: A concrete node that encapsulates data/behavior and is not a container for other components.
  4. Tree: For our context here, tree describes the structure of all of the components.

The Problem

Let’s say that you get a call to write a little console application that allows users to make directories, create empty files, and delete the same. The idea is that you can create the whole structure in memory and then dump it to disk in one fell swoop. Marketing thinks this will be a hot seller because it will allow users to avoid the annoyance of disk I/O overhead when planning out a folder structure (you might want to dust off your resume, by the way).

Simple enough, you say. For the first project sprint, you’re not going to worry about sub-directories – you’re just going to create something that will handle files and a single parent directory.

So, you create the following class (you probably create it with actual input validation, but I’m trying to stay focused on the design pattern):

public class DirectoryStructure
{
    private const string DefaultDirectory = ".";

    private readonly List<string> _filenames = new List<string>();

    private string _directory = DefaultDirectory;

    public string RootDirectory { get { return _directory; } set { _directory = string.IsNullOrEmpty(value) ? DefaultDirectory : value; } }

    public void AddFile(string filename)
    {
        _filenames.Add(filename);
        _filenames.Sort();
    }

    public void DeleteFile(string filename)
    {
        if (_filenames.Contains(filename))
        {
            _filenames.Remove(filename);
        }
    }

    public void PrintStructure()
    {
        Console.WriteLine("Starting in " + RootDirectory);
        _filenames.ForEach(filename => Console.WriteLine(filename));
    }

    public void CreateOnDisk()
    {
        if (!Directory.Exists(RootDirectory))
        {
            Directory.CreateDirectory(RootDirectory);
        }

        _filenames.ForEach(filename => File.Create(Path.Combine(RootDirectory, filename)));
    }
}

Client code is as follows:

static void Main(string[] args)
{
    var myStructure = new DirectoryStructure();

    myStructure.AddFile("file1.txt");
    myStructure.AddFile("file2.txt");
    myStructure.AddFile("file3.txt");
    myStructure.DeleteFile("file2.txt");

    myStructure.PrintStructure();

    Console.Read();

    myStructure.CreateOnDisk();
}

That’s all well and good, so you ship and start work on sprint number 2, which includes the story that users want to be able to create one level of sub-directories. You do a bit of refactoring, deciding to make root a constructor parameter instead of a settable property, and then get to work on this new story.

You wind up with the following:

public class DirectoryStructure
{
    private const string DefaultDirectory = ".";

    private readonly List<string> _filenames = new List<string>();
    private readonly List<string> _directories = new List<string>();
    private readonly Dictionary<string, List<string>> _subDirectoryFilenames = new Dictionary<string, List<string>>();

    private readonly string _root;

    public DirectoryStructure(string root = null)
    {
        _root = string.IsNullOrEmpty(root) ? DefaultDirectory : root;
    }


    public void AddFile(string filename)
    {
        _filenames.Add(filename);
        _filenames.Sort();
    }

    public void AddFile(string subDirectory, string filename)
    {
        if (!_directories.Contains(subDirectory))
        {
            AddDirectory(subDirectory);
        }
        _subDirectoryFilenames[subDirectory].Add(filename);
    }

    public void AddDirectory(string directoryName)
    {
        _directories.Add(directoryName);
        _subDirectoryFilenames[directoryName] = new List<string>();
    }

    public void DeleteDirectory(string directoryName)
    {
        if (_directories.Contains(directoryName))
        {
            _directories.Remove(directoryName);
            _subDirectoryFilenames[directoryName] = null;
        }
    }

    public void DeleteFile(string filename)
    {
        if (_filenames.Contains(filename))
        {
            _filenames.Remove(filename);
        }
    }

    public void DeleteFile(string directoryName, string filename)
    {
        if (_directories.Contains(directoryName) && _subDirectoryFilenames[directoryName].Contains(filename))
        {
            _subDirectoryFilenames[directoryName].Remove(filename);
        }
    }

    public void PrintStructure()
    {
        Console.WriteLine("Starting in " + _root);
        foreach (var myDir in _directories)
        {
            Console.WriteLine(myDir);
            _subDirectoryFilenames[myDir].ForEach(filename => Console.WriteLine("\t" + filename));
        }
        _filenames.ForEach(filename => Console.WriteLine(filename));
    }

    public void CreateOnDisk()
    {
        if (!Directory.Exists(_root))
        {
            Directory.CreateDirectory(_root);
        }

        foreach (var myDir in _directories)
        {
            Directory.CreateDirectory(Path.Combine(_root, myDir));
            _subDirectoryFilenames[myDir].ForEach(filename => File.Create(Path.Combine(_root, myDir, filename)));
        }
        _filenames.ForEach(filename => File.Create(Path.Combine(_root, filename)));
    }
}

and client code:

static void Main(string[] args)
{
    var myStructure = new DirectoryStructure();

    myStructure.AddFile("file1.txt");
    myStructure.AddDirectory("firstdir");
    myStructure.AddFile("firstdir", "hoowa");
    myStructure.AddDirectory("seconddir");
    myStructure.AddFile("seconddir", "hoowa");
    myStructure.DeleteDirectory("seconddir");
    myStructure.AddFile("file2.txt");
    myStructure.AddFile("file3.txt");
    myStructure.DeleteFile("file2.txt");

    myStructure.PrintStructure();

    Console.Read();

    myStructure.CreateOnDisk();
}

Yikes. That’s starting to smell. You’ve had to add overloads for adding file and deleting file, add methods for add/delete directory, and append logic to print and create. Basically, you’ve had to either touch or overload every method in the class. Generally, that’s a surefire sign that you’re doin’ it wrong. But, no time for that now because here comes the third sprint. This time, the business wants two levels of nesting. So, you get started and you see just how ugly things are going to get. I won’t provide all of your code here so that the blog can keep a PG rating, but here’s the first awful thing that you had to do:

private readonly List<string> _filenames = new List<string>();
private readonly List<string> _directories = new List<string>();
private readonly Dictionary<string, List<string>> _subDirectoryFilenames = new Dictionary<string, List<string>>();
private readonly Dictionary<string, List<string>> _subDirectoryNames;
private readonly Dictionary<string, Dictionary<string, List<string>>> _subSubDirectoryFilenames = new Dictionary<string, Dictionary<string, List<string>>>();

You also had to modify or overload every method yet again, bringing the method total to 12 and the complexity of each method to a larger figure. You’re pretty sure you can’t keep this up for an arbitrary number of sprints, so you send out your now-dusted-off resume and foist this stinker on the hapless person replacing you.

So, What to Do?

What went wrong here is relatively easy to trace. Let’s backtrack to the start of the second sprint, when we needed to support sub-directories. As we’ve seen in some previous posts in this series, the first foray at implementing the second sprint gives off a code smell: primitive obsession. This is the smell wherein bunches of primitive types (string, int, etc.) are pressed into service, ad hoc, as a stand-in for a missing class.

In this case, the smell is coming from the series of lists and dictionaries centering around strings to represent file and directory names. As the sprints go on, it’s time to recognize that there is a need for at least one class here, so let’s create it and call it “SpeculativeDirectory” (so as not to confuse it with the C# Directory class).

public class SpeculativeDirectory
{
    private const string DefaultDirectory = ".";

    private readonly HashSet<SpeculativeDirectory> _subDirectories = new HashSet<SpeculativeDirectory>();

    private readonly HashSet<string> _files = new HashSet<string>();

    private readonly string _name = string.Empty;
    public string Name { get { return _name; } }

    public SpeculativeDirectory(string name)
    {
        _name = string.IsNullOrEmpty(name) ? DefaultDirectory : name;
    }

    public SpeculativeDirectory GetDirectory(string directoryName)
    {
        return _subDirectories.FirstOrDefault(dir => dir.Name == directoryName);
    }

    public string GetFile(string filename)
    {
        return _files.FirstOrDefault(file => file == filename);
    }

    public void Add(string file)
    {
        if(!string.IsNullOrEmpty(file))
            _files.Add(file);
    }

    public void Add(SpeculativeDirectory directory)
    {
        if (directory != null && !string.IsNullOrEmpty(directory.Name))
        {
            _subDirectories.Add(directory);
        }
    }

    public void Delete(string file)
    {
        _files.Remove(file);
    }

    public void Delete(SpeculativeDirectory directory)
    {
        _subDirectories.Remove(directory);
    }

    public void PrintStructure(int depth)
    {
        string myTabs = new string(Enumerable.Repeat('\t', depth).ToArray());
        Console.WriteLine(myTabs + Name);

        foreach (var myDir in _subDirectories)
        {
            myDir.PrintStructure(depth + 1);
        }
        foreach (var myFile in _files)
        {
            Console.WriteLine(myTabs + "\t" + myFile);
        }
    }

    public void CreateOnDisk(string path)
    {
        string myPath = Path.Combine(path, Name);

        if (!Directory.Exists(myPath))
        {
            Directory.CreateDirectory(myPath);
        }

        _files.ToList().ForEach(file => File.Create(Path.Combine(myPath, file)));
        _subDirectories.ToList().ForEach(dir => dir.CreateOnDisk(myPath));
    }

}

And, the DirectoryStructure class is now:

public class DirectoryStructure
{
    private readonly SpeculativeDirectory _root;

    public DirectoryStructure(string root = null)
    {
        _root = new SpeculativeDirectory(root);
    }


    public void AddFile(string filename)
    {
        _root.Add(filename);
    }

    public void AddFile(string directoryName, string filename)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            myDirectory.Add(filename);
        }
    }

    public void AddDirectory(string directoryName)
    {
        _root.Add(new SpeculativeDirectory(directoryName));
    }

    public void DeleteFile(string filename)
    {
        _root.Delete(filename);
    }

    public void DeleteFile(string directoryName, string filename)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            myDirectory.Delete(filename);
        }
    }

    public void DeleteDirectory(string directoryName)
    {
        //Resolve to the directory overload of Delete(), not the file one
        _root.Delete(_root.GetDirectory(directoryName));
    }

    public void PrintStructure()
    {
        _root.PrintStructure(0);
    }

    public void CreateOnDisk()
    {
        _root.CreateOnDisk(string.Empty);
    }
}

The main program that invokes this class is unchanged. So, now, notice the structure of SpeculativeDirectory. It contains a collection of strings, representing files, and a collection of SpeculativeDirectory objects, representing sub-directories. For things like PrintStructure() and CreateOnDisk(), notice that we’re now taking advantage of recursion.

This is extremely important because what we’ve done here is to future-proof for sprint 3 much better than before. It’s still going to be ugly and involve more and more overloads, but at least it won’t require defining increasingly nested (and insane) dictionaries in the DirectoryStructure class.

Speaking of DirectoryStructure, does this class serve a purpose anymore? Notice that the answer is “no, not really”. It basically defines a root directory and wraps its operations. So, let’s get rid of that before we do anything else.

To do that, we can just change the client code to the following and delete DirectoryStructure:

static void Main(string[] args)
{
    var myDirectory = new SpeculativeDirectory(".");
    myDirectory.Add("file1.txt");
    myDirectory.Add(new SpeculativeDirectory("firstdir"));
    myDirectory.GetDirectory("firstdir").Add("hoowa");
    var mySecondDir = new SpeculativeDirectory("seconddir");
    myDirectory.Add(mySecondDir);
    myDirectory.GetDirectory("seconddir").Add("hoowa");
    myDirectory.Delete(mySecondDir);
    myDirectory.Add("file2.txt");
    myDirectory.Add("file3.txt");
    myDirectory.Delete("file2.txt");

    myDirectory.PrintStructure(0);

    Console.Read();

    myDirectory.CreateOnDisk(".");

}

Now, we’re directly using the directory object and we’ve removed a class in addition to cleaning things up. The API still isn’t perfect, but we’re gaining some ground. So, let’s turn our attention now to cleaning up SpeculativeDirectory. Notice that we have a bunch of method pairs: GetDirectory/GetFile, Add(Directory)/Add(string), Delete(Directory)/Delete(string). This kind of duplication is a code smell — we’re begging for polymorphism here.

Notice that we are performing operations routinely on SpeculativeDirectory and performing the same operations on the string representing a file. It is worth noting that if we had a structure where file and directory inherited from a common base or implemented a common interface, we could perform operations on them just once. And, as it turns out, this is the crux of the composite pattern.

Let’s see how that looks. First, we’ll define a SpeculativeFile object:

public class SpeculativeFile
{
    private readonly string _name;
    public string Name { get { return _name; } }

    public SpeculativeFile(string name)
    {
        _name = name ?? string.Empty;
    }

    public void Print(int depth)
    {
        string myTabs = new string(Enumerable.Repeat('\t', depth).ToArray());
        Console.WriteLine(myTabs + Name);
    }

    public void CreateOnDisk(string path)
    {
        File.Create(Path.Combine(path, _name));
    }
}

This is pretty simple and straightforward. The file class knows how to print itself and how to create itself on disk, and it knows that it has a name. Now our task is to have a common inheritance model for file and directory. We’ll go with an abstract base class since they are going to have common implementations and file won’t have an implementation, per se, for add and delete. Here is the common base:

public abstract class SpeculativeComponent
{
    private readonly string _name;
    public string Name { get { return _name; } }

    private readonly HashSet<SpeculativeComponent> _children = new HashSet<SpeculativeComponent>();
    protected HashSet<SpeculativeComponent> Children { get { return _children; } }

    public SpeculativeComponent(string name)
    {
        _name = name ?? string.Empty;
    }

    public virtual SpeculativeComponent GetChild(string name) { return null; }

    public virtual void Add(SpeculativeComponent component) { }

    public virtual void DeleteByName(string name) { }

    public void Print()
    {
        Print(0);
    }

    public void CreateOnDisk()
    {
        CreateOnDisk(string.Empty); //the recursive overload combines the path with Name itself
    }

    protected virtual void Print(int depth)
    {
        string myTabs = new string(Enumerable.Repeat('\t', depth).ToArray());
        Console.WriteLine(myTabs + Name);

        foreach (SpeculativeComponent myChild in _children)
        {
            myChild.Print(depth + 1);
        }
    }

    protected virtual void CreateOnDisk(string path)
    {
        foreach (var myChild in _children)
        {
            myChild.CreateOnDisk(Path.Combine(path, Name));
        }
    }
        
}

A few things to note here. First of all, our recursive Print() and CreateOnDisk() methods are each divided into two methods, one public and one protected. This continues to allow for recursive calls without awkwardly forcing the user to pass in zero or empty for depth/path. Notice also that common concerns for the two different types of nodes (file and directory) now live here, some stubbed as do-nothing virtuals and others implemented. The reason for this is conformance to the pattern — while files and directories share some overlap, some operations are clearly not meaningful for both (particularly adding/deleting and anything else regarding children). So, you do tend to wind up with the leaves (SpeculativeFile) ignoring inherited functionality, but this is generally a small price to pay for avoiding duplication and for the ability to recurse to n levels.

With this base class, we have pulled a good bit of functionality out of the file class, which is now this:

public class SpeculativeFile : SpeculativeComponent
{
    public SpeculativeFile(string name) : base(name) {}

    protected override void CreateOnDisk(string path)
    {
        File.Create(Path.Combine(path, Name));
        base.CreateOnDisk(path);
    }
}

Pretty simple. With this new base class, here is the new SpeculativeDirectory class:

public class SpeculativeDirectory : SpeculativeComponent
{
    public SpeculativeDirectory(string name) : base(name) { }

    public override SpeculativeComponent GetChild(string name)
    {
        return Children.FirstOrDefault(child => child.Name == name);
    }

    public override void Add(SpeculativeComponent child)
    {
        if(child != null)
            Children.Add(child);
    }

    public override void DeleteByName(string name)
    {
        var myMatchingChild = Children.FirstOrDefault(child => child.Name == name);
        if (myMatchingChild != null)
        {
            Children.Remove(myMatchingChild);
        }
    }

    protected override void CreateOnDisk(string path)
    {
        string myPath = Path.Combine(path, Name);
        if (!Directory.Exists(myPath))
        {
            Directory.CreateDirectory(myPath);
        }

        base.CreateOnDisk(path);
    }
}

Wow. A lot more focused and easy to reason about, huh? And, finally, here is the new API:

static void Main(string[] args)
{
    var myDirectory = new SpeculativeDirectory(".");
    myDirectory.Add(new SpeculativeFile("file1.txt"));
    myDirectory.Add(new SpeculativeDirectory("firstdir"));
    myDirectory.GetChild("firstdir").Add(new SpeculativeFile("hoowa"));
    myDirectory.Add(new SpeculativeDirectory("seconddir"));
    myDirectory.GetChild("seconddir").Add(new SpeculativeFile("hoowa"));
    myDirectory.DeleteByName("seconddir");
    myDirectory.Add(new SpeculativeFile("file2.txt"));
    myDirectory.Add(new SpeculativeFile("file3.txt"));
    myDirectory.DeleteByName("file2.txt");

    myDirectory.Print();

    Console.Read();

    myDirectory.CreateOnDisk();

}

Even the API has improved since our start. We’re no longer creating this unnatural “structure” object. Now, we just create a root directory and add things to it with simple API calls, in kind of a fluent interface.

Now, bear in mind that this is not as robust as it could be, but shoring that up is what you’ll do in sprint 3, since your sprint 2 implementation handles sub-directories at N depths of recursion rather than just one. 🙂

A More Official Explanation

According to dofactory, the Composite Pattern’s purpose is to:

Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.

What we’ve accomplished in rescuing the speculative directory creation app is to allow the main program to perform operations on nodes in a directory tree without caring whether they are files or directories (with the exception of actually creating them). This is most evident in the printing and writing to disk. Whether we created a single file or an entire directory hierarchy, we would treat it the same for creating on disk and for printing.

The elegant concept here is that we can build arbitrarily large structures with arbitrary conceptual tree structures and do things to them uniformly. This is important because it allows the encapsulation of tree node behaviors within the objects themselves. There is no master object like the original DirectoryStructure that has to walk the tree, deciding how to treat each element. Any given node in the tree knows how to treat itself and its sub-elements.

Other Quick Examples

Other places where one might use the composite pattern include:

  1. GUI composition where GUI widgets can be actual widgets or widget containers (Java Swing, WPF XAML, etc.).
  2. Complex Chain of Responsibility structures where some nodes handle events and others simply figure out who to hand them over to
  3. A menu structure where nodes can either be actions or sub-menus.

A Good Fit – When to Use

The pattern is useful when (1) you want to represent an object hierarchy and (2) there is some context in which you want to be able to ignore differences between the objects and treat them uniformly as a client. This is true in our example here in the context of printing the structure and writing it to disk. The client doesn’t care whether something is a file or a directory – he just wants to be able to iterate over the whole tree performing some operation.

Generally speaking, this is good to use anytime you find yourself looking at a conceptual tree-like structure and iterating over the whole thing in the context of a control flow statement (if or case). In our example, this was achieved indirectly by different method overloads, but the result in the end would have been the same if we had been looking at a single method with a bit of logic saying “if node is a file, do x, otherwise, do y”.
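In other words, composite is a candidate whenever you catch yourself writing a tree-walker shaped like this (a hypothetical sketch, not code from above):

//The smell: a master object deciding, per node, how to treat each element
foreach (var myNode in myNodes)
{
    if (myNode is SpeculativeFile)
    {
        //do file-specific work
    }
    else
    {
        //do directory-specific work and recurse into children
    }
}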

Square Peg, Round Hole – When Not to Use

There are some subtle considerations here. You don’t want to use this if all of your nodes can be the same type of object, such as the case of some simple tree structure. If you were, for example, creating a sorted tree for fast lookup, where each node had some children and a payload, this would not be an appropriate use of composite.

Another time you wouldn’t want to use this is if a tree structure is not appropriate for representing what you want to do. If our example had not had any concept of recursion and were only for representing a single directory, this would not have been appropriate.

So What? Why is this Better?

The code cleanup here speaks for itself. We were able to eliminate a bunch of method overloads (or conditional branching, if we had gone that way), making the code more maintainable. In addition, it allows elimination of a structure that rots as it grows — right out of the box, the composite pattern with its tree structure allows handling of arbitrarily deep and wide trees. And, finally, it allows clients to walk the tree structure without concerning themselves with what kind of node they’re processing or how to navigate to the next one.


“Classworthiness” and Poor Cohesion

Finding Seams in Code

The other day, someone asked me about how to “get at” certain things that he wanted to unit test in a class that he was writing. That is, he had some code that he wanted to extract from a public method and test, but his only solution was to expose the new helper method as public in order to do this. My response to this strategy is that it doesn’t make sense to alter the public interface of your class to accommodate testing. Your class has some ‘natural’ set of inputs and outputs, and those are what you want to test – elevating visibility, reflection, and other trickery is not the way to test the class, since that’s not how it will be used, eventually.

When asked what I would do instead, I replied that for simple helper methods, I just let the tests of their callers stand. For more complex ones, I pull functionality out into a new class and give the old class a private reference to the new class. This is where I got an interesting response — he didn’t think that there was “enough” to justify a new class.
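To sketch what I mean (the payroll names here are hypothetical, purely for illustration), the helper logic moves behind a collaborator that can then be tested through its own natural inputs and outputs:

public class WithholdingCalculator
{
    //Formerly a private helper method on Paycheck; now directly testable
    public decimal CalculateWithholding(decimal grossPay)
    {
        return grossPay * 0.3m; //illustrative math only
    }
}

public class Paycheck
{
    private readonly WithholdingCalculator _calculator = new WithholdingCalculator();

    public decimal GetNetPay(decimal grossPay)
    {
        return grossPay - _calculator.CalculateWithholding(grossPay);
    }
}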

I asked what constituted “enough”, and prefaced the question by saying that his was a common and understandable sentiment. Over the course of time, I’ve grown used to people having certain heuristics for when code is “classworthy”. Perhaps it’s somewhere between 3 and 20 methods. Or, maybe it’s a matter of lines of code — 50 lines is too little, but 1000 is too many. With enough of these heuristics in place, we could say that all methods should be 20 lines and all classes should be 10 methods and, therefore, 200 lines.

His response was what people’s response always seems to be (paraphrased): I can’t describe “classworthiness”, but I know it when I see it, and this isn’t it. I countered by offering a different kind of heuristic for “classworthiness”, but I’ll get to that later.

Cleaving Classes in Two

The next day, I found myself looking at the following code structure:

public class MyClass : IDoSomething, IDoSomethingUnrelated
{
    #region IDoSomethingStuff
    //IDoSomething fields
    //IDoSomething properties
    //IDoSomething methods

    #endregion

    #region IDoSomethingUnrelated
    //IDoSomethingUnrelated fields
    //IDoSomethingUnrelated properties
    //IDoSomethingUnrelated methods

    #endregion
}

What we had here was two classes inexplicably crammed into a single class. But, that wasn’t what was concerning me (though I wasn’t thrilled about it). My task was to add the IDoSomethingUnrelated interface implementation to the following:

public class MySecondClass : IDoSomething
{
    #region IDoSomethingStuff
    //IDoSomething fields
    //IDoSomething properties
    //IDoSomething methods

    #endregion
}

My first thought was the same thing that I hope yours is. Since there were two completely separate concerns in the first class, I would simply extract one of them to its own class and take it from there. Both classes still had to implement both interfaces (this was a non-negotiable constraint imposed on me), but at least I wouldn’t need to duplicate code, and I could make the internals of the class cleaner. The Boy Scout Rule for clean coding is an excellent one to follow.

However, there was a snag. Over the course of time, a handful of the fields had bled between the two classes masquerading as one. Not enough to give you the sense that it was anything but an accident and certainly not enough to make the concerns even remotely cohesive. It appeared to have happened by accident – expediency, one flag serving two purposes, laziness, that kind of thing.

As I was scowling at this and realizing that my exorcism of the possessed class was going to be more involved than I thought, a relationship between this task and my earlier conversation struck me.

A Root Cause of Poor Cohesion

So, how did this state of affairs come to be? Surely nobody looked at this code and thought, as they added the most recent handful of lines, “this is clean and awesome — go me!” Instead, somebody probably thought, “yuck, this is a pretty big class, but I’ll hold my nose and pile on just a touch” (the class was in the 1500-line neighborhood). Was I the first boy scout to get here and say “enough!”? Apparently, I was, or it wouldn’t have looked like that.

But, surely, someone else must have considered it. My guess is that they considered extracting a class, but that they decided against it when they found, as I had, that the mess wasn’t quite as easy to detangle as it appeared. This class had become just cohesive enough that refactoring was a pain, but not cohesive enough to be considered anything other than “almost completely lacking cohesion”.

It wasn’t always like that. At some point, this thing was easily separable (if at no other time, at least when it was mashed together initially). But, between then and now, the regioning was created, and fields/flags were carelessly re-appropriated depending on context. So, the real question is why someone didn’t see this as two separate classes right from the get-go. I traced the history of it back some and saw that when the two-classes-as-one paradigm was introduced, the class was relatively small compared to the rest of the code base. It has since grown steadily.

To think of it another way, when the class was first conceived, neither of its main responsibilities was “classworthy” on its own. Not enough methods in the interfaces, or not enough lines of code, or something. Better to reduce the class footprint and combine them, and we can always separate them when the chimera gets too big. Of course, the best laid plans of mice and men… the person who thought that was probably not the same person that started introducing unnecessary cross-coupling, and neither of those people was me, looking at this class thinking that it should be two classes and realizing that performing the separation surgery wasn’t going to be as easy as one might think.

Getting “Classworthy” Wrong

To return to my conversation about when to break out a new class, I responded that, to me, “classworthiness” had nothing to do with the number of methods, properties or fields, nothing to do with too many or too few lines of code, and nothing to do with how many classes exist in the solution at the moment. These are all red herrings when it comes to deciding what a class should be. A concept is “classworthy” to me if it encapsulates behavior and (usually) state, does one thing and does that thing well, and has only one reason to change (see the Single Responsibility Principle).

If you read about the SRP, you’ll notice that there’s nothing in there about how many lines of code or how many methods are in the class. Don’t get me wrong — I’m very leery of classes that are 500+ lines of code and have 20+ methods, but this is a symptom rather than the illness. Those classes are (usually) bad not because of any number, but because something with that much code and that many methods generally isn’t focused and cohesive — it’s generally not doing one thing and doing it well. It’s a Swiss Army Knife Class.


(Image from Derick Bailey at Los Techies).

I’ve never really understood the reluctance to create a new class. It’s as if classes were a limited resource like oil, and people want to conserve them. This isn’t limited to the person that I was talking to — it’s fairly pervasive, in my experience. I do understand that you can take my sentiment to an absurd extreme and create a class for every line of code or some such thing (though this would create different metrics problems by raising coupling), but that isn’t what I’m talking about. I’m talking about reluctance to factor out a class because it will only have 70 lines of code or it will only have two methods.

If you find yourself thinking this way, pull back for a minute and ask yourself whether the code you’re considering pulling out is a separate behavior or not. If you have a class that receives data transfer objects from the GUI and validates them, the “and” tells you that you are, in fact, doing two separate things. Perhaps the validation logic just checks two fields to see if they’re empty or not. Is it really worth a separate class for that?

Yes! Why not decouple validation from staging? You’re not going to deplete the world’s class reserves with that one class creation. But, what you will get is a scheme where the validation of your object is decoupled from the staging of that object and where, if one of those things changes, the other will not be affected. And, to tie this all back together, you also get a nice, new seam for unit testing.
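As a quick sketch of that decoupling (CustomerDto and both class names here are hypothetical):

public class CustomerDto
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public class DtoValidator
{
    //One reason to change: what "valid" means
    public bool IsValid(CustomerDto dto)
    {
        return dto != null && !string.IsNullOrEmpty(dto.Name) && !string.IsNullOrEmpty(dto.Email);
    }
}

public class DtoStager
{
    private readonly DtoValidator _validator = new DtoValidator();

    //One reason to change: how staging works
    public void Stage(CustomerDto dto)
    {
        if (_validator.IsValid(dto))
        {
            //...stage the object
        }
    }
}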


Command

Quick Information/Overview

Pattern Type: Behavioral
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy – Moderate

Up Front Definitions

  1. Invoker: This object services clients by exposing a method that takes a command as a parameter and invoking the command’s Execute() method
  2. Receiver: This is the object upon which commands are performed – its state is mutated by them

The Problem

Let’s say you get a request from management to write an internal tool. A lot of people throughout the organization deal with XML documents and nobody really likes dealing with them, so you’re tasked with writing an XML document builder. The user will be able to type in node names and pick where they go and whatnot. Let’s also assume (since this post is not about the mechanics of XML) that all XML documents consist of a root node called “Root” and only child nodes of root.

The first request that you get in is the aforementioned adding. So, knowing that you’ll be getting more requests, your first design decision is to create a DocumentBuilder class and have the adding implemented there.

/// This class is responsible for doing document build operations
public class DocumentBuilder
{
    /// This is the document that we're going to be modifying
    private readonly XDocument _document;

    /// Initializes a new instance of the DocumentBuilder class.
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// Add a node to the document
    public void AddNode(string elementName)
    {
        _document.Root.Add(new XElement(elementName));
    }
}

//Client code:
var myInvoker = new DocumentBuilder();
myInvoker.AddNode("Hoowa!");

So far, so good. Now, a request comes in that you need to be able to do undo and redo on your add operation. Well, that takes a little doing, but after 10 minutes or so, you’ve cranked out the following:

public class DocumentBuilder
{
    /// This is the document that we're going to be modifying
    private readonly XDocument _document;

    /// Store things here for undo
    private readonly Stack<string> _undoItems = new Stack<string>();

    /// Store things here for redo
    private readonly Stack<string> _redoItems = new Stack<string>();

    /// Initializes a new instance of the DocumentBuilder class.
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// Add a node to the document
    public void AddNode(string elementName)
    {
        _document.Root.Add(new XElement(elementName));
        _undoItems.Push(elementName);
        _redoItems.Clear();
    }

    /// Undo the number of operations given by steps
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myName = _undoItems.Pop();
            _document.Root.Elements(myName).Remove();
            _redoItems.Push(myName);
        }
    }

    /// Redo the number of operations given by steps
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myName = _redoItems.Pop();
            _document.Root.Add(new XElement(myName));
            _undoItems.Push(myName);
        }
    }
}
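Here's a quick, illustrative sketch of how a client exercises the undo/redo mechanics:

var myBuilder = new DocumentBuilder();
myBuilder.AddNode("First");
myBuilder.AddNode("Second");
myBuilder.Undo(1); //Pops "Second" off the undo stack and removes the element
myBuilder.Redo(1); //Pops it back off the redo stack and re-adds it
myBuilder.AddNode("Third"); //Starting a new "branch" clears the redo stack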

Not too shabby - things get popped from each stack and added to the other as you undo/redo, and the redo stack gets cleared when you start a new "branch". So, you're pretty proud of this implementation and you're all geared up for the next set of requests. And, here it comes. Now, the builder must be able to print the current document to the console. Hmm... that gets weird, since printing to the console is not really representable by a string in the stacks. The first thing you think of doing is making string.Empty represent a print operation, but that doesn't seem very robust, so you tinker and modify until you have the following:

public class DocumentBuilder
{
    /// This is the document that we're going to be modifying
    private readonly XDocument _document;

    /// This defines what type of operation that we're doing
    private enum OperationType
    {
        Add,
        Print
    }

    /// Store things here for undo
    private readonly Stack<Tuple<OperationType, string>> _undoItems = new Stack<Tuple<OperationType, string>>();

    /// Store things here for redo
    private readonly Stack<Tuple<OperationType, string>> _redoItems = new Stack<Tuple<OperationType, string>>();

    /// Initializes a new instance of the DocumentBuilder class.
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// Add a node to the document
    public void AddNode(string elementName)
    {
        _document.Root.Add(new XElement(elementName));
        _undoItems.Push(new Tuple<OperationType, string>(OperationType.Add, elementName));
        _redoItems.Clear();
    }

    /// Print out the document
    public void PrintDocument()
    {
        Print();

        _redoItems.Clear();
        _undoItems.Push(new Tuple<OperationType, string>(OperationType.Print, string.Empty));
    }

    /// Undo the previous steps operations
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myOperation = _undoItems.Pop();
            switch (myOperation.Item1)
            {
                case OperationType.Add:
                    _document.Root.Elements(myOperation.Item2).Remove();
                    _redoItems.Push(myOperation);
                    break;
                case OperationType.Print:
                    Console.Out.WriteLine("Sorry, but I really can't undo a print to screen.");
                    _redoItems.Push(myOperation);
                    break;
            }
        }
    }

    /// Redo the number of operations given by steps
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myOperation = _redoItems.Pop();
            switch (myOperation.Item1)
            {
                case OperationType.Add:
                    _document.Root.Add(new XElement(myOperation.Item2)); //redoing an add re-adds the element
                    _undoItems.Push(myOperation);
                    break;
                case OperationType.Print:
                    Print();
                    _undoItems.Push(myOperation);
                    break;
            }
        }
    }

    private void Print()
    {
        var myBuilder = new StringBuilder();
        Console.Out.WriteLine("\nDocument contents:\n");
        using (var myWriter = XmlWriter.Create(myBuilder, new XmlWriterSettings() { Indent = true, IndentChars = "\t" }))
        {
            _document.WriteTo(myWriter);
        }
        Console.WriteLine(myBuilder.ToString());
    }
}

Yikes, that's starting to smell a little. But, hey, you extracted a method for the print, and you're keeping things clean. Besides, you're fairly proud of your little tuple scheme for recording what kind of operation it was in addition to the node name. And, there's really no time for 20/20 hindsight because management loves it. You need to implement something that lets you update a node's name ASAP.

Oh, and by the way, they also want to be able to print the output to a file instead of the console. Oh, and by the by the way, you know what would be just terrific? If you could put something in to switch the position of two nodes in the file. They know it's a lot to ask right now, but you're a rock star and they know you can handle it.

So, you buy some Mountain Dew and pull an all nighter. You watch as the undo and redo case statements grow vertically and as your tuple grows horizontally. The tuple now has an op code and an element name like before, but it has a third argument that means the new name for update, and when the op-code is swap, the second and third arguments are the two nodes to swap. It's ugly (so ugly I'm not even going to code it for the example), but it works.

And, it's a success! Now, the feature requests really start piling up, and not only are stakeholders using your app, but other programmers have started using your API. There's really no time to reflect on the design now - you have a ton of new functionality to implement. And, as you do it, the number of methods in your builder will grow as each new feature is added, the size of the case statements in undo and redo will grow right along with it, and the logic for parsing your swiss-army-knife tuple will get more and more convoluted.

By the time this thing is feature complete, it's going to take a 45 page developer document to figure out what on Earth is going on. Time to start putting out resumes and jump off this sinking ship.

So, What to Do?

Before discussing what to do, let's first consider what went wrong. There are two main issues here that have contributed to the code rot. The first and most obvious is the decision to "wing it" with the Tuple solution that is, in effect, a poor man's type. Instead of a poor man's type, why not an actual type? The second issue is a bit more subtle, but equally important -- violation of the open/closed principle.

To elaborate, consider the original builder that simply added nodes to the XDocument and the subsequent change to implement undo and redo of this operation. By itself, this was fine and cohesive. But, when the requirements started to come in about more operations, this was the time to go in a different design direction. This may not be immediately obvious, but a good question to ask during development is "what happens if I get more requests like this?" When the class had "AddNode", "Undo" and "Redo", and the request for "PrintDocument" came in, it was worth noting that you were cobbling onto an existing class. It also would have been reasonable to ask, "what if I'm asked to add more operations?"

Asking this question would have resulted in the up-front realization that each new operation would require another method to be added to the class, and another case statement to be added to two existing methods. This is not a good design -- especially if you know more such requests are coming. Having an implementation where the procedure for accommodating new functionality is "tack another method onto class X" and/or "open method X and add more code" is a recipe for code rot.

So, let's consider what we could have done when the request for document print functionality came in. Instead of this tuple thing, let's create another implementation. What we're going to do is forget about creating Tuples and forget about the stacks of strings, and think in terms of a command object. Now, at the moment, we only have one command object, but we know that we've got a requirement that's going to call for a second one, so let's make it polymorphic. I'm going to introduce the following interface:

public interface IDocumentCommand
{
    /// Document (receiver) upon which to operate
    XDocument Document { get; set; }

    /// Execute the command
    void Execute();
        
    /// Revert the execution of the command
    void UndoExecute();

}

This is what will become the command in the command pattern. Notice that the interface defines two conceptual methods - execution and negation of the execution (which should look a lot like "do" and "undo"), and it's also going to be given the document upon which to do its dirty work.

Now, let's take a look at the add implementer:

public class AddNodeCommand : IDocumentCommand
{
    private readonly string _nodeName;

    private XDocument _document = new XDocument();
    public XDocument Document { get { return _document; } set { _document = value ?? _document; } }

    /// Initializes a new instance of the AddNodeCommand class.
    /// Note the constructor parameter here -- this is important.  This class is, conceptually,
    /// an action, so you're more used to seeing this sort of thing in method form.  We pass in the "method"
    /// parameters to the constructor because we're encapsulating an action as an object with state
    public AddNodeCommand(string nodeName)
    {
        _nodeName = nodeName ?? string.Empty;
    }
    public void Execute()
    {
        Document.Root.Add(new XElement(_nodeName));
    }

    public void UndoExecute()
    {
        Document.Root.Elements(_nodeName).Remove();
    }
}

Pretty straightforward (in fact a little too straightforward - in a real implementation, there should be some error checking about the state of the document). When created, this object is seeded with the name of the node that it's supposed to create. The document is a setter dependency, and the two operations mutate the XDocument, which is our "receiver" in the command pattern, according to the pattern's specification.

Let's have a look at what our new Builder implementation now looks like before adding print document:

public class DocumentBuilder
{
    /// This is the document that we're going to be modifying
    private readonly XDocument _document;

    /// Store things here for undo
    private readonly Stack<IDocumentCommand> _undoItems = new Stack<IDocumentCommand>();

    /// Store things here for redo
    private readonly Stack<IDocumentCommand> _redoItems = new Stack<IDocumentCommand>();

    /// Initializes a new instance of the DocumentBuilder class.
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// Add a node to the document
    public void AddNode(string elementName)
    {
        var myCommand = new AddNodeCommand(elementName)  { Document = _document };
        myCommand.Execute();
        _undoItems.Push(myCommand);
        _redoItems.Clear();
    }

    /// Undo the previous steps operations
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _undoItems.Pop();
            myCommand.UndoExecute();
            _redoItems.Push(myCommand);
        }
    }

    /// Redo the number of operations given by steps
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _redoItems.Pop();
            myCommand.Execute();
            _undoItems.Push(myCommand);
        }
    }
}

Notice that the changes to this class are subtle but interesting. We now have stacks of commands rather than strings (or, later, tuples). Notice that undo and redo now delegate the business of executing the command to the command object, rather than figuring out what kind of operation it is and doing it themselves. This is critical to conforming to the open/closed principle, as we'll see shortly.

Now that we've performed our refactoring, let's add the print document functionality. This is now going to be accomplished by a new implementation of IDocumentCommand:

public class PrintCommand : IDocumentCommand
{
    private XDocument _document = new XDocument();
    public XDocument Document { get { return _document; } set { _document = value ?? _document; } }

    /// Execute the print command
    public void Execute()
    {
        var myBuilder = new StringBuilder();
        Console.Out.WriteLine("\nDocument contents:\n");
        using (var myWriter = XmlWriter.Create(myBuilder, new XmlWriterSettings() { Indent = true, IndentChars = "\t" }))
        {
            Document.WriteTo(myWriter);
        }
        Console.WriteLine(myBuilder.ToString());
    }

    /// Undo the print command (which, you can't)
    public void UndoExecute()
    {
        Console.WriteLine("\nDude, you can't un-ring that bell.\n");
    }
}

Also pretty simple. Let's now take a look at how we implement this in our "invoker", the DocumentBuilder:

public class DocumentBuilder
{
    /// This is the document that we're doing to be modifying
    private readonly XDocument _document;

    /// Store things here for undo
    private readonly Stack<IDocumentCommand> _undoItems = new Stack<IDocumentCommand>();

    /// Store things here for redo
    private readonly Stack<IDocumentCommand> _redoItems = new Stack<IDocumentCommand>();

    /// Initializes a new instance of the DocumentBuilder class.
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// Add a node to the document
    public void AddNode(string elementName)
    {
        var myCommand = new AddNodeCommand(elementName) { Document = _document };
        myCommand.Execute();
        _undoItems.Push(myCommand);
        _redoItems.Clear();
    }

    /// Print the document
    public void PrintDocument()
    {
        var myCommand = new PrintCommand() { Document = _document};
        myCommand.Execute();
        _undoItems.Push(myCommand);
        _redoItems.Clear();
    }

    /// Undo the previous steps operations
    public void Undo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _undoItems.Pop();
            myCommand.UndoExecute();
            _redoItems.Push(myCommand);
        }
    }

    /// Redo the number of operations given by steps
    public void Redo(int steps)
    {
        for (int index = 0; index < steps; index++)
        {
            var myCommand = _redoItems.Pop();
            myCommand.Execute();
            _undoItems.Push(myCommand);
        }
    }
}

Lookin' good! Observe that undo and redo do not change at all. Our invoker now creates a command for each operation and delegates the work to it; the command, in turn, operates on the receiver on behalf of the client code. As we continue to add more commands, we do not ever have to modify undo and redo.

But, we still don't have it quite right. The fact that we need to add a new class and a new method each time a new command is added is still a violation of the open/closed principle, even though we're better off than before. The whole point of what we're doing here is separating the logic of command execution (and undo/redo and, perhaps later, indicating whether a command can currently be executed or not) from the particulars of the commands themselves. We're mostly there, but not quite - the invoker, DocumentBuilder, is still responsible for enumerating the different commands as methods and creating the actual command objects. The invoker is still too tightly coupled to the mechanics of the commands.

This is not hard to fix - pass the buck! Let's look at an implementation where the invoker, instead of creating commands in named methods, just demands the commands:

public class DocumentBuilder
{
    /// This is the document that the user will be dealing with
    private readonly XDocument _document;

    /// This houses commands for undo
    private readonly Stack<IDocumentCommand> _undoCommands = new Stack<IDocumentCommand>();

    /// This houses commands for redo
    private readonly Stack<IDocumentCommand> _redoCommands = new Stack<IDocumentCommand>();

    /// User can give us an xdocument or we can create our own
    public DocumentBuilder(XDocument document = null)
    {
        _document = document ?? new XDocument(new XElement("Root"));
    }

    /// Executes the given command
    public void Execute(IDocumentCommand command)
    {
        if (command == null) throw new ArgumentNullException("command", "nope");
        command.Document = _document;
        command.Execute();
        _redoCommands.Clear();
        _undoCommands.Push(command);
    }

    /// Perform the number of undos given by iterations
    public void Undo(int iterations)
    {
        for (int index = 0; index < iterations; index++)
        {
            if (_undoCommands.Count > 0)
            {
                var myCommand = _undoCommands.Pop();
                myCommand.UndoExecute();
                _redoCommands.Push(myCommand);
            }
        }
    }

    /// Perform the number of redos given by iterations
    public void Redo(int iterations)
    {
        for (int index = 0; index < iterations; index++)
        {
            if (_redoCommands.Count > 0)
            {
                var myCommand = _redoCommands.Pop();
                myCommand.Execute(); //redo re-executes the command
                _undoCommands.Push(myCommand);
            }
        }
    }
}

And, there we go. Observe that now, when new commands are to be added, all a maintenance programmer has to do is author a new class. That's a much better paradigm. Any bugs related to the mechanics of do/undo/redo are completely separate from the commands themselves.

Some might argue that the new invoker/DocumentBuilder lacks expressiveness in its API (having Execute(IDocumentCommand) instead of AddNode(string) and PrintDocument()), but I disagree:

var myInvoker = new DocumentBuilder();
myInvoker.Execute(new AddNodeCommand("Hoowa"));
myInvoker.Execute(new PrintCommand());
myInvoker.Undo(2);
myInvoker.Execute(new PrintCommand());

Execute(new AddNodeCommand(nodeName)) seems just as expressive to me as AddNode(nodeName), if slightly more verbose. But even if it's not, the tradeoff is worth it, in my book. You now have the ability to plug new commands in anytime by implementing the interface, and DocumentBuilder conforms to the open/closed principle -- it's only going to change if a bug is found in the do/undo/redo logic, not when you add new functionality (incidentally, having only one reason to change also makes it conform to the single responsibility principle).

A More Official Explanation

dofactory defines the command pattern this way:

Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.

The central, defining point of this pattern is the idea that a request or action should be an object. This is an important and not necessarily intuitive realization. The natural tendency would be to implement the kind of ad-hoc logic from the initial implementation, since we tend to think of objects as things like "Car" and "House" rather than concepts like "Add a node to a document".

But, this different thinking leads to the other part of the description - the ability to parameterize clients with different requests. What this means is that since the commands are stored as objects with state, they can encapsulate their own undo and redo, rather than forcing the invoker to do it. The parameterizing is the idea that the invoker operates on passed in command objects rather than doing specific things in response to named methods.

What is gained, then, is the ability to put commands into a stack, queue, list, etc., and operate on them without specifically knowing what it is they do. That is a powerful ability, since separating and decoupling responsibilities is often hard to accomplish when designing software.
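To see that with the types from this post, nothing stops client code from batching up heterogeneous commands and replaying them without knowing what any individual one does; a quick sketch:

//Queue up commands and replay them uniformly against the invoker
var myCommands = new Queue<IDocumentCommand>();
myCommands.Enqueue(new AddNodeCommand("First"));
myCommands.Enqueue(new AddNodeCommand("Second"));
myCommands.Enqueue(new PrintCommand());

var myInvoker = new DocumentBuilder();
while (myCommands.Count > 0)
{
    myInvoker.Execute(myCommands.Dequeue());
}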

Other Quick Examples

Here are some other places that the command pattern is used:

  1. The ICommand interface in C#/WPF for button click and other GUI events.
  2. Undo/redo operations in GUI applications (e.g., Ctrl-Z, Ctrl-Y).
  3. Implementing transactional logic for persistence (thus providing atomicity for rolling back)

A Good Fit – When to Use

Look to use the command pattern when there is a common set of "meta-operations" surrounding commands. That is, if you find yourself with requirements along the lines of "revert the last action, whatever that action may have been." This is an indicator that there are going to be operations on the commands themselves beyond simple execution. In scenarios like this, it makes sense to have polymorphic command objects that have some notion of state.

Square Peg, Round Hole – When Not to Use

As always, YAGNI applies. For example, if our document builder were only ever going to be responsible for adding nodes, this pattern would have been overkill. Don't go implementing the command pattern on any and all actions that your program may take -- the pattern incurs complexity overhead in the form of multiple objects and a group of polymorphs.

So What? Why is this Better?

As we've seen above, this makes code a lot cleaner in situations where it's relevant and it makes it conform to best practices (SOLID principles). I believe that you'll also find that, if you get comfortable with this pattern, you'll be more inclined to offer richer functionality to clients on actions that they may take.

That is, implementing undo/redo or atomicity may be something you'd resist or avoid as it would entail a great deal of complexity, but once you see that this need not be the case, you might be more willing or even proactive about it.

In sum, using this pattern where appropriate is better because it provides for cleaner code, fewer maintenance headaches, and more clarity.


Poor Man’s Automoq in .NET 4

So, the other day I mentioned to a coworker that I was working on a tool that would wrap Moq and provide expressive test double names. I then mentioned the tool AutoMoq, and he showed me something that he was doing. It’s very simple:

[TestClass]
public class FeatureResetServiceTest
{
    #region Builders

    private static FeatureResetService BuildTarget(IFeatureLocator locator = null)
    {
        var myLocator = locator ?? new Mock<IFeatureLocator>().Object;
        return new FeatureResetService(myLocator);
    }

    #endregion

    /// If you're going out of your way to pass null instead of empty, something is wrong
    [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
    public void ResetToDefault_Throws_NullArgumentException()
    {
        var myService = BuildTarget();

        ExtendedAssert.Throws<ArgumentNullException>(() => myService.ResetFeaturesToDefault(null));
    }
}
The concept here is very simple. If you’re using a dependency injection scheme (manual or IoC container), your classes may evolve to need additional dependencies, which sucks if you’re declaring them inline in every method. This means that you need to engage in a flurry of find/replace, which is a deterrent from changing your constructor signature, even when that’s the best way to go.

This solution, while not perfect, definitely eliminates that in a lot of cases. The BuildTarget() method takes an optional parameter (hence the requirement for .NET 4) for each constructor parameter, and if said parameter is not supplied, it creates a simple mock.
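And a test that actually cares about a given dependency can simply pass in its own configured double, while every other test stays oblivious (the test body here is illustrative):

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void ResetToDefault_Uses_Locator()
{
    var myLocator = new Mock<IFeatureLocator>();
    //...configure myLocator.Setup(...) as the test requires...

    var myService = BuildTarget(myLocator.Object);

    //...exercise myService and assert, perhaps with myLocator.Verify(...)...
}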

There are some obvious shortcomings — if you remove a reference from your constructor, you still have to go through all of your tests and remove the extra setup code for that dependency, for instance, but this is still much better than the old fashioned way.

I’ve now adopted this practice where suitable and am finding that I like it.

(ExtendedAssert is a utility that I wrote to address what I consider shortcomings in MSTest).