DaedTech

Stories about Software


Just Starting with JustMock

A New Mocking Tool

In life, I feel that it’s easiest to understand something if you know multiple ways of accomplishing/using/doing/etc it. Today I decided to apply that reasoning to automatic mocking tools for .NET. I’m already quite familiar with Moq and have posted about it a number of times in the past. When I program in Java, I use Mockito, so while I do have experience with multiple mocking tools, I only have experience with one in the .NET world. To remedy this state of affairs and gain some perspective, I’ve started playing around with JustMock by Telerik.

There are two versions of JustMock: “Lite” and “Elevated.” JustMock Lite is equivalent to Moq in its functionality: able to mock things for which there are natural mocking seams, such as interfaces and inheritable classes. The “Elevated” version provides the behavior for which I had historically used Moles — it is an isolation framework. I’ve been meaning to take the latter for a test drive at some point, since the R&D tool Moles has given way to Microsoft “Fakes” as of VS 2012. Fakes ships with Microsoft libraries (yay!) but is only available with VS Ultimate (boo!).

My First Mock

Installing JustMock is a snap. Search for it in Nuget, install it to your test project, and you’re done. Once you have it in place, the API is nicely discoverable. For my first mocking task (doing TDD on a WPF front-end for my Autotask Query Explorer), I wanted to verify that a view model was invoking a service method for logging in. The first thing I do is create a mock instance of the service with Mock.Create<T>(). Intuitive enough. Next, I want to tell the mock that I’m expecting a Login(string, string) method to be called on it. This is accomplished using Mock.Arrange().MustBeCalled(). Finally, I perform the actual act on my class under test and then make an assertion on the mock, using Mock.Assert().

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Execute_Invokes_Service_Login()
{
    var mockService = Mock.Create<ILoginService>();
    Target = new LoginViewModel(mockService) { Username = "asdf", Password = "fdsa" };
    Mock.Arrange(() => mockService.Login("asdf", "fdsa")).MustBeCalled();

    Target.Login.Execute(null);

    Mock.Assert(mockService);
}

A couple of things jump out here, particularly if you’re coming from a background using Moq, as I am. First, the semantics of the JustMock methods more tightly follow the “Arrange, Act, Assert” convention as evidenced by the necessity of invoking Arrange() and Assert() methods from the JustMock assembly.

The second thing that jumps out is the relative simplicity of assertion versus arrangement. In my experience with other mocking frameworks, there is a tendency to do comparably minimal setup and have a comparably involved assertion. Conceptually, the narrative would be something like “make the mock service not bomb out when Login() is called and later we’ll assert on the mock that some method called login was called with username x and password y and it was called one time.” With this framework, we’re doing all that description up front and then in the Assert() we’re just saying “make sure the things we stipulated before actually happened.”

One thing that impressed me a lot was that I was able to write my first JustMock test without reading a tutorial. As regular readers know I consider this to be a strong indicator of well-crafted software. One thing I wasn’t as thrilled about was how many overloads there were for each method that I did find. Regular readers also know I’m not a huge fan of that.

But at least they aren’t creational overloads, and I suppose you have to pay the piper somewhere — either lots of methods/classes in Intellisense or lots of overloads. In fairness, the overloads haven’t actually been a problem in my eyes; I haven’t explored or been annoyed by them at all. I just saw “+10 overloads” in Intellisense and thought, “whoa, yikes!”

Another cool thing that I noticed right off the bat was how helpful and descriptive the feedback was when the conditions set forth in Arrange() didn’t occur:

[Screenshot: JustMock's exception message describing the unmet arrangement]

It may seem like a no-brainer, but getting an exception that’s helpful both in its type and message is refreshing. That’s the kind of exception I look at and immediately exclaim “oh, I see what the problem is!”

Matchers

If you read my code critically with a clean code eye in the previous section, you should have a bone to pick with me. In my defense, this snippet was taken post red-green and pre-refactor. Can you guess what it is? How about the redundant string literals in the test — “asdf” and “fdsa” are repeated twice as the username and password, respectively. That’s icky. But before I pull local variables to use there, I want to stop and consider something. For the purpose of this test, given its title, I don’t actually care what parameters the Login() method receives — I only care that it’s called. As such, I need a way to tell the mocking framework that I expect this method to be called with some parameters — any parameters. In the world of mocking, this notion of a placeholder is often referred to as a “Matcher” (I believe this is the Mockito term as well).

In JustMock, this is again refreshingly easy. I want to be able to specify exact matches if I so choose, but also to be able to say “match any string” or “match strings that are not null or empty” or “match strings with this custom pattern.” Take a look at the semantics to make this happen:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Execute_Invokes_Service_Login()
{
    Target = new LoginViewModel(Service) { Username = "asdf", Password = "fdsa" };
    Mock.Arrange(() => Service.Login(
        Arg.IsAny<string>(),
        Arg.Matches<string>(s => !string.IsNullOrEmpty(s))
        )).MustBeCalled();

    Target.Login.Execute(null);

    Mock.Assert(Service);
}

For illustration purposes I’ve inserted line breaks in a way that isn’t normally my style. Look at the Arg.IsAny and Arg.Matches lines. What this arrangement says is “the mock’s Login() method must be called with any string for the username parameter and any string that isn’t null or empty for the password parameter.” Hats off to you, JustMock — as a reader of this code, I find that pretty darn readable, discoverable, and intuitive.

Loose or Strict?

In mocking there is a notion of “loose” versus “strict” mocking. The former is a scenario where some sort of default behavior is supplied by the mocking framework for any methods or properties that may be invoked. So in our example, it would be perfectly valid to call the service’s Login() method whether or not the mock had been set up in any way regarding this method. With strict mocking, the same cannot be said — invoking a method that had not been set up/arranged would result in a runtime exception. JustMock defaults to loose mocking, which is my preference.
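If you do want strict semantics, JustMock lets you opt in per mock when you create it. A minimal sketch, assuming a hypothetical ILoginService interface (the interface and method names here are mine, not from the project above):

```csharp
using System;
using Telerik.JustMock;

// Hypothetical service interface, standing in for the real one.
public interface ILoginService
{
    void Login(string username, string password);
    void Logout();
}

public static class StrictVersusLooseDemo
{
    public static void Demonstrate()
    {
        // Loose (the default): un-arranged calls are quietly tolerated.
        var looseService = Mock.Create<ILoginService>();
        looseService.Logout(); // fine — default do-nothing behavior

        // Strict: any call that was never arranged throws at runtime.
        var strictService = Mock.Create<ILoginService>(Behavior.Strict);
        strictService.Logout(); // throws, because Logout() was not arranged
    }
}
```

The Behavior argument is what flips the switch; everything else about arranging and asserting stays the same.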

Static Methods with Mock as Parameter

Another thing I really like about JustMock is that you arrange and query mock objects by passing them to static methods, rather than invoking instance methods on them. As someone who tends to be extremely leery of static methods, it feels strange to say this, but the thing that I like about it is how it removes the need to context switch as to whether you’re dealing with the mock object itself or the “stub wrapper”. In Moq, for instance, mocking occurs by wrapping the actual object that is the mocking target inside of another class instance, with that outer class handling the setup concerns and information recording for verification. While this makes conceptual sense, it turns out to be rather cumbersome to switch contexts for setting up/verifying and actual usage. Do you keep an instance of the mock around locally or the wrapper stub? JustMock addresses this by having you keep an instance only of the mock object and then letting you invoke different static methods for different contexts.

Conclusion

I’m definitely intrigued enough to keep using this. The tool seems powerful and usage is quite straightforward, intuitive and discoverable. Look for more posts about JustMock in the future, including perhaps some comparisons and a full fledged endorsement, if applicable (i.e. I continue to enjoy it), when I’ve used it for more than a few hours.


Linq Order By When You Have Property Name

Without reflection, we go blindly on our way, creating more unintended consequences, and failing to achieve anything useful.
–Margaret J. Wheatley

Ordering By a Column Name

Quick tip today in case anyone runs into this. Frequently you have some strongly typed object and you want to order by some property on that object. No problem — Linq’s IEnumerable<T>.OrderBy() to the rescue. But what about when you don’t have a strongly typed object at runtime and you only have the property’s name?

In a little project I’m working on at the moment, this came up. In this project, I’m parsing SQL queries (a subset of SQL, anyway) and translating these queries into web service requests for Autotask. All of the Autotask web service’s entities are children of a base class simply called Entity. Entities have ids in common, but little else. So the situation is that I’m going to get a query of the form “SELECT * FROM Account ORDER BY AccountName” (i.e. just a string), and I’m going to have to pull a series of strongly typed objects out of the API and figure out how to sort them by “AccountName” at runtime. The tricky part is that I don’t know at compile time what object type I’ll be getting back, much less which property on that type I’ll be using to sort. So something like entities.OrderBy(e => e.AccountName) is obviously right out.

So what we need is a way of mapping the string to a property and then matching that property to a strongly typed value on the object that can be used for ordering.

private static IEnumerable<Entity> OrderBy(IEnumerable<Entity> entities, string propertyName)
{
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    var propertyInfo = entities.First().GetType().GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}

This method first checks a couple of preconditions: that an actual value was supplied for the property name (obviously) and that any entities exist for sorting. This last one might seem a little strange, but it makes sense when you think about it. The reason it makes sense, if you’ll recall my post on type variance, is that the type of the enumerable is generic and strictly a compile-time designation. As such, this method is going to be compiled as IEnumerable<Entity> rather than IEnumerable<Account> or any other derivative.

Now, if you did this:

private static IEnumerable<T> OrderBy<T>(IEnumerable<T> entities, string propertyName)
{
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    var propertyInfo = typeof(T).GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}

…you would have a problem. Since T is going to be compiled as Entity, you’re going to be looking for properties of the derived class using the type information associated with the base class, which will fail, causing the returned propertyInfo to be null and then a null reference exception on the next line. Since we have no way of knowing at compile time what sort of entity we’re going to have, we have to check at run time. And, in order to do that, we need an actual instance of an entity. If we just have an empty enumerable, this is strictly unknowable.
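To make that concrete, here is a minimal, self-contained sketch of the difference; Entity and Account here are stand-ins for the actual Autotask types:

```csharp
using System;

public class Entity { public long id { get; set; } }
public class Account : Entity { public string AccountName { get; set; } }

public static class ReflectionDemo
{
    public static void Main()
    {
        Entity entity = new Account { AccountName = "Acme" };

        // Compile-time route: GetProperty looks at Entity, which has no AccountName.
        var viaBaseType = typeof(Entity).GetProperty("AccountName");
        Console.WriteLine(viaBaseType == null); // True -- this is the null that would blow up later

        // Run-time route: GetType() sees the actual Account instance.
        var viaInstance = entity.GetType().GetProperty("AccountName");
        Console.WriteLine(viaInstance.GetValue(entity, null)); // Acme
    }
}
```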

My solution here is a private static method because I have no use for it (yet) in any other scope or class. But, if you were so inclined you could create an extension method pretty easily:

public static IEnumerable<T> OrderBy<T>(this IEnumerable<T> entities, string propertyName)
{
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    var propertyInfo = entities.First().GetType().GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}

If you were going to do this, I’d suggest making the method a tad more robust, however, as it might get a variety of interesting edge cases thrown at it.
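As one sketch of what that hardening might look like — the name OrderByProperty is mine, chosen to avoid colliding with Linq’s own OrderBy — you can guard against a property that doesn’t exist or can’t be read, rather than letting a null propertyInfo blow up later:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class EnumerableExtensions
{
    public static IEnumerable<T> OrderByProperty<T>(this IEnumerable<T> entities, string propertyName)
    {
        if (entities == null)
            throw new ArgumentNullException("entities");
        if (!entities.Any() || string.IsNullOrEmpty(propertyName))
            return entities;

        var propertyInfo = entities.First().GetType().GetProperty(
            propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);

        // No such property, or it's write-only? Leave the sequence alone instead of crashing.
        if (propertyInfo == null || !propertyInfo.CanRead)
            return entities;

        return entities.OrderBy(e => propertyInfo.GetValue(e, null));
    }
}
```

Whether “return unsorted” is the right fallback (as opposed to throwing) depends on how forgiving you want the query layer to be.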


Scoping And Accessibility Quirks in C#

As I mentioned recently, I’ve taken to using an inheritance scheme in my approach to unit testing. Because of the mechanics of this scheme, making a class under test internal this morning brought to light two relatively obscure properties of scoping and visibility in C# that you might not be aware of:

  1. Internal can be “less visible” than protected.
  2. Private isn’t always private.

Let me explain by showing the situation in which I found myself. As part of an open source project I’m working on at the moment to allow SQL-like querying of Autotask data through its API, I’ve been writing a set of tests on a class called “SqlQuery” in which I take a SQL statement and parse out the parts I’m interested in:

[TestClass]
public class SqlQueryTest
{
    protected SqlQuery Target { get; set; }

    [TestInitialize]
    public void BeforeEachTest()
    {
        Target = new SqlQuery("SELECT id FROM Account");
    }

    [TestClass]
    public class Columns : SqlQueryTest
    {
        [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
        public void Contains_One_Element_For_One_Selected_Column()
        {
            Assert.AreEqual(1, Target.Columns.Count());
        }
...

Up until now the class under test, SqlQuery, has been public, but I realize that this is an abstraction that only matters in the actual lower layer assembly rather than at the GUI level, so I made it internal and added an InternalsVisibleTo to the properties of the assembly under test. With that in place, I downgraded the SqlQuery class to internal and was momentarily surprised by a compiler error of “Inconsistent accessibility: property type ‘AutotaskQueryService.SqlQuery’ is less accessible than property ‘AutotaskQueryServiceTest.SqlQueryTest.Target'”.


On its face, this seems crazy — “internal” is less accessible than “protected”? But when you think about it, this actually makes sense. “Internal” means “nobody outside of this assembly can see it” and protected means “nobody except for this class and its inheritors can see it.” So what happens if I create a third assembly and declare a class in it that inherits from SqlQueryTest? This class has no visibility to the assembly under test and its internals, but it would have visibility to Target. Hence the strange-seeming but quite correct compiler error. One way to get rid of this error is to make SqlQueryTest internal, and that actually compiled and all tests ran, but I don’t like that solution in the event that I want tests in that class and not just its nested children. I decided on another option: making Target private.

If you look at the code snippet above, are you now thinking “but that won’t compile!” After all “Columns” inherits from SqlQueryTest and uses Target and I’ve now just made Target private, so Columns will lose access to it. Well, no, as it turns out. The private scoping in a class means that only the things between the {} of the class can see it. Our nested class here happens to be one of those things. So the scoping trumps the hierarchy in this instance. This can easily be confirmed by changing Target to static and removing the inheritance relationship, which also compiles. The nested class, even when not deriving from the outer class, can access private static members of the outer class.
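Here is a stripped-down sketch of that scoping rule, independent of the test framework (the names are mine, for illustration):

```csharp
using System;

public class Outer
{
    private static string _staticSecret = "visible to nested types";

    private string _instanceSecret = "also visible, given an instance";

    // No inheritance relationship needed: being inside Outer's braces is enough.
    public class Nested
    {
        public static string ReadStaticSecret()
        {
            return _staticSecret;
        }

        public static string ReadInstanceSecret(Outer outer)
        {
            // Private instance members are accessible too, via a reference.
            return outer._instanceSecret;
        }
    }
}
```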

In the end, my solution here is simple. I make the Target private and move on. But I thought I’d take the opportunity to point out these interesting facets of C# that you probably don’t run across very often.


Test Readability: Best of All Worlds

When it comes to writing tests, I’ve been on sort of a mild, ongoing quest to increase readability. Generally speaking, I follow a pattern of setup, action, verification in all tests. I’ve seen this called other things: given-when-then, etc. But when describing the basic nature of unit tests (especially as compared to integration tests) to people, I explain it by saying “you set the stage, poke it, and see if what happens is what you thought would happen.” This rather inelegant description really captures the spirit of unit testing and why asserts per unit test probably ought to be capped at one. That stands in contrast to the common sentiment among first-time test writers, often expressed by numbering the tests and intermixing dozens of asserts with executing code.

I think that was actually the name of a test I saw once: Test_All_The_Things(). I don’t recall whether it included an excited cartoon guy. Point is, that’s sort of the natural desire of the unit testing initiate — big, monolithic tests that are really designed to be end-to-end integration kinds of things where they want to tell in one giant method whether or not everything’s okay. From there, a natural progression occurs toward readability and even requirements documentation.

In my own personal journey, I’ll pick up further along that path. For a long time, my test code was a monument to isolation. Each method in the test class would handle all of its own setup logic, and there was no common, shared state among the tests. You could pack up the class under test (CUT) and the test method, ship them to Pluto, and they would still work perfectly, assuming Pluto had the right version of the .NET runtime. For instance:

[TestClass]
public class MyTestClass
{
     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Do_Something_Returns_True()
     {
          var classUnderTest = new ClassUnderTest(); //Setup

          bool actionResult = classUnderTest.DoSomething(); //Poke

          Assert.IsTrue(actionResult); //Verify
     }
}

There are opportunities for optimization though, and I took them. A long time back I read a blog post (I would link if I remember whose) that inspired me to change the structure a little. The test above looks fine, but what happens when you have 10 or 20 tests that verify behaviors of DoSomething() in different circumstances? You wind up with a region and a lot of tests that start with Do_Something. So, I optimized my layout:

[TestClass]
public class MyTestClass
{
     [TestClass]
     public class DoSomething
     {
          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_True()
          {
               var classUnderTest = new ClassUnderTest(); //Setup

               bool actionResult = classUnderTest.DoSomething(); //Poke

               Assert.IsTrue(actionResult); //Verify
          }

          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_False_When_Really_Is_False()
          {
               var classUnderTest = new ClassUnderTest() { Really = false }; //Setup

               bool actionResult = classUnderTest.DoSomething(); //Poke

               Assert.IsFalse(actionResult); //Verify
          }
     }
}

Now you get rid of regioning, which is a plus in my book, and you still have collapsible areas of the code on which you can focus. In addition, you no longer need to redundantly type the name of the code element that you’re exercising in each test method name. A final advantage is that similar tests are naturally organized together making it easier to, say, hunt down and blow away all tests if you remove a method. That’s all well and good, but it fit poorly with another practice that I liked, which was defining a single point of construction for a class under test:

[TestClass]
public class MyTestClass
{
     private ClassUnderTest BuildCut(bool really = false)
     {
          return new ClassUnderTest() { Really = really };
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_True()
     {
          var classUnderTest = BuildCut(); //Setup

          bool actionResult = classUnderTest.DoSomething(); //Poke

          Assert.IsTrue(actionResult); //Verify
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_False_When_Really_Is_False()
     {
          var classUnderTest = BuildCut(false); //Setup

          bool actionResult = classUnderTest.DoSomething(); //Poke

          Assert.IsFalse(actionResult); //Verify
     }
}

Now, if we decide to add a constructor parameter to our class as we’re doing TDD, it’s a simple change in one place. However, you’ll notice that I got rid of the nested test classes. The reason for that is there’s now a scoping issue — if I want all tests of this class to have access, I have to put it in the outer class, elevate its visibility, and access it by calling MyTestClass.BuildCut(). And for a while, I did that.

But more recently, I had been sold on making tests even more readable by having a simple property called Target that all of the test classes could use. I had always shied away from this because of seeing people who would do horrible, ghastly things in test class state in vain attempts to force the unit test runner to execute their tests sequentially so that some unholy Singleton somewhere would be appeased with blood sacrifice. I tossed the baby with the bathwater — I was too hasty. Look how nicely this cleans up:

[TestClass]
public class MyTestClass
{
     private ClassUnderTest Target { get; set; }

     [TestInitialize]
     public void BeforeEachTest()
     {
          Target = new ClassUnderTest();
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_True()
     {
          //Setup is no longer necessary!

          bool actionResult = Target.DoSomething(); //Poke

          Assert.IsTrue(actionResult); //Verify
     }

     [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
     public void Returns_False_When_Really_Is_False()
     {
          Target.Really = false; //Setup

          bool actionResult = Target.DoSomething(); //Poke

          Assert.IsFalse(actionResult); //Verify
     }
}

Instantiating the CUT, even when abstracted into a method, is really just noise. After doing this for a few days, I never looked back. You really could condense the first test down to a single line, provided everyone agrees on the convention that Target will return a minimally initialized instance of the CUT at the start of each test method. If you need access to constructor-injected dependencies, you can expose those as properties as well and manipulate them as needed.

But we’ve now lost all the nesting progress. Let me tell you, you can try, but things get weird when you try to define the test initialize method in the outer class. What I mean by “weird” is that I couldn’t get it to work and eventually abandoned trying in favor of my eventual solution:

[TestClass]
public class MyTestClass
{
     protected ClassUnderTest Target { get; set; }

     [TestInitialize]
     public void BeforeEachTest()
     {
          Target = new ClassUnderTest();
     }

     [TestClass]
     public class DoSomething : MyTestClass
     {
          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_True()
          {
               //Setup is no longer necessary!

               bool actionResult = Target.DoSomething(); //Poke

               Assert.IsTrue(actionResult); //Verify
          }

          [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
          public void Returns_False_When_Really_Is_False()
          {
               Target.Really = false; //Setup

               bool actionResult = Target.DoSomething(); //Poke

               Assert.IsFalse(actionResult); //Verify
          }
     }
}

So at the moment, that is my unit test writing approach in .NET. I have not yet incorporated that refinement into my Java work, so I may post later if that turns out to have substantial differences for any reason. This is by no means a one size fits all approach. I realize that there are as many different schemes for writing tests as test writers, but if you like some or all of the organization here, by all means, use the parts that you like in good health.

Cheers!


Type Variance: Generics, Reflection, Anonymous and Dynamic

In working with C# quite heavily over the last few years and talking to a lot of people in different capacities, I’ve noticed a fair bit of confusion on the subject of object types and their attendant rules. So I thought I’d offer a post to help explain the different factors that go into schemes for declaring types, using them, and keeping them in sync. Hopefully this helps if you’ve found yourself confused in the past:

Generics

Just about anyone who has ever worked with C# is familiar with generics, at least as a client. If you’ve ever declared a List<string>, for instance, you’re using a generic. If you’ve been at it for a while and have a few more notches on your C# belt, you’ve probably written a class with a format like the following:

public class ConsoleWriter<T>
{
    public void WriteToConsole(T target)
    {
        Console.WriteLine(target);
    }
}

Useless as it is, this class is instantiated with some type and operates on arguments of that type. For instance, if you instantiate the writer with int, it will accept int inputs and write them to the console:

var writer = new ConsoleWriter<int>();
writer.WriteToConsole(24);

If you try to write to the console with some type other than int here, you’ll wind up with a compiler error. The reason for the error is that generics are a compile-time way of specifying type variance. That is, the compiler knows exactly what T will be as the code is being compiled and thus it is able to tell you if you’re supplying the wrong kind of T. And this makes sense when you think about it — you’re instantiating a List<int> or a ConsoleWriter<int>, so the compiler knows that you intend for your instances to deal with ints.

Another way to think about generics is that a generic class is a sort of class “stub” that is waiting for additional information to be supplied when you create instances. That is, a ConsoleWriter<T> is meaningless in a non-instantiated context in the same way as an abstract base class. It needs more information — it needs you to come along and say via instantiation, “this is the kind of ConsoleWriter I’m going to use.”

When you see a “T” (or any other generic parameter), remember that no instance of it exists at runtime. In other words, when the program is actually executing, there are no ConsoleWriter<T> instances, only ConsoleWriter<int> instances. From the runtime’s perspective, you may as well have written an actual class called “IntConsoleWriter.” The open ConsoleWriter<T> definition is just the template from which those closed types get constructed.
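You can poke at this with a little reflection. Using the framework’s own List<T> (so the example stands alone):

```csharp
using System;
using System.Collections.Generic;

public static class GenericsAtRuntime
{
    public static void Main()
    {
        var numbers = new List<int>();

        // At runtime, the instance's type is the closed List<int>, not "List<T>".
        Console.WriteLine(numbers.GetType() == typeof(List<int>)); // True

        // The open definition List<> survives as metadata...
        Console.WriteLine(numbers.GetType().GetGenericTypeDefinition() == typeof(List<>)); // True

        // ...but you can never instantiate it without supplying a type argument:
        // "new List<>()" does not compile.
    }
}
```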

GetType()

Another way of exploring variability in types is with the object.GetType() method. This method takes advantage of two important facets of C#: (1) the fact that everything inherits from object and (2) reflection. Reflection is a concept specific to managed languages (such as Java and C#) that allows executing code to inspect its own types and structure (in a manner of speaking — this is an oversimplification). For instance, if you had the class below, it would print “A” and then “B” to the console on separate lines when you called its DescribeYourselfToTheConsole() method.

class SelfDescriber
{
    public int A { get; set; }

    public int B { get; set; }

    public void DescribeYourselfToTheConsole()
    {
        foreach (var property in this.GetType().GetProperties())
            Console.WriteLine(property.Name);
    }
}

For those who may not be familiar with reflection or fully aware of its usages, this is quite a powerful concept. As someone who cut his teeth on C and C++, I recall my first exposure to this many moons ago and thinking, “you can do what?!? Awesome!!!” Beware, though — reflection is slow and resource-intensive.

When you call GetType() on some object, you’re actually accessing the state of that object in code. Unlike generics, this is purely a run-time concept. And that makes sense as well — why would you need to lookup information about a class at compile time? You could just inspect the source code at that point. But at runtime, you might not know exactly what type you’re dealing with. Take a look at this modified version of ConsoleWriter from the last example:

public class ConsoleWriter
{
    public void WriteToConsole(object target)
    {
        Console.WriteLine(string.Format("Value {0} is of type {1}", target, target.GetType()));
    }
}

ConsoleWriter is getting an object passed to it, but that’s not particularly helpful in understanding more details about it. In unmanaged languages like C++, all you could really do in this situation is cast the object and hope for the best. But in managed languages like C#, you can actually inspect it to see what type it is and then act on it accordingly. In this example, we simply print out its type, but you also have the option of doing things like saying if(target.GetType() == typeof(int)) or if(target is int) and then doing int-specific things.

Again, all of this happens at run time, when different execution paths dictate non-deterministic instance typing. And that’s just a fancy way of saying that in an object-oriented, polymorphic language, you can’t know at compile time what instances you’re going to have at runtime, since your instance creation may depend on external factors (for instance, if you have factory methods). So reflection is a runtime way to understand what type you’re dealing with.
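A toy sketch of that runtime branching — all names here are hypothetical, and the factory stands in for whatever external factor decides the concrete type:

```csharp
using System;

public static class RuntimeTypeCheck
{
    // A toy factory: the concrete type depends on data known only at runtime.
    public static object CreateValue(bool wantNumber)
    {
        return wantNumber ? (object)42 : "hello";
    }

    public static string Describe(object target)
    {
        // Both inspection styles from the text: the "is" operator...
        if (target is int)
            return "int-specific handling: " + ((int)target * 2);

        // ...and the GetType()/typeof comparison.
        if (target.GetType() == typeof(string))
            return "string-specific handling: " + ((string)target).ToUpper();

        return "unknown type: " + target.GetType();
    }
}
```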

Anonymous Classes

I’ve seen a lot of confusion around the subject of anonymous classes. Sometimes I see them confused with dynamic types (covered next) and sometimes I see them otherwise misunderstood. Don’t overthink anonymous types — they’re just classes that don’t have names. Why don’t they have names? Because they’re declared inline in code instead of in the standard class structure. For instance:

public void SomeMethod()
{
    var noNameType = new { customerId = 1, customerName = "Erik" };
    Console.WriteLine(string.Format("Name is {0}", noNameType.customerName));
}

Here I’ve declared a type that has no name, but a Customer by any other name… well, you get the drift. In C# we can declare types inline this way. As it turns out, this comes in quite handy with Linq, IQueryable and its extension methods. It also gives rise to the “var” keyword that you see me use there, which is one of the most misunderstood language constructs there is. If I had a dollar for every time I’ve seen someone mistake this for the VB construct “dim” or some other dynamic-typing moniker, I wouldn’t be Warren Buffett, but I’d definitely have enough money for a night on the town. “var” is the syntax for implicit typing, which is the only means you have for declaring anonymous types (I mean, what else could you do, seeing as they don’t have names?).

Anonymous typing is a compile time way of specifying types, like generics. However, unlike generics, anonymous types are not templates. They are normal, full blown, honest to goodness types that simply happen not to have names. If you tried to assign one to int or an int to one, the code would not compile, just as with any other static type that you’d made a class for.
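You can verify that “full blown type” claim yourself. Two anonymous instances with the same property names and types, in the same order and the same assembly, get the same compiler-generated class, and the generated Equals compares property values:

```csharp
using System;

public static class AnonymousTypeDemo
{
    public static void Main()
    {
        var first = new { customerId = 1, customerName = "Erik" };
        var second = new { customerId = 1, customerName = "Erik" };

        // Same shape => the compiler reuses one generated class.
        Console.WriteLine(first.GetType() == second.GetType()); // True

        // Equals is generated to compare property values structurally.
        Console.WriteLine(first.Equals(second)); // True

        // And the properties are read-only: "first.customerId = 2" does not compile.
    }
}
```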

Dynamic

In that last sentence, I mentioned static typing, which describes languages where the types of variables are declared and checked at compile time. In other words, with static typing, you would say “int x = 6” whereas in a dynamically typed language such as JavaScript you might simply say “x = 6” and leave it up to the runtime interpreter to figure out that you’ve declared an integer. C# has its roots heavily in the camp of statically typed languages, but in C# 4.0 the concept of dynamic typing (sometimes loosely referred to as “duck typing”) was introduced.

The “dynamic” keyword basically tells the C# compiler, “this guy’s with us and he’s cool — no need to check him for weapons.” It’s then waved on through. For instance, the following code compiles without issue:

dynamic noNameType = "asdf";
noNameType = 12;

Assert.AreEqual(12, noNameType.ToString());

However, the test fails with a “RuntimeBinderException,” meaning that he did have weapons after all. noNameType was an integer, and so its ToString() method evaluated to a string — a mismatch that normally wouldn’t have compiled but here failed at runtime. You can do all sorts of wacky stuff — instead of “ToString()” you could have said .AsadfFdsafasdfasf() and it still would have compiled (but obviously failed at runtime, since that’s not a method on int).

Why would you want to do this, particularly in an otherwise statically typed language? Well, there are some use cases with painful interop and COM scenarios where getting hold of actual types is surprisingly difficult and simply invoking the methods you need to be there is comparably easy. Additionally, there are some interesting concepts like the ExpandoObject and doing things like implementing interfaces at runtime, which can be useful if you’re doing rather intricate things like writing a mocking engine.
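As a taste of the ExpandoObject mentioned above, here is a minimal sketch: members don’t exist until you assign them, and everything is resolved dynamically at runtime:

```csharp
using System;
using System.Dynamic;

public static class ExpandoDemo
{
    public static void Main()
    {
        dynamic person = new ExpandoObject();

        // Neither member exists until the moment we assign it.
        person.Name = "Erik";
        person.Greet = (Func<string>)(() => "Hello, " + person.Name);

        Console.WriteLine(person.Greet()); // Hello, Erik
    }
}
```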

I would advise a good bit of caution here, though. C# is not an interpreted language and everyone reading your code and working with it is going to be used to and expecting statically typed code. You should probably only use dynamic typing when it’s onerous not to, unlike the other techniques I’ve covered here.

Wrap Up

So there you have it. We have two typing techniques that are compile-time techniques (generics and anonymous types) and two that are run-time techniques (GetType()/reflection and dynamic typing). Generics are classes (or methods) that act as templates, letting clients specify a type at compile time. GetType() and reflection can be used to explore properties of types at runtime. Anonymous types allow specification of an actual, first-class type without the ceremony of an entire defined class, and dynamics allow us to “trick” the compiler and break the rules… when we know (think) it’ll be okay. Take care with that last one, and use it in good health. Hopefully this clears up the odd misconception or confusion you may have had about one or more of these concepts.