DaedTech

Stories about Software

Improving Readability using Moq

While I’ve been switching a lot of my new projects over to JustMock, as explained here, I still have plenty of existing projects that use Moq, and I still like Moq as a tool. One of the things I have struggled with in Moq, and that JustMock solves nicely, is the cognitive dissonance that arises when reasoning about your test doubles.

In some situations, such as when you’re injecting the test doubles into your class under test, you want them to appear as if they were just another run-of-the-mill instance of whatever interface/base class you’re passing. But at other times, when you want to configure the mocks, you want to think of them as actual mock objects. One solution is to use Mock.Get() as I described here. Doing that, you can store a reference to the object as if it were any other object but access it via Mock.Get() when you want to work with its setup:

[gist id="4998372"]
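Conceptually, Mock.Get() takes the plain object reference and hands you back the mock that controls it. As a toy sketch of that idea (in Java rather than C#, and purely illustrative; Moq's actual mechanism is different), a registry keyed by object identity does the trick:

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Toy illustration, not Moq itself: a registry that maps a proxy object
// back to the mock that created it, which is essentially the service
// Mock.Get() provides.
public class MockRegistry {
    private static final Map<Object, Object> registry = new IdentityHashMap<>();

    // Record which mock controls a given proxy object.
    public static void register(Object proxy, Object mock) {
        registry.put(proxy, mock);
    }

    // The Mock.Get() analogue: recover the controlling mock from the proxy.
    public static Object get(Object proxy) {
        Object mock = registry.get(proxy);
        if (mock == null) {
            throw new IllegalArgumentException("Object was not created by a mock");
        }
        return mock;
    }
}
```

The payoff is that callers only ever hold the plain object; the mock machinery is looked up on demand rather than carried around.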

That’s all well and good, but, perfectionist that I am, I got really tired of all of that Mock.Get() ceremony. It makes lines pointlessly longer and, I think, really detracts from the readability of the test classes. So I borrowed an idea from JustMock and its “Helpers” namespace and created a series of extension methods. Some examples are shown here.

[gist id="4998277"]

These extension methods allow me to alter my test to look like this:

[gist id="4998488"]

Now there’s no need to switch context, so to speak, between mock object and simple dependency. You always have a simple dependency that just so happens to have some extension methods that you can use for configuration. No need to keep things around as Mock<T> and call the Object property, and no need for all of the Mock.Get().

Of course, there are caveats. This might already exist somewhere and I’m out of the loop. There might be issues with this that I haven’t yet hit, and there is the debugging indirection. And finally, you could theoretically have namespace collisions, though if you’re making methods called “Setup” and “Verify” on your classes that take expression trees and multiple generic parameters, I’d say you have yourself a corner case there, buddy. But all that aside, hopefully you find this useful–or at least a nudge in a direction toward making your own tests a bit more readable or easy to work with.

TDD: Simplest is not Stupidest

Where the Message Gets Lost In Teaching TDD

I recently answered a Programmers’ Stack Exchange post about test-driven development. (As an aside, it will be cool when me linking to an SE question drives more traffic their way than them linking to me drives my way 🙂 ). As I’m wont to do, I said a lot in the answer there, but I’d like to expand a facet of my answer into a blog post that hopefully clarifies an aspect of Test-Driven Development (TDD) for people–at least, for people who see the practice the way that I do.

One of the mainstays of showcasing test-driven development is to show some extremely bone-headed ways to get tests to pass. I do this myself when I’m showing people how to follow TDD, and the reason is to drive home the point “do the simplest thing.” For instance, I was recently putting together such a demo and started out with the following code:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void IsEven_Returns_False_For_1()
{
    var inspector = new NumberInspector();
 
    Assert.IsFalse(inspector.IsEven(1));
}

public class NumberInspector
{
    public bool IsEven(int target)
    {
        return false;
    }
}

This is how the code looked after going from the “red” to the “green” portion of the cycle. When I used CodeRush to define the IsEven method, it defaulted to throwing NotImplementedException, which constituted a failure. To make it pass, I just changed that to “return false.”

The reason that this is such a common way to explain TDD is that the practice is generally being introduced to people who are used to approaching problems monolithically, as described in this post I wrote a while back. For people used to solving problems this way, the question isn’t, “how do I get the right value for one,” but rather, “how do I solve it for all integers and how do I ensure that it runs in constant time and is the modulo operator as efficient as bit shifting and what do I do if the user wants to do it for decimal types should I truncate or round or throw an exception and whoah, I’m freaking out man!” There’s a tendency, often fired in the forge of undergrad CS programs, to believe that the entire algorithm has to be conceived, envisioned, and drawn up in its entirety before the first character of code is written.

So TDD is taught the way it is to provide contrast. I show people an example like this to say, “forget all that other stuff–all you need to do is get this one test passing for this one input and just assume that this will be the only input ever, go, now!” TDD is supposed to be fast, and it’s supposed to help you solve just one problem at a time. The fact that returning false won’t work for two isn’t your problem–it’s the problem of you forty-five seconds from now, so there’s no reason for you to bother with it. Live a little–procrastinate!

You refine your algorithm only as the inputs mandate it, and you pick your inputs so as to get the code doing what you want. For instance, after putting in the “return false” and getting the first test passing, it’s pretty apparent that this won’t work for the input “2”. So now you’ve got your next problem–you write the test for 2 and then you set about getting it to pass, say with “return target == 2”. That’s still not great. But it’s better, it was fast, and now your code solves two cases instead of just the one.
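To make the cadence concrete, here is a sketch of the first two "green" implementations side by side (Java rather than the C# above; the versioned method names are mine, for illustration only):

```java
// The successive "green" implementations from the narrative, shown side by side.
public class NumberInspectorSteps {
    // Step 1: the simplest thing that passes the test for input 1.
    public static boolean isEvenV1(int target) {
        return false;
    }

    // Step 2: refined just enough to also pass the test for input 2.
    public static boolean isEvenV2(int target) {
        return target == 2;
    }
}
```

Each step does the minimum to make the current failing test pass, deliberately ignoring inputs no test has mentioned yet.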

Running off the Rails

But there is a tendency I think, as demonstrated by Kristof’s question, for TDD teachers to give the wrong impression. If you were to write a test for 3, “return target == 2” would pass and you might move on to 4. What do you do at 4? How about “return target == 2 || target == 4;”

So far we’ve been moving in a good direction, but if you take off your “simplest thing” hat for a moment and think about basic programming and math, you can see that we’re starting down a pretty ominous path. After throwing in a 6 and an 8 to the or clause, you might simply decide to use a loop to iterate through all even numbers up to int.MaxValue, or-ing the return value with an equality check to see if target is any of them.

public bool IsEven(int target)
{
    bool isEven = false;
    for (int index = 0; index < int.MaxValue - 1; index += 2)
        isEven |= target == index;
    return isEven;
}

Yikes! What went wrong? How did we wind up doing something so obtuse following the red-green-refactor principles of TDD? Two considerations, one reason: "simplest" isn't "stupidest."

Simplicity Reconsidered

The first consideration is that simple-complex is not measured on the same scale as stupid-clever. The two have a case-by-case, often interconnected relationship, but simple and stupid aren't the same just as complex and clever aren't the same. So the fact that something is the first thing you think of or the most brainless thing that you think of doesn't mean that it's the simplest thing you could think of. What's the simplest way to get an empty boolean method to return false? "return false;" has no branches and one hardcoded piece of logic. What's the simplest way that you could get a boolean method to return false for 1 and true for 2? "return target == 2" accomplishes the task with a single conditional of incredibly simple math. How about false for 1 and true for 2 and 4? "return target % 2 == 0" accomplishes the task with a single conditional of slightly more involved math. "return target == 2 || target == 4" accomplishes the task with a single conditional containing two clauses (could also be two conditionals). Modulo arithmetic is more elegant/sophisticated, but it is also simpler.
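The contrast is easy to see side by side. This sketch (Java; the method names are mine, for illustration) pits the or-chain against the modulo version:

```java
// Same behavior on the inputs tested so far, very different trajectories:
// the or-chain grows with every new test; the modulo version is already general.
public class EvennessCheckers {
    // "Stupidest": enumerate the inputs the tests have mentioned so far.
    public static boolean isEvenOrChain(int target) {
        return target == 2 || target == 4;
    }

    // "Simplest": one conditional of slightly more involved math.
    public static boolean isEvenModulo(int target) {
        return target % 2 == 0;
    }
}
```

Both pass the tests for 2 and 4; only the modulo version survives the next data point without modification.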

Now, I fully understand the importance in TDD of proceeding methodically and solving problems in cadence. If you can't think of the modulo solution, it's perfectly valid to use the or condition and put in another data point such as testing for IsEven(6). Or perhaps you get all tests passing with the more obtuse solution and then spend the refactor phase refining the algorithm. Certainly nothing wrong with either approach, but at some point you have to make the jump from obtuse to simple, and the real "aha!" moment with TDD comes when you start to recognize the fundamental difference between the two, which is what I'll call the second consideration.

The second consideration is that "simplest" advances an algorithm where "stupidest" does not. To understand what I mean, consider this table:

[Image: ConditionalClauseChart]

In every case that you add a test, you're adding complexity to the method. This is ultimately not sustainable. You'll never wind up sticking code in production if you need to modify the algorithm every time a new input is sent your way. Well, I shouldn't say never--the Brute Forces are busily cranking these out for you to enjoy on the Daily WTF. But you aren't Brute Force--TDD isn't his style. And because you're not, you need to use either the green or refactor phase to do the simplest possible thing to advance your algorithm.

A great way to do this is to take stock after each cycle, before you write your next failing test and clarify to yourself how you've gotten closer to being done. After the green-refactor, you should be able to note a game-changing refinement. For instance:

[Image: MilestoneTddChart]

Notice the difference here. In the first two entries, we make real progress. We go from no method to a method and then from a method with one hard-coded value to one that can make the distinction we want for a limited set of values. On the next line, our gains are purely superficial--we grow our limited set from distinguishing between 2 values to 3. That's not good enough, so we can use the refactor cycle to go from our limited set to the complete set.

It might not always be possible to go from limited to complete like that, but you should get somewhere. Maybe you somehow handle all values under 100 or all positive or all negative values. Whatever the case may be, it should cover more ground and be more general. Because really, TDD at its core is a mechanism to help you start with concrete test cases and tease out an algorithm that becomes increasingly generalized.

So please remember that the discipline isn't to do the stupidest or most obtuse thing that works. The discipline is to break a large problem into a series of comparably simple problems that you can solve in sequence without getting ahead of yourself. And this is achieved by simplicity and generalizing--not by brute force.

JUnit Revisited

Just as a warning, in this short post, I’m going to be writing unit tests that verify that primitives in Java do what they should and basically that gravity is still turned on. The reason for that is that I’d like to showcase some new Java unit testing goodies I’ve recently discovered since coming back into the Java fold a little here and there lately. I firmly believe that the more conversationally readable the contents of unit tests are, the more effective they will be at defining functional and internal requirements as well as showcasing the behavior of the system.

@Test
public void two_ints_are_equal() {
    int x = 4;
    int y = 4;
    assertThat(x, is(y));
}

Coming from the .NET world and using MSTest, I’m used to semantics of Assert.AreEqual<int>(x, y) where, by convention, the “control” or expected value goes on the left and the actual value goes on the right. This is a compelling alternative in that it reads like a sentence, which is always good. The MSTest version reads “Are equal x and y” whereas this reads “x is y.” The less it reads like Yoda is talking, the better. So what enables this goodness?

import static org.junit.Assert.assertThat;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.not;
import static org.junit.matchers.JUnitMatchers.*;

The first import gives you assertThat(), obviously. As shown above, assertThat() takes two parameters (there is an overload that takes a string as an additional parameter, letting you specify a failure message): a value of some generic type T as the first parameter and a “matcher” of T as the second. Matchers perform evaluations on types and can be chained together in fluent fashion to allow construction of sentences that flow. For instance, you can chain the is() matcher and the not() matcher to get the following test:

@Test
public void two_ints_are_not_equal() {
    int x = 4;
    int y = 5;
    assertThat(x, is(not(y)));
}

This really just scratches the surface; there are lots of additional matchers in Hamcrest as well. You can even extend the functionality by defining your own matchers to cater to the ubiquitous language of the domain that you’re working in. If you’re a Java developer and haven’t given these a look, I’d suggest doing so. If you’re a .NET developer, it’s worth taking a peek at what’s going on elsewhere and perhaps defining your own such constructs if you’re feeling ambitious, or looking for existing ones. In fact, if you know of good ones, please post ’em — I always like seeing what’s out there.
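To see what a matcher is under the hood, here is a toy reimplementation of the idea in plain Java (this is the shape of the mechanism, not Hamcrest's actual code):

```java
// Toy matcher library: a matcher is a small predicate object, and chaining
// matchers like is(not(y)) is just function composition.
public class ToyMatchers {
    public interface Matcher<T> {
        boolean matches(T actual);
    }

    // Match a value by equality, as in assertThat(x, is(y)).
    public static <T> Matcher<T> is(T expected) {
        return actual -> actual.equals(expected);
    }

    // Pass-through overload so you can write is(not(y)) for readability.
    public static <T> Matcher<T> is(Matcher<T> inner) {
        return inner;
    }

    // Negate an equality match, as in assertThat(x, is(not(y))).
    public static <T> Matcher<T> not(T unexpected) {
        return actual -> !actual.equals(unexpected);
    }

    // Fail loudly when the matcher rejects the actual value.
    public static <T> void assertThat(T actual, Matcher<T> matcher) {
        if (!matcher.matches(actual)) {
            throw new AssertionError("assertion failed for: " + actual);
        }
    }
}
```

The pass-through is() overload exists purely to make the assertion read like a sentence, which is exactly the trick the real library plays.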

Betrayed by Your Test Runner

The Halcyon Days of Yore

I was writing some code for Apex this morning and I had a strange sense of deja vu. Without going into painful details, Salesforce is a cloud-based CRM solution and Apex is its proprietary programming language that developers can use to customize their applications. The language is sort of reminiscent of a stripped-down hybrid of Java and C#. The IDE you use to develop this code is Eclipse, armed with an Apex plugin.

The deja vu that I experienced transported me back to my college days, working in a 200-level computer systems course where the projects assigned to us were the kind of deal involving profs/TAs writing 95% of the code while we filled in the other 5%. I am always grateful to my alma mater for this, since one of the things most lacking in university CS education is exposure to concepts like integration and working on large systems. In this particular class, I was writing C code in pico and using a makefile to handle compiling and linking the code on a remote server. This generally took a while because of network latency, server busyness, it being 13 years ago, a lot of files to link, etc. The end result was that I would generally write a lot of code, run make, and then get up and stretch my legs or get a drink or something, returning later to see what had happened.

This is what developing in Apex reminds me of. But there’s an interesting characteristic of Apex, which is that you have to write unit tests, they have to pass, and they have to cover something like 70% of your code before you’re allowed to run it on their hardware in production. How awesome is that? Don’t you sometimes wish C# or Java enforced that on your coworkers that steal in like ninjas and break half the things in your code base with their checkins? I was pumped when I got this assignment and set about doing TDD, which I’ve done the whole time. I don’t actually know what the minimum coverage is because I’ve been at 100% the entire time.

A Mixed Blessing?

One of the first things that I thought while spacing out and waiting for a compile was how much it reminded me of my undergrad days. The second thing I thought of, ruefully, was how much better served I would have been back then to know about unit tests or TDD. I bet that could have saved me some maddening debugging sessions. But then again, would I have been better off doing TDD then? And, more interestingly, am I better off doing it now?

Anyone who follows this blog will probably think I’ve flipped my lid and done a sudden 180 on the subject, but that’s not really the case. Consider the following properties of Apex development:

  1. Sometimes when you save, the IDE hangs because files have to go back to the server.
  2. Depending on server load, compile may take a fraction of a second or up to a minute.
  3. It is possible for the source you’re looking at to get out of sync with the feedback from the compiling/testing.
  4. Tests in a class often take minutes to run.
  5. Your whole test suite often takes many, many minutes to run.
  6. Presumably due to server load balancing, network latency and other such factors, feedback time appears entirely non-deterministic.
  7. It’s normal for me to need to close Eclipse via task manager and try again later.

Effective TDD has a goal of producing clean code that clearly meets requirements at the unit level, but it demands certain things of the developer and the development environment.  It is not effective when the feedback loop is extremely slow (or worse, inaccurate) since TDD, by its nature, requires near constant execution of unit tests and for those unit tests to be dependable.

Absent that basic requirement, the TDD practitioner is faced with a conundrum.  Do you stick to the practice where you have red (wait 2 minutes), green (what was I doing again, oh yeah, wait 3 minutes), refactor (oops, I was reading reddit and forgot what I was doing)?  Or do you give yourself larger chunks of time without feedback so that you aren’t interrupted and thrown out of the flow as often?

My advice would be to add “none of the above” to the survey and figure out how to make the feedback loop tighter.  Perhaps, in this case, one might investigate a way to compile/test offline, alter the design, or optimize somehow.  Perhaps one might even consider a different technology.  I’d rather switch techs than switch away from TDD, myself.  But in the end, if none of these things proves tenable, you might be stuck taking an approach more like one from 20+ years ago: spend a big chunk of time writing code, run it, write down everything that went wrong, and try again.  I’ll call this RDD — restriction-driven development.  I’d say it’s to be avoided at all costs.

I give force.com an A for effort and concept in demanding quality from developers, but I’d definitely have to dock them for the implementation since they create a feedback loop that actively discourages the same.  I’ve got my fingers crossed that as they expand and improve the platform, this will be fixed.

Just Starting with JustMock

A New Mocking Tool

In life, I feel that it’s easiest to understand something if you know multiple ways of accomplishing/using/doing/etc it. Today I decided to apply that reasoning to automatic mocking tools for .NET. I’m already quite familiar with Moq and have posted about it a number of times in the past. When I program in Java, I use Mockito, so while I do have experience with multiple mocking tools, I only have experience with one in the .NET world. To remedy this state of affairs and gain some perspective, I’ve started playing around with JustMock by Telerik.

There are two versions of JustMock: “Lite” and “Elevated.” JustMock Lite is equivalent to Moq in its functionality: able to mock things for which there are natural mocking seams, such as interfaces and inheritable classes. The “Elevated” version provides the behavior for which I had historically used Moles — it is an isolation framework. I’ve been meaning to take the latter for a test drive at some point, since the R&D tool Moles has given way to Microsoft “Fakes” as of VS 2012. Fakes ships with Microsoft libraries (yay!) but is only available with VS Ultimate (boo!).

My First Mock

Installing JustMock is a snap. Search for it in Nuget, install it to your test project, and you’re done. Once you have it in place, the API is nicely discoverable. For my first mocking task (doing TDD on a WPF front-end for my Autotask Query Explorer), I wanted to verify that a view model was invoking a service method for logging in. The first thing I do is create a mock instance of the service with Mock.Create<T>(). Intuitive enough. Next, I want to tell the mock that I’m expecting a Login(string, string) method to be called on it. This is accomplished using Mock.Arrange().MustBeCalled(). Finally, I perform the actual act on my class under test and then make an assertion on the mock, using Mock.Assert().

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Execute_Invokes_Service_Login()
{
    var mockService = Mock.Create<ILoginService>(); // type parameter lost in formatting; ILoginService is illustrative
    Target = new LoginViewModel(mockService) { Username = "asdf", Password = "fdsa" };
    Mock.Arrange(() => mockService.Login("asdf", "fdsa")).MustBeCalled();
    Target.Login.Execute(null);

    Mock.Assert(mockService);
}

A couple of things jump out here, particularly if you’re coming from a background using Moq, as I am. First, the semantics of the JustMock methods more tightly follow the “Arrange, Act, Assert” convention as evidenced by the necessity of invoking Arrange() and Assert() methods from the JustMock assembly.

The second thing that jumps out is the relative simplicity of assertion versus arrangement. In my experience with other mocking frameworks, there is a tendency to do comparably minimal setup and have a comparably involved assertion. Conceptually, the narrative would be something like “make the mock service not bomb out when Login() is called and later we’ll assert on the mock that some method called login was called with username x and password y and it was called one time.” With this framework, we’re doing all that description up front and then in the Assert() we’re just saying “make sure the things we stipulated before actually happened.”

One thing that impressed me a lot was that I was able to write my first JustMock test without reading a tutorial. As regular readers know, I consider this to be a strong indicator of well-crafted software. One thing I wasn’t as thrilled about was how many overloads there were for each method that I did find. Regular readers also know I’m not a huge fan of that.

But at least they aren’t creational overloads, and I suppose you have to pay the piper somewhere: I’ll have either lots of methods/classes in Intellisense or else lots of overloads. In any case, the overloads haven’t actually been a problem in my eyes, as I haven’t explored or been annoyed by them at all — I just saw “+10 overloads” in Intellisense and thought “whoah, yikes!”

Another cool thing that I noticed right off the bat was how helpful and descriptive the feedback was when the conditions set forth in Arrange() didn’t occur:

[Image: JustMockFeedback]

It may seem like a no-brainer, but getting an exception that’s helpful both in its type and message is refreshing. That’s the kind of exception I look at and immediately exclaim “oh, I see what the problem is!”

Matchers

If you read my code critically with a clean code eye in the previous section, you should have a bone to pick with me. In my defense, this snippet was taken post-red-green and pre-refactor. Can you guess what it is? How about the redundant string literals in the test — “asdf” and “fdsa” each appear twice, as the username and password, respectively. That’s icky. But before I pull local variables to use there, I want to stop and consider something. For the purpose of this test, given its title, I don’t actually care what parameters the Login() method receives — I only care that it’s called. As such, I need a way to tell the mocking framework that I expect this method to be called with some parameters — any parameters. In the world of mocking, this notion of a placeholder is often referred to as a “Matcher” (I believe this is the Mockito term as well).

In JustMock, this is again refreshingly easy. I want to be able to specify exact matches if I so choose, but also to be able to say “match any string” or “match strings that are not null or empty” or “match strings with this custom pattern.” Take a look at the semantics to make this happen:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Execute_Invokes_Service_Login()
{
    Target = new LoginViewModel(Service) { Username = "asdf", Password = "fdsa" };
    Mock.Arrange(() => Service.Login(
        Arg.IsAny<string>(),
        Arg.Matches<string>(s => !string.IsNullOrEmpty(s))
        )).MustBeCalled();
    Target.Login.Execute(null);

    Mock.Assert(Service);
}

For illustration purposes I’ve inserted line breaks in a way that isn’t normally my style. Look at the Arg.IsAny and Arg.Matches line. What this arrangement says is “The mock’s login method must be called with any string for the username parameter and any string that isn’t null or empty for the password parameter.” Hats off to you, JustMock — that’s pretty darn readable, discoverable and intuitive as a reader of this code.
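The matcher concept translates outside of .NET, too. As a sketch of the semantics (Java; this is my illustration of the idea, not JustMock's implementation, and callSatisfies is a hypothetical stand-in for the framework's call-matching step), an expectation can simply hold one predicate per parameter:

```java
import java.util.function.Predicate;

// Toy sketch of argument matchers: an expectation holds one predicate per
// parameter and a recorded call satisfies it only if every predicate passes.
public class ArgMatchers {
    // Arg.IsAny<string>() analogue: match anything of the right type.
    public static <T> Predicate<T> isAny() {
        return value -> true;
    }

    // Arg.Matches(...) analogue: match via a custom condition.
    public static <T> Predicate<T> matches(Predicate<T> condition) {
        return condition;
    }

    // Check a recorded Login(username, password) call against the expectation.
    public static boolean callSatisfies(String username, String password,
                                        Predicate<String> userMatcher,
                                        Predicate<String> passMatcher) {
        return userMatcher.test(username) && passMatcher.test(password);
    }
}
```

Viewed this way, "any string" and "any non-empty string" are just two predicates plugged into the same slot, which is why the arrangement reads so naturally.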

Loose or Strict?

In mocking there is a notion of “loose” versus “strict” mocking. The former is a scenario where some sort of default behavior is supplied by the mocking framework for any methods or properties that may be invoked. So in our example, it would be perfectly valid to call the service’s Login() method whether or not the mock had been setup in any way regarding this method. With strict mocking, the same cannot be said — invoking a method that had not been setup/arranged would result in a runtime exception. JustMock defaults to loose mocking, which is my preference.
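A toy sketch of the distinction (Java; illustrative only, not how JustMock is built): a loose mock supplies a harmless default for un-arranged calls, while a strict mock throws.

```java
import java.util.HashMap;
import java.util.Map;

// Toy mock distinguishing loose from strict behavior for un-arranged calls.
public class ToyMock {
    private final boolean strict;
    private final Map<String, Object> arranged = new HashMap<>();

    public ToyMock(boolean strict) {
        this.strict = strict;
    }

    // Arrange a canned return value for a method.
    public void arrange(String methodName, Object returnValue) {
        arranged.put(methodName, returnValue);
    }

    // Invoke a method: arranged calls return their canned value; un-arranged
    // calls either throw (strict) or return a default (loose).
    public Object invoke(String methodName) {
        if (arranged.containsKey(methodName)) {
            return arranged.get(methodName);
        }
        if (strict) {
            throw new IllegalStateException("Unexpected call: " + methodName);
        }
        return null; // loose: supply a harmless default
    }
}
```

With a loose default, a test only has to arrange what it actually cares about, which keeps the arrangement section short.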

Static Methods with Mock as Parameter

Another thing I really like about JustMock is that you arrange and query mock objects by passing them to static methods, rather than invoking instance methods on them. As someone who tends to be extremely leery of static methods, it feels strange to say this, but the thing that I like about it is how it removes the need to context switch as to whether you’re dealing with the mock object itself or the “stub wrapper”. In Moq, for instance, mocking occurs by wrapping the actual object that is the mocking target inside of another class instance, with that outer class handling the setup concerns and information recording for verification. While this makes conceptual sense, it turns out to be rather cumbersome to switch contexts for setting up/verifying and actual usage. Do you keep an instance of the mock around locally or the wrapper stub? JustMock addresses this by having you keep an instance only of the mock object and then letting you invoke different static methods for different contexts.

Conclusion

I’m definitely intrigued enough to keep using this. The tool seems powerful, and usage is quite straightforward, intuitive, and discoverable. Look for more posts about JustMock in the future, including perhaps some comparisons and a full-fledged endorsement, if applicable (i.e., I continue to enjoy it), when I’ve used it for more than a few hours.