DaedTech

Stories about Software


JUnit for C# 3 – Mocks and Other Niceties

Edit: It occurs to me that the name here was kind of an oops. If you’re here to see how I use JUnit while developing in C#, you’re probably going to be disappointed. I meant to title this “JUnit for C# Developers 3”, but made a rather comical omission. My apologies.

As I go along with this series of posts, I’ve come to a decision. My plan is to get my new, open source home automation server working at the level of functionality of my old, struts-based one. I think I’m going to muddle through TDD, posting my adventures for as long as that takes, and then I’ll probably add the project to github to see if anyone wants to pull, and move on to other posting topics (like my practical design patterns series that I’ve been a little slow on lately). But, for now, I’ll keep on with these.

Goals for Today

Since my last two posts were more of a whim, I decided to get organized a little now that I’m in the swing of it. So, my goals for today’s post are the following:

  1. Find out whether or not Java now supports optional parameters.
  2. Figure out how to assert that a method throws an exception.
  3. Figure out how to run only a single unit test.
  4. Get set up with a mocking framework.

I figure that’s a bite-sized chunk for an hour or two, so let’s get started.

Actual Work

So, first up is Java and default parameters. The answer there seems to be a resounding “no” (the language has acquired things like the foreach loop since I last used it, so I figured it was worth a shot). I saw this stackoverflow post, and upvoted the question while I was at it, but the answer was no there as well. Given that the post was somewhat outdated, I checked around in some other places too, with the same findings. Bummer. The reason I wanted to find this out for my TDD is that I’ve adopted a pattern of doing something like this for my tests (note the C#-style default parameter, which is exactly what Java lacks):

        /*
	 * This is here for TDD to stop me from needing to change every test
	 * if I decide to inject a xtor param
	 */
	private static LightController buildTarget(LightManipulationService service = null) {
		LightManipulationService myService = service != null ? service : new MockLightManipulationService();
		return new LightController(myService);
	}

Basically, instead of directly instantiating the class under test (CUT), I delegate that responsibility to this builder method. That way, if I decide to add a constructor parameter to the CUT, I don’t have to bother with the tiresome chore of updating all of my tests. And, adding a constructor parameter is a rather frequent occurrence for me when doing TDD.

But, it turns out that I’ll have to settle for the noisiness of a method overload to accomplish this. Perhaps it’s the purist in me, but I think default parameters in C# (and other languages) are a much more elegant solution to this problem than method overloads. I hate boilerplate code — it’s just more places you have to maintain and more places mistakes could be made. So, first goal accomplished, if not in a satisfying way. More on the builder and supplying an interface to the controller later.
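In Java, then, the pattern above becomes a pair of overloads. Here is a quick sketch (you’ll see the real version in the finished test class below, once a mocking framework replaces the hand-rolled mock):

	/*
	 * No default parameters in Java, so the "optional" service argument
	 * becomes two methods: the no-arg overload just forwards null.
	 */
	private static LightController buildTarget() {
		return buildTarget(null);
	}

	private static LightController buildTarget(LightManipulationService service) {
		LightManipulationService myService = service != null ? service : new MockLightManipulationService();
		return new LightController(myService);
	}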

Next up, I want to add a constructor parameter to my controller, as you may have intuited. The purpose of this light controller is to allow a user to turn lights in my house on and off with a RESTful URL scheme. The actual mechanics of turning lights on and off are accomplished via a shell command that invokes a driver my server is running. However, it is wildly inappropriate for a presentation layer controller to know the details of how that works, so I’m abstracting out a conceptual service:

public interface LightManipulationService {

	/**
	 * Turns the light in question on or off
	 * @param light - the light to toggle
	 * @param isOn - the setting (true for on, false for off)
	 * @return whether or not the operation was successful
	 */
	Boolean toggleLight(Light light, Boolean isOn);
	
	/**
	 * Change the brightness of a light
	 * @param light - the light to modify
	 * @param brightnessChange - the brightness change (positive for brighter, negative for dimmer)
	 * @return whether or not the operation succeeded
	 */
	Boolean changeBrightness(Light light, int brightnessChange);
}

“Light” is a POJO that I made to encapsulate properties for the room containing the light and the name of the light. The controller will operate by parsing the URL for the room and light parameters and then passing a corresponding light object to the service, which will take care of the actual light operations in a nod to the single responsibility principle.
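For reference, Light is about as simple as a POJO gets. It looks roughly like this (consider it a sketch; the field names are illustrative rather than necessarily the real ones):

	// Sketch of the Light POJO: just the room and the light's name.
	// (Field names here are illustrative, not necessarily the real ones.)
	public class Light {

		private final String room;
		private final String name;

		public Light(String room, String name) {
			this.room = room;
			this.name = name;
		}

		public String getRoom() {
			return room;
		}

		public String getName() {
			return name;
		}
	}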

Now, I want to inject an implementation of this interface into my controller and, furthermore, I want to throw an exception if a client injects null. After all, the controller for lights can’t operate in any meaningful way if it doesn’t have a service that actually does things to the lights. And this is where goal number (2) comes in. It turns out that testing for a thrown exception is pretty straightforward:

@Test(expected=IllegalArgumentException.class)
public void constructor_Throws_Exception_On_Null_Service_Argument() {
	new LightController((LightManipulationService)null);
}

That’s all there is to it. As an aside, I’m pretty impressed with Eclipse’s ability to take action during my TDD. For instance, when I instantiated the controller this way, I got a red won’t-compile squiggly as one would expect. As an option for fixing it, Eclipse offered to declare a new constructor, a la CodeRush in Visual Studio (truth be told, VS may offer this too, but I’ve been using CodeRush for so long I don’t remember).

Now, my next goal was figuring out how to run an individual test, mostly for my own edification. Back to stackoverflow where I upvoted another question and answer:

In the package explorer unfold the class. It should show you all methods. Right click on the one method you want to run, then select Run As -> JUnit from the context menu (just tested with Eclipse 3.4.1). Also selecting “Run” on a single entry in the JUnit-results view to re-run a test works in the same way.

Sure enough, that did it. I can run it by right clicking the method or by highlighting it and using Ctrl-Shift-X, T. This is good enough for now, though what I’d really like is the ability that CodeRush and Visual Studio both confer to run a test with a key shortcut with my cursor inside the test. Perhaps that’ll be a goal for next time.

Now, for the meat of this post, a mocking framework. After getting that last test to pass, I now have a problem in that my code won’t compile, since I have another test that needs to inject something into the controller to get it to pass. For a mocking framework, I decided on Mockito. I chose this framework based entirely on “what did James Shore use in Let’s Play TDD”. My philosophy, generally speaking, is “get it working, optimize later”, so picking any framework and using it is better than deliberating long and hard. And, if a guy like James is using it, it’s probably worthwhile.
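Before getting to installation, here is roughly the kind of usage I’m after. This is just a minimal sketch of the Mockito API against my service interface, not code from the project:

	// Minimal Mockito sketch (not project code): mock the service interface,
	// stub toggleLight() to report success, exercise it, verify the interaction.
	// Assumes static imports of org.mockito.Mockito.* and org.junit.Assert.*;
	// the Light constructor arguments here are purely illustrative.
	@Test
	public void mockito_Sketch() {

		LightManipulationService myService = mock(LightManipulationService.class);
		when(myService.toggleLight(any(Light.class), anyBoolean())).thenReturn(true);

		assertTrue(myService.toggleLight(new Light("livingroom", "lamp"), true));

		verify(myService).toggleLight(any(Light.class), anyBoolean());
	}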

Installation was easy. I downloaded the jar from the download site and created a directory in my Eclipse folder called “externaljars” where I placed it. I have no idea if this is a good practice or not, but a tutorial I looked at suggested creating a C:\mockito directory, and I really prefer not to create clutter in root or anywhere else. Until someone tells me why not to, I’ll just stick these things in a sub-directory of Eclipse that I include in my build path.

So, next, I included this directory in my build path. 🙂 From there, I just added the Mockito import and defined the overload that I mentioned while fulfilling goal (1), and I had this CUT:

@Controller
@RequestMapping("/light")
public class LightController {

	public LightController(LightManipulationService lightManipulationService) {
		if(lightManipulationService == null) throw new IllegalArgumentException("lightManipulationService");
	}

	@RequestMapping("/light")
	public ModelAndView light() {
		return new ModelAndView();
	}
}

And 4 passing tests:


package com.daedtech.daedalustest.controller;

import static org.junit.Assert.*;
import static org.mockito.Mockito.*;

import java.lang.annotation.Annotation;
import java.lang.reflect.Method;

import org.junit.Test;
import org.springframework.util.Assert;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;

import com.daedtech.daedalus.controller.LightController;
import com.daedtech.daedalus.services.LightManipulationService;

public class LightControllerTest {

	private static LightController buildTarget() {
		return buildTarget(null);
	}
	
	private static LightController buildTarget(LightManipulationService service) {
		LightManipulationService myService = service != null ? service : mock(LightManipulationService.class);
		return new LightController(myService);
	}
	
	@Test(expected=IllegalArgumentException.class)
	public void constructor_Throws_Exception_On_Null_Service_Argument() {
		new LightController((LightManipulationService)null);
	}
	
	@Test
	public void light_With_No_Parameters_Returns_Instance_Of_Model_And_View() {
		
		LightController myController = buildTarget();
		
		Assert.isInstanceOf(ModelAndView.class, myController.light());		
		
	}
	
	@Test
	public void light_Is_Decorated_With_RequestMapping_Annotation() throws NoSuchMethodException, SecurityException {
		
		Class myClass = LightController.class;
		Method myMethod = myClass.getMethod("light");
		Annotation[] myAnnotations = myMethod.getDeclaredAnnotations();
		
		Assert.isTrue(myAnnotations[0] instanceof RequestMapping);
	}
	
	@Test
	public void light_RequestMapping_Has_Parameter_Light() throws NoSuchMethodException, SecurityException {
		
		Class myClass = LightController.class;
		Method myMethod = myClass.getMethod("light");
		Annotation[] myAnnotations = myMethod.getDeclaredAnnotations();
		
		String myAnnotationParameter = ((RequestMapping)myAnnotations[0]).value()[0];
		
		assertEquals("/light", myAnnotationParameter);
	}
}

Now, we’re getting somewhere! This class is going to be functional pretty soon!


Basic Unit Testing with JUnit for C# Developers 2

Last night, I posted about my adventures with TDD in Java from a C# developer’s perspective. As I start to shake my Java rust off a bit, I’m enjoying this more and more, so I think I’ll keep this series going for at least a bit, documenting some of my trials, travails, successes and failures. I don’t know that I intend to turn this into a long-running series, but I’m hoping to throw enough up to get a test-conscious C# developer off and running with Java.

Briefly, Some Good References

So, as part of this adventure, and to get off on the right foot, I’ve been referencing some external information. James Shore has been working on his blog series, “Let’s Play TDD” for over a year now. This is an excellent idea for those trying to get familiar with TDD as a practice. For me, I’m more interested in seeing the simple mechanics of testing in Java, such as where the handiest place to put the JUnit window is. Seriously. It sounds lame, but watching video of someone unit test in Eclipse is incredibly helpful for showing me little details that I’ve been missing and wouldn’t have thought to google.

Another reference is Jakob Jenkov’s tutorial on reflection for Java annotations. If you’ll recall, I mentioned this last time and, as it turns out, this, like many things in life, is possible. So, on that note and without further ado, here’s some code!

And Now For the Code!

	@Test
	public void light_Is_Decorated_With_RequestMapping_Annotation() throws NoSuchMethodException, SecurityException {
		
		Class myClass = LightController.class;
		Method myMethod = myClass.getMethod("light");
		Annotation[] myAnnotations = myMethod.getDeclaredAnnotations();
		
		Assert.isTrue(myAnnotations[0] instanceof RequestMapping);
	}
	
	@Test
	public void light_RequestMapping_Has_Parameter_Light() throws NoSuchMethodException, SecurityException {
		
		Class myClass = LightController.class;
		Method myMethod = myClass.getMethod("light");
		Annotation[] myAnnotations = myMethod.getDeclaredAnnotations();
		
		String myAnnotationParameter = ((RequestMapping)myAnnotations[0]).value()[0];
		
		assertEquals("/light", myAnnotationParameter);
	}

These are two new tests that I added. The first one reflects on the light controller class, seeing if the light() method has an annotation of type RequestMapping. The second test takes it a step further and sees if the first value of the RequestMapping is “/light” (this is the base URL to which I’m going to map).

And, here is the updated code that this drove:

@Controller
@RequestMapping("/light")
public class LightController {

	@RequestMapping("/light")
	public ModelAndView light() {
		return new ModelAndView();
	}
}

All I added was the annotation to light(). And this, unlike the last, more contrived example, I did in true TDD fashion. At this point, I should mention that I found a stackoverflow question about whether or not testing for the presence of annotations made sense. The accepted answer seemed to say that it’s fine, with a couple of dissenting responses below that.

Personally, as a mild digression, I find the dissent baffling, particularly if those people are familiar with TDD. I’m looking at my light controller class, which needs an annotation to work properly within the Spring MVC framework. It doesn’t currently have one. So… case closed. If I’m following TDD in earnest, I cannot go adding this without a red test. Uncle Bob is pretty clear on this point in his three rules of TDD: “You are not allowed to write any production code unless it is to make a failing unit test pass.” Now, I fancy myself more purist than pragmatist, so the reasoning behind this that speaks to me is that this is a testable alteration I’m making to my class, so why wouldn’t I test it?

Java-Things I’ve Learned

Here are a few random things I learned during tonight’s foray into Java TDD:

  • A more traditional import for asserts is org.junit.Assert.*;
  • “import static” versus just “import” allows me to use static methods without a qualifying type or being a child class of the class containing the static method. This feels icky to me, like C# extension methods, but I’m grudgingly using it for now with my tests and asserts (I may revert to a traditional import).
  • Java has a foreach loop: for (String myString : someStringArray). During my last go-round with Java, I’m pretty sure that this didn’t exist yet.
  • Java has an instanceof keyword. For my fellow C# travelers, here is your version of if (x is Foo). (See the quick sketch after this list.)
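To make those last few concrete, here is a quick throwaway test (nothing project-specific; the class name is purely for illustration) that uses a static import, the foreach loop, and instanceof together:

	// Throwaway sketch: a static import, Java's foreach loop, and instanceof.
	import static org.junit.Assert.assertTrue;

	import org.junit.Test;

	public class JavaSyntaxSamplerTest {

		@Test
		public void foreach_And_Instanceof_Work_As_Advertised() {

			Object[] someStrings = { "kitchen", "living room" };

			for (Object myString : someStrings) {       // the foreach loop
				assertTrue(myString instanceof String); // C#'s "x is Foo"
			}
		}
	}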


Basic Unit Testing with JUnit for C# Developers

As I’ve blogged previously, I’ve become increasingly dependent on TDD to the point where I’m basically addicted to the practice. I start to get nervous and twitchy if I’m writing code that isn’t driven by tests — it feels like putting a mop into a bucket of filthy water and then using it to ‘clean’. In other words, writing code without tests feels like pushing dirt around aimlessly while having no positive effect.

But, I digress. The purpose of this post today is to document my implementation of TDD in Java using JUnit, coming from two solid years of almost exclusive C# work. So, bear in mind that I may make some mistakes here or violate some best practices (and feel free to comment and correct me), but it’s my hope that I get the basics right and perhaps can help someone else going from C# to Java.

First Things First

I won’t go into a lot of detail here, but I’m using Eclipse and have set myself up for Spring development. I had created a small, working Spring web app, and I had a little code here, but wanted to test first with any new code. To do this, I followed my C#/Visual Studio instincts and went to create a separate project containing my tests. About 85% of people from a smattering of languages favored this approach in a poll by Phil Haack, and the approach earned an answer and a vote, if not top honors, on stackoverflow.

When you go this route in Eclipse, there is no JUnit project to create, so you just create a standard java project. I did this and populated it with a directory structure mirroring that of my actual application, putting the tests in the ‘same’ package as their class under test counterparts. And then, really all that was needed was to import the org.junit.Test library which, apparently, was already wherever it needed to be (I realize that this is not helpful if you don’t have it, but this really isn’t the emphasis of this post).

Onto the Tests

The first thing I did was to create a class called LightControllerTest, as I was interested in creating a LightController class. And, I needed that class to have a method called light() that would return a ModelAndView. So, I created the following test:

package com.daedtech.daedalus.controller;

import org.junit.Test;
import org.springframework.util.Assert;
import org.springframework.web.servlet.ModelAndView;

public class LightControllerTest {

	/*
	 * This should return an instance of model and view (apparently)
	 */
	@Test
	public void light_With_No_Parameters_Returns_Instance_Of_Model_And_View() {
		
		LightController myController = new LightController();
		
		Assert.isInstanceOf(ModelAndView.class, myController.light());		
		
	}
}

A few things to note here, fellow C# developers. One is that the equivalent of MSTest’s [TestMethod] is the Java @Test annotation. This tells the test runner that this is a unit test. Another thing to note is that I’m using the Spring framework’s assert, which may not be applicable if you’re not using Spring MVC. There is also JUnit’s assert available to you. I chose the Spring one because it had isInstanceOf(), which reminded me of MSTest’s “Assert.IsInstanceOfType()”.
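For what it’s worth, the same check works with plain JUnit asserts and instanceof. Here is a sketch of the alternative, assuming a static import of org.junit.Assert.assertTrue:

	// Plain-JUnit version of the same assertion, no Spring Assert required.
	@Test
	public void light_With_No_Parameters_Returns_Instance_Of_Model_And_View() {

		LightController myController = new LightController();

		assertTrue(myController.light() instanceof ModelAndView);
	}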

So, with my test written and not compiling, I wrote the following code:

package com.daedtech.daedalus.controller;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;

@Controller
@RequestMapping("/light")
public class LightController {

	public ModelAndView light() {
		//return new ModelAndView();
		return null;
	}
}

Now, I was primed to have a red test instead of a non-compiling one, but I needed to run the test itself. In Eclipse, there are various ways of doing it, but the closest I could come to Ctrl-R, T was Alt-Shift-X, T. Good enough – that seems to scope them the way MSTest does as well, with only the one test running, even though I defined another in a different class. But, as with Visual Studio, there are a number of different ways to run the tests — from the little green “play” button dropdown, from the context menu right clicking on the project, from within the JUnit window that appears once you run the tests, etc. So, I ran the test, saw it fail, and deleted the return null line in favor of the one that would make it pass. A little contrived, I realize, but you’ll have to cut me a bit of slack as I iron out the early kinks. Later, I’ll write tests that fail before they pass — I promise.

I’ll have to play for a bit to get myself really familiar with the ins and outs, and I’ll probably follow with more posts like this. I’m also going to be muddling my way through other random issues like “is it appropriate (or even possible) to test that a method is annotated” and “is there anything like NCrunch for Java/Eclipse”? Stay tuned! 🙂


Writing Maintainable Code Demands Creativity

Writing maintainable code is for “Code Monkeys”?

This morning, I read an interesting blog post from Erik McClure. The premise of the post is that writing maintainable code is sufficiently boring and frustrating as to discourage people from the programming profession. A few excerpts from the post are:

There is an endless stream of rheteric[sic] discussing how dangerous writing clever code is, and how good programmers don’t write clever code, or fast code, but maintainable code, that can’t have clever for loop statements or fancy tricks. This is wrong – good codemonkeys write maintainable code.

and

What I noticed was that it only became intolerable when I was writing horrendously boring, maintainable code (like unit tests). When I was programming things I wasn’t supposed to be programming, and solving problems in ways I’m not supposed to, my creativity had found its outlet. Programming can not only be fun, but real programming is, itself, an art, a solution to a problem that itself embodies a certain elegance. Society in general seems to be forgetting what makes us human in the wake of a digital revolution that automatizes menial tasks. Your creativity is the most valuable thing you have.

When you write a function that does things the right way, when you refactor a clever subroutine to something that conforms to standards, you are tearing the soul out of your code. Bit by bit, you forget who you are and what the code means. The standards may enforce maintainable and well-behaved code, but it does so at the cost of your individuality. Coding becomes middle-school math, where you have to solve the same, reworded problem a hundred times to get a good grade so you can go do something else that’s actually useful. It becomes a means to an end, not an adventure in and of itself.

Conformity for the Sake of Maintainability

There seem to be a few different themes here. The first one I see is one with which I have struggled myself in the past: chafing at being forced to change the way you do things to conform to a group convention. I touched on that here and here. The reason that I mention this is the references to “[conforming] to standards” and the apparent justification of those standards being that they make code “maintainable”. The realpolitik of this situation is such that it doesn’t really matter what justification is cited (appeal to maintainability, appeal to convention, appeal to anonymous authority, appeal to named authority, appeal to threat, etc). In the end, it boils down to “because I said so”. I mention this only insofar as I will dismiss this theme as not having much to do with maintainability itself. Whatever Erik was forced to do may or may not have actually had any bearing whatsoever on the maintainability of the code (i.e. “maintainability” could have been code for “I just don’t like what you did, but I should have an official sounding reason”).

Maintainable Code is Boring Code

So, on to the second theme, which is that writing maintainable code is boring. In particular Erik mentions unit tests, but I’d hazard a guess that he might also be talking about small methods, SRP classes, and other clean coding principles. And, I actually agree with him to an extent. Writing code like that is uneventful in some ways that people might not be used to.

That is, say that you don’t perform unit tests, and you write large, coupled, complex classes and methods. When you fire up your application for the first time after a few hours of coding, that’s pretty exciting. You have no idea what it’s going to do, though the smart money is on “crash badly”. But, if it doesn’t, and it runs, that’s a heady feeling and a rush — like winning $100 in the lottery. The work is also interesting because you’re going to be spending lots of time in the debugger, writing bunches of local variables down on paper to keep them straight. Keeping track of all of the strands of your code requires full concentration, and there’s a feeling of incredible accomplishment when you finally track down that needle in-a-haystack bug after an all-nighter.

On the flip side, someone who writes a lot of tests and conforms to the clean code/craftsmanship mantra has a less exciting life. If you truly practice TDD, the first time you fire up the application, you already know it’s going to work. The lottery-game excitement of longshot odds with high payoff is replaced by a dependable salary. And, as for the maddening all-nighter bugs, those too are gone. You can pretty much reproduce a problem immediately, and solve it just as quickly with an additional failing test that you make pass. The underdog, down by a lot all game, followed by miracle comeback is replaced by a game where you’re winning handily from wire to wire. All of the roller-coaster highs and lows with their panicked all nighters and miracle finishes are replaced by you coming in at 9, leaving at 5, and shipping software on time or ahead of schedule.

Making Code Maintainable Is Brainless

The third main theme that I saw was the idea that writing clever code and writing maintainable code are mutually exclusive, and that the latter is brainless. Presumably, this is colored to some degree by the first theme, but on its own, the implication is that maintainable code is maintainable because it is obtuse and insufficient to solve the problem. That is, instead of actually solving the problems that they’re tasked with, the maintainability-focused drones oversimplify the problem and settle for meeting most, but not all, of the requirements of the software.

I say this because of Erik’s vehement disagreement with the adage that roughly says “clever code is bad code”. I’ve seen this pithy expression explained in more detail by people like Uncle Bob (Robert Martin) and I know that it requires further explanation because it actually sounds discouraging and draconian stated simply. (Though, I think this is the intended, provocative effect to make the reader/listener demand an explanation). But, taken at face value I would agree with Erik. I don’t relish the thought of being paid a wage to keep quiet and do stupid things.

Maintainability Reconsidered

Let’s pull back for a moment and consider the nature of software and software development. In his post, Erik bleakly points out that software “automatizes[sic] menial tasks”. I agree with his take, but with a much more optimistic spin — I’m in the business of freeing people from drudgery. But, either way, there can be little debate that the vast majority of software development is about automating tasks — even game development, which could be said to automate pretend playground games, philosophically (kids don’t need to play “cops and robbers” on the playground using sticks and other makeshift toys when games about detectives and mobsters automate this for them).

And, as we automate tasks, what we’re doing is taking tasks that have some degree of intrinsic complexity and falling on the grenade of that complexity so that completing the task is simple, intuitive, and even pleasant (enter user experience and graphic design) for prospective users. So, as developers, we deal with complexity so that end users don’t have to. We take complicated processes and model them, and simplify them without oversimplifying them. This is at the heart of software development and it’s a task so complex that all manner of methodologies, philosophies, technologies, and frameworks have been invented in an attempt to get it right. We make the complex simple for our non-technical end users.

Back in the days when software solved simpler problems than it does now, things were pretty straightforward. There were developers and there were users. Users didn’t care what the code looked like internally, and developers generally operated alone or perhaps in small teams for any given task. In this day and age, end-users still don’t care what the code looks like, but development teams are large, sometimes enormous, and often distributed geographically, temporally, and according to specialty. You no longer have a couple of people that bang out all necessary code for a project. You have library writers, database people, application coders, graphic designers, maintenance programmers etc.

With this complexity, an interesting paradigm has emerged. End-users are further removed, and you have other, technical users as well. If you’re writing a library or an IDE plugin, your users are other programmers. If you’re writing any code, the maintenance programmer that will come along later is one of your users. If you’re an application developer, a graphic designer is one of your users. Sure, there are still end-users, but there are more stakeholders now to consider than there used to be.

In light of this development, writing code that is hard to maintain and declaring that this is just how you do things is a lot like writing a piece of code with an awful user interface and saying to your users “what do you want from me — it works, doesn’t it?” You’re correct, and you’re destined to go out of business. If I have a programmer on my team who consistently and proudly writes code that only he understands and only he can decipher, I’m hoping that he’s on another team as soon as possible. Because the fact of the matter is that anybody can write code that meets the requirements, but only a creative, intelligent person can do it in such a way that it’s quickly understood by others without compromising the correctness and performance of the solution.

Creativity, Cleverness and Maintainability

Let’s say that I’m working and I find code that sorts elements in a list using bubble sort, which is conceptually quite simple. I decide that I want to optimize, so I implement quick sort, which is more complex. One might argue that I’m being much more clever, because quick sort is a more elegant solution. But, quicksort is harder for a maintenance programmer to grasp. So, is the solution to leave bubble sort in place for maintainability? Clearly not, and if someone told Erik to do that, I understand and empathize with his bleak outlook. But then, the solution also isn’t just to slap quicksort in and call it a day either. The solution is to take the initial implementation and break it out into methods that wrap the various control structures and have descriptive names. The solution is to eliminate one method with a bunch of incrementers and decrementers in favor of several with more clearly defined scope and purpose. The solution is, in essence, to teach the maintenance programmer quicksort by making your quicksort code so obvious and so readable that even the daft could grasp it.
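To make that concrete, here is a toy sketch (not from any real codebase) of what I mean: the same quicksort, but with the control structures pulled out into methods whose names do the explaining:

	// A sketch of the idea: quicksort, broken into methods whose names narrate
	// the algorithm so a maintenance programmer can follow along. (Toy example.)
	public class ReadableQuickSort {

		public static void sort(int[] values) {
			sortBetween(values, 0, values.length - 1);
		}

		private static void sortBetween(int[] values, int first, int last) {
			if (first >= last) {
				return; // zero or one element: already sorted
			}
			int pivotPosition = partitionAroundLastElement(values, first, last);
			sortBetween(values, first, pivotPosition - 1);
			sortBetween(values, pivotPosition + 1, last);
		}

		// Moves everything smaller than the pivot (the last element) to its left,
		// everything else to its right, and returns the pivot's final position.
		private static int partitionAroundLastElement(int[] values, int first, int last) {
			int pivot = values[last];
			int boundary = first; // next slot for a value smaller than the pivot
			for (int current = first; current < last; current++) {
				if (values[current] < pivot) {
					swap(values, boundary, current);
					boundary++;
				}
			}
			swap(values, boundary, last);
			return boundary;
		}

		private static void swap(int[] values, int left, int right) {
			int temporary = values[left];
			values[left] = values[right];
			values[right] = temporary;
		}
	}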

That is not easy to do. It requires creativity. It requires knowing the subject matter not just well enough to get it right through trial and error, not just well enough to know it cold and not just well enough to explain it to the brightest pupil, but well enough that your explanation shines through the code and is unmistakable. In other words, it requires creativity, mastery, intelligence, and clarity of thought.

And, when you do things this way, unit tests and other ‘boring’ code become less boring. They let you radically alter the internal mechanism of an algorithm without changing its correctness. They let you conduct time trials on the various components as you go to ensure that you’re not sacrificing performance. And, they document further how to use your code and clarify its purpose. They’re no longer superfluous impositions but tools in your arsenal for solving problems and excelling at what you do. With a suite of unit tests and refactored code, you’re able to go from bubble sort to quicksort knowing that you’ll get immediate feedback if something goes wrong, allowing you to focus exclusively on a slick algorithm. They’ll even allow you to go off tilting at tantalizing windmills like an unprecedented linear-time sort — hey, if the sort runs in linear time and all the tests go green, you’re up for some awards and speaking engagements. For all that cleverness, you ought to get something.
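And the tests that back this up only care about the contract, which is exactly what lets you swap one sort implementation for another without touching them. Here is a quick sketch against the toy class above:

	// Sketch: these tests only care that the output is sorted, so the
	// implementation behind sort() can change freely without touching them.
	import static org.junit.Assert.assertArrayEquals;

	import org.junit.Test;

	public class ReadableQuickSortTest {

		@Test
		public void sort_Orders_An_Unsorted_Array() {
			int[] values = { 5, 1, 4, 2, 3 };

			ReadableQuickSort.sort(values);

			assertArrayEquals(new int[] { 1, 2, 3, 4, 5 }, values);
		}

		@Test
		public void sort_Leaves_An_Already_Sorted_Array_Alone() {
			int[] values = { 1, 2, 3 };

			ReadableQuickSort.sort(values);

			assertArrayEquals(new int[] { 1, 2, 3 }, values);
		}
	}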

So Why is “Cleverness” Bad?

What do they mean that cleverness is bad, anyway? Why say something like that? The aforementioned Bob Martin, in a video presentation I once watched, said something like “you know you’re looking at good code when someone reading it is saying ‘uh-huh’, ‘uh-huh’, ‘yep’, ‘makes sense’, ‘uh-huh'”. Contrast this with code that you see where your first reaction is “What on Earth…?!?” That is often the reaction to non-functional or incorrect code, but it is just as frequently the reaction to ‘clever’ code.

The people who believe in the idea of avoiding ‘clever’ code are talking about Rube-Goldbergian code, generally employed to work around some hole in their knowledge. This might refer to someone who defines 500 functions containing 1 through 500 if statements because he isn’t aware of the existence of a “for” loop. It may be someone who defines and heavily uses something called IndexedList because he doesn’t know what an array is. It may be this, this, this or this. I’ve seen this in the wild, where I was looking at someone’s code and I said to myself, “for someone who didn’t know what a class is, this is a fairly clever oddball attempt to replicate the concept.”

The term ‘clever’ is very much tongue-in-cheek in the context of clean coding and programming wisdom. It invariably means a quixotic, inferior solution to a problem that has already been solved and whose solution is common knowledge, or it means a needlessly complex, probably flawed way of doing something new. Generally speaking, the only person who thinks that it is actually clever, sans quotes, is the person who did it and is proud of it. If someone is truly breaking new ground, that solution won’t be described as ‘clever’ in this sense, but probably as “innovative” or “ground-breaking” or “impressive”. Avoiding ‘clever’ implementations is about avoiding pointless hacks and bad reinventions of the wheel — not avoiding using one’s brain. If I were coining the phrase, I’d probably opt for “cute” over “clever”, but I’m a little late to the party. Don’t be cute — put the considerable brainpower required for elegant problem solving to problems that are worth solving and that haven’t already been solved.


TDD and CodeRush

TDD as a Practice

The essence of Test Driven Development (TDD) can be summarized most succinctly as “red, green, refactor”. Following this practice will tend to make your code more reliable, cleaner, and better designed. It is no magic bullet, but the you that does TDD is a better programmer than the you that doesn’t. However, a significant barrier to adoption of the practice is the (justifiable) perception that you will go slower when you start doing it. This is true in the same way that sloppy musicians get relatively proficient being sloppy, but if they really want to excel, they have to slow down, perfect their technique, and gradually speed things back up.

My point here is not to proselytize for TDD. I’m going to make the a priori assumption that testing is better than not testing and TDD is better than not doing TDD, and leave it at that. My point here is that making TDD faster and easier would grease the skids for its broader adoption and continued practice.

My Process with MS Test

I’ve been unit testing for a long time, testing with TDD for a while, and also using CodeRush for a while. CodeRush automates all sorts of refactorings and generally makes development in Visual Studio quicker and more keyboard driven. As much as I love it and stump for it, though, I had never gotten around to using its test runner until recently.

I saw the little test tube icons next to my tests and figured, “why bother with that when I can just hit Ctrl-R, T to execute my MS Test tests in scope?” But a couple of weeks ago, I tried it on a lark, and I decided to give it a fair shake. I’m very glad that I did. The differences are subtle, but powerful, and I find myself a more efficient TDD practitioner for it.

Here is a screenshot of what I would do with the Visual Studio test runner:

I would write a unit test, and hit Ctrl-R, T, and the Visual Studio test results window would pop up at the bottom of my screen (I auto-hide almost everything because I like a lot of real estate for looking at code). For the first run, the screenshot there would be replaced with a single failing test. Then, I would note the failure, open the class and make changes, switch back to the unit test file, and run all of the tests in the class, producing the screenshot above. When I saw that this had worked, I would go back to the class and look either to refactor or for another failing test to add for fleshing out my class further.

So, to recap, write a test, Ctrl-R, T, pop-up window, dismiss window, switch to another file, make changes, switch back, Ctrl-R, T, pop-up window, dismiss pop-up window. I’m very good at this in a way that one tends to be good at things through large amounts of practice, so it didn’t seem inefficient.

If you look at the test results window too, it’s often awkward to find your tests if you run more than just a few. They appear in no particular order, so seeing particular ones involves sorting by one of the columns in the results window. There tends to be a lot of noise this way.

Speeding It Up With CodeRush

Now, I have a new process, made possible by the way CodeRush shows test pass/fail:

Notice the little green checks on the left. In my new process, I write a test, hit Ctrl-T, R, and note the failure. I then switch to the other class and make it pass, at which time, I hit Ctrl-T, L (which I have bound to “repeat last test run”) before going back to the test class. As that runs, I switch to the test class and see that my test now passed (I can also easily run all tests in the file if I choose). Now, it’s time to write my next test.

So, to recap here, it’s write a test, Ctrl-T, R, switch to another file, make changes, Ctrl-T, L, switch back. I’ve completely eliminated dealing with a pop-up window and created a situation where my tests are running as I’m switching between windows (multi-tasking at its finest).

In terms of observing results, this has a distinct advantage as well. Instead of the results viewer, I can see green next to the actual code. That’s pretty powerful because it says “this code is good” rather than “this entry is good, and if you double click it, you can see which code is good”. And, if you want to see a broader view than the tests, you can see green next to the class and the namespace. Or, you can launch the CodeRush test runner and see a hierarchical view that you can drill into, rather than a list that you can awkwardly sort or filter.

Does This Really Matter?

Is shaving off this small an amount of time worth it? I think so. I run tests dozens, if not hundreds of times per day. And, as any good programmer knows, shaving time off of your most commonly executed tasks is at the heart of optimizing a process. And, if we can shave time off of a process that people, for some reason, view as a luxury rather than a mandate, perhaps we can remove a barrier to adopting what should be considered a best practice — heck, a prerequisite for being considered a professional, as Uncle Bob would tell you.