DaedTech

Stories about Software


There’s No Excuse

If you have a long, error-prone process, automate it. This is the essential underpinning of what we all do for a living and informs our day-to-day routines. Whether we’re making innovative new consumer applications, games, line of business applications, or anything else, we look at other people’s lives and say, “we can make that automatic.” “Don’t use a pen and paper for your todo list when there’s GTasks.” “Don’t play Risk with a board and dice when you can do it online (or play an Elder Scrolls game instead).” “Don’t key all that nonsense in by hand.”

And yet, far too often we fail to automate our own lives and work. I wonder if there’s some kind of cognitive blind spot we have. When other people mindlessly perform laborious tasks, we jump in and point out how dumb what they’re doing is and how we can sell them something that will rock their world. But when we mindlessly perform laborious tasks (data entry, copy-and-paste programming, etc.) we don’t think twice or, if we do, we assure ourselves that it’s too complicated or not worth automating or something.

I was going through some old documentation the other day to look up what was needed to add another instance of a feature to a legacy application. What it boiled down to was something along the lines of “we have an application that sells computers and we want to add a new kind of PCI card to the configurable options.” Remarkably, this involved hand-adding things to all sorts of different metadata tables in a database, along with updating some stored procedures. The application, generally speaking, was smart enough to do the rest without code changes, even including GUI updates, so that was a win. But hand-adding things to tables was… annoying.

If only there were something that would let people add things to the database without using the query explorer tool and doing it by hand… something where you could write instructions in some kind of “language,” if you will, that would translate those instructions into something visual and easy to understand so that people could add metadata more simply. Hmmm.

Wait, I’ve got it! How about instead of a document explaining how to add a bunch of records to the database, you write an administrative function into your GUI that automates it? What a victory! You get the metadata added and your new PCI option, and you also remove the possibility that you’ll mangle or forget one of the entries and leave the system in a poor state. This really is the only option; it’s not an either-or kind of situation. Hand-adding things to your database is a facepalm.
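To make that concrete, here’s a minimal sketch of what such an administrative function might look like. The table names, column names, and the AddConfigurableOption method are all hypothetical stand-ins for whatever your application’s metadata actually looks like; the point is that the related inserts happen together, in a transaction, with no human typing into a query tool.

using System.Data.SqlClient;

public class OptionAdministrator
{
	private readonly string _connectionString;

	public OptionAdministrator(string connectionString)
	{
		_connectionString = connectionString;
	}

	// Adds a new configurable option (e.g., a new kind of PCI card) by inserting
	// all of the related metadata rows in a single transaction. If any insert
	// fails, nothing is committed and the system is never left half-configured.
	// Table and column names here are purely illustrative.
	public void AddConfigurableOption(string optionName, string category)
	{
		using (var connection = new SqlConnection(_connectionString))
		{
			connection.Open();
			using (var transaction = connection.BeginTransaction())
			{
				Insert(connection, transaction,
					"INSERT INTO ConfigurableOption (Name, Category) VALUES (@name, @category)",
					optionName, category);
				Insert(connection, transaction,
					"INSERT INTO OptionMetadata (OptionName, DisplayGroup) VALUES (@name, @category)",
					optionName, category);
				transaction.Commit();
			}
		}
	}

	private static void Insert(SqlConnection connection, SqlTransaction transaction,
		string sql, string name, string category)
	{
		using (var command = new SqlCommand(sql, connection, transaction))
		{
			command.Parameters.AddWithValue("@name", name);
			command.Parameters.AddWithValue("@category", category);
			command.ExecuteNonQuery();
		}
	}
}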

What’s the moral of this story? To me, it’s this: if you have a giant document detailing manual steps for programmers to follow to get something done, what you really have is a spec/user story for your next development cycle. Automate all the things, and then burn those documents at a cathartic, gleeful campfire. You can turn your onerous processes into roasted marshmallows.


Cleaning Up Your Build

Today, I’d like to make a post that’s relatively short and to the point, and it’s that way because what I’m going to talk about is so basic and fundamental that it won’t take long to say it. It is absolutely critical that you have nothing standing between the stuff checked into your project’s source control and a completely successful build and deployment of your software.

Here’s the first thing that should absolutely be true. Someone brand new to the team should be able to take a computer with nothing special installed on it, point it at the latest good version of your project in source control, get the latest code, build it in the IDE, and be able to run your software when the build is done. If, when this person runs the software, weird things happen or it crashes, you’ve failed. If this person can’t even successfully build the software, you’ve failed badly. If this person can’t even compile the software, things are really ugly. If this person can’t get the software out of source control, you’re probably using Rational ClearCase, and that poor person coming to the team has no idea what’s coming for the next months and years. Not being able to get things out of source control without jumping through hoops is a total fail.

It is absolutely critical that right away, on the first day, someone can get your software from source control, build it, and run it without issues. If this isn’t possible right now, make a project plan/user story/whatever for making it possible in the future, and give this a high priority. Think of yourself as a restaurant that has severe health code violations. Sure, you could ignore them and continue cranking out your signature dish, “Spaghetti with Botulism,” but it’s not advisable. You need to have a clean and sanitary work environment, free from nasty cruft and residue that’s worked its way into being a normal part of your process.

Once you’ve got a “source control to runtime” path that’s pristine, you need to make sure this is also the case for deployment. You should be able to fire up a clean target machine or VM, deploy your deliverables to it in some automated fashion, and have it work. What you shouldn’t need to do is install MS Word on there or remember to copy over those six license files and that .trx thing in that one directory. Oh yeah, and a registry setting or something.

As soon as you’re doing stuff like that, you have a polluted build because you have a point of failure. You have your “automated” process, and then you have the thing that you just have to remember to do every time or things go badly. If any of your process is manual, you WILL mess it up and cause problems. We’re human and it’s inevitable. This is especially true if you aren’t agile and deployments happen only rarely. If you can, eliminate the manual step, but if you can’t, then automate it. Deploying should be dead simple.
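As a trivial, purely hypothetical illustration (your deliverables and their destinations will obviously differ), even the “remember to copy those license files” step can become a few lines of code that either work or fail loudly, instead of a step that lives in someone’s memory:

using System;
using System.IO;

public static class Deployer
{
	// Copies everything the application needs to the target directory and fails
	// loudly if anything is missing. The "extra" artifact names are made up; the
	// point is that nothing depends on a human remembering them.
	public static void Deploy(string buildOutputDirectory, string targetDirectory)
	{
		Directory.CreateDirectory(targetDirectory);

		foreach (var file in Directory.GetFiles(buildOutputDirectory))
			File.Copy(file, Path.Combine(targetDirectory, Path.GetFileName(file)), true);

		var requiredExtras = new[] { "license1.lic", "license2.lic", "results.trx" };
		foreach (var extra in requiredExtras)
		{
			var source = Path.Combine(buildOutputDirectory, "Extras", extra);
			if (!File.Exists(source))
				throw new FileNotFoundException("Deployment aborted; required artifact is missing.", source);
			File.Copy(source, Path.Combine(targetDirectory, extra), true);
		}

		Console.WriteLine("Deployed to " + targetDirectory);
	}
}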

And something else to bear in mind is that past sins aren’t forgiven. In other words, if you have a deployment process now that’s simple and one click and works every time with something like XCopy, that doesn’t mean you’re out of the woods. The “on a clean machine” requirement is critical. If you’re XCopying over existing files to deploy, you might have some weird one-off thing that you did to the server 2 years ago and have forgotten all about. You need to make sure you can nuke your whole deployment, redo it, and have it work.

If it sounds as though I’m being a hardliner or extremist, perhaps that’s the case, but I think it’s justifiable on this subject. You can’t negotiate with cargo cult build processes. They have to be eliminated because there is absolutely no upside and pure downside. Think about your own source control, build, and deployment processes and ask yourself if there are things that need to be weeded out. And you know what? Just take a crack at it. I’ve done this sort of thing myself on a lot of different projects, and I’ve always found that it’s never as hard to fix the problems as you think it’s going to be. Usually it’s just that no one thinks to try.


Intro to Unit Testing 10: The Business Value of Unit Tests

Backstory

I worked for a consulting firm for a while. We didn’t make anything particularly exciting — line of business applications and the like were about the extent of it. The billing model for clients was dead simple and resembled the way that lawyers charge; consultants had an hourly rate, and we kept diligent track of our time, to the nearest quarter hour. There was a certain feel-good element to this oversimplification of knowledge work, in the same way that it’s pleasant to lean back, watch Superman defeat Lex Luthor, and delight in a PG world where Good vs. Evil grudge matches always end with Good coming out the victor.

It’s pleasant to think that writing software has the predictable, low-thought cadence of an activity like chopping wood, where each 15 minutes spent produces a fairly constant amount of value to the recipient of the labor. (Cue background song: Lou Reed, “Perfect Day.”) Chop for 15 minutes, collect $3, hand over X chopped logs. Chop for 1 hour, collect $12, hand over 4X chopped logs. Write software for 15 minutes, produce a working, 15 LOC application for $25. Write software for 1 hour, produce a working, 60 LOC application for $100. Oh, such a perfect day.

When I started at the company, I asked some people if they wrote unit tests. The answer was generally ‘no,’ and the justification for this was that you’d have to run it by the client, and the client most likely wouldn’t want to pay for you to write unit tests. What they meant by this was that since we billed in quarter hour increments and supplied invoices with detailed logs of all activity, it’d be sort of hard to sneak in 15 minutes of writing automated test code. Presumably, the fear was that the client would say, “what’s this ‘unit testing’ stuff, and why did you do it when you didn’t say anything about it?” I say “presumably” because this wasn’t the reason people didn’t unit test at this company, just like whatever excuse they have at your company isn’t the real reason for not unit testing there. The real reason is usually not knowing how to do it.

Why did I start out with this anecdote and its centerpiece of the quarter hour billing and development cadence? Well, simply because software development is a creative exercise and far too spastic to flow along smoothly in a low viscosity stream of lines of code per minute. You may sit and stare blankly at a computer screen while contemplating design for half an hour, code for 4 minutes, stare blankly again for an hour, code for 20 minutes, and then finish the product. So, 24 minutes — is that a billable half hour or 15 minutes? Closer to 30, I suppose. Do you count the blank staring? On the one hand, this was the real work — the knowledge work — in a way that the typing certainly wasn’t. So should you bill 1.5 hours instead and just count the typing as a brainless exercise? Or should you bill 2 hours because the work is a gestalt? I personally think that the answer is obvious and the gestalt billing model cuts right to the notion that software development is a holistic exercise that involves delivering a working product, and the breakdown may include typing, thinking, white-boarding, searching Stack Overflow, debugging, squinting at a GUI, talking to another developer, going for a 5 minute walk for perspective, running a static analysis tool, tracking down a compiler warning, copying 422 files to a target directory, and yes, my friend, unit testing.

Those are all things that you do as part of writing good software. And, in a consulting paradigm, you wouldn’t cut one of them out and say, “the client wouldn’t want to pay for that,” because the client doesn’t know what it’s talking about when you’re under the software-writing hood — that’s why they’re paying you. They wouldn’t want to pay for “searching Stack Overflow” or “squinting at the GUI” either, but you don’t refuse to do those things when you’re writing software. And so refusing to unit test for this same reason is a cop-out. When a younger developer at that firm asked me why I wrote unit tests and how I accounted for them in my billing, this was essentially the argument I gave — I asked him how he accounted for the time he spent compiling, debugging, and running the application and, bright guy that he is, he understood what I was saying immediately.

Core Business Value

This may seem like a roundabout and long introduction to this chapter, but it really cuts to the core of the business value proposition for unit tests. During development, why do developers compile, run, and debug? Well, they do it to see if their code is doing what they think it should do. Write some code, then make sure it’s doing what you expect. So why write unit tests? To make sure your code is doing what you expect, and to make sure it keeps doing what you expect via automation. The core business value of unit tests is that they serve as progress markers, signposts, and guard rails on the road to an application that does what you expect.

Unit tested applications are more predictable and better documented than their non-unit tested counterparts (assuming the same amount of API documentation and commenting is done), and there is an enormous amount of business value in predictability and clarity of intent. With a good unit test suite, you’ll know in minutes if you’ve introduced a regression bug. Without that unit test suite, when will you know? When you run the application GUI? When QA runs it a week later? When the customer runs it a month later? When something randomly goes haywire a year later? Each of these later discoveries is dramatically more expensive than the one before it.

That’s not the only value-add from a business perspective (and I’ll list some other ones next), but it’s the main one, as far as I’m concerned. It also explains why the notion that you need to carve out some extra time for unit tests and figure out whether the customer wants them or not is preposterous. Do you think the customer is going to get angry if you explain that part of your development process is to execute the code you just wrote to make sure it doesn’t crash? If the answer to that is “no, of course not, that’s ridiculous” then you also have the answer to whether or not a customer would care if you happened to automate that process.

Of course, one thing to bear in mind is that a customer may not want to pay for you to learn unit testing on the job, and that’s a fair point. But if the customer (or your company, internally, if you aren’t a consultant) doesn’t want to foot the bill for this, then you should strongly consider picking it up on your own and then switching customers/companies if they don’t buy in to something as fundamental as automating predictability. Unit tests are the software equivalent of accountants practicing double entry bookkeeping, doctors washing their hands, electricians turning the power back on before leaving, and plumbers doing the same with the water. Imagine if your plumber sweated a joint for your new shower, sized it up, and then said, “meh, I’m sure it’s fine” and left without ever running the water. That’s what tens of thousands of us do every day when we just assume some piece of code works because it worked a month ago and we don’t remember touching it since then. Ship it? Meh, sure, whatever — it’s probably fine. The business value of unit tests is a stronger assurance that we know what’s going on than “meh, sure, whatever.”

Ancillary Business Value

Here are some other ways in which unit tests add value to the business beyond confirming that the application behaves as expected.

First, unit tests tend to serve nicely as documentation. This may sound strange at first; you’re probably thinking, “how is a bunch of code documentation when we have a whole activity associated with documenting our code?” Well, the fact of the matter is that documentation in the form of writeups, code comments, instruction manuals, etc., tends to get out of date as the product ages. Unit tests, however, are never out of date, because if they were, the build would fail (or at least you’d see red when you ran them) and you’d be forced to go back and “fix the documentation.” If you keep your unit test methods clean and give them good names as described in earlier chapters of this series, they’ll also read more like a book than like code, and they’ll document the purpose and intended behavior of the system.

Unit tests also guard against regression. When you write the tests, you’re confirming that the software does what you expect it to at that moment. But what about later? Maybe later you forget what you intended in that moment or decide that you intended something different and you change the code. Will it still work? In a lot of legacy code bases, the answer to that question is, “yikes — who knows?” With a thoughtfully unit tested code base, you can rig it so that a test goes red if a design assumption that you made is no longer true. For instance, say you write some method with the intention that it never return null, and say that eventually you and your teammates build on this method and its assumed post-condition, grabbing values returned by the method and never checking for null before dereferencing them. If someone later modifies that method and adds a condition in which it returns null, the only thing standing between them and introducing a regression bug is a unit test that fails if that method returns null.
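To sketch what that guard rail might look like (the CustomerRepository class and its FindByName method are invented purely for illustration), the test can pin the never-returns-null assumption down explicitly:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerRepositoryTest
{
	// CustomerRepository and FindByName are hypothetical names for illustration.
	// This test guards the design assumption that FindByName never returns null;
	// callers dereference the result without checking, so a null here is a regression.
	[TestMethod]
	public void FindByName_Returns_Empty_Result_Instead_Of_Null_When_No_Match_Exists()
	{
		var repository = new CustomerRepository();

		var result = repository.FindByName("name that matches nothing");

		Assert.IsNotNull(result);
	}
}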

The practice of automated unit testing has after the fact benefits such as documentation and guards against regression bugs, but it also helps during the course of development by having a positive impact on your design. I’ve long been a fan of and have previously linked to this excellent talk by Michael Feathers called “The Deep Synergy Between Testability and Good Design.” The general idea is that writing code with the knowledge that you’re going to be writing tests for it (or practicing TDD) leads you to write small, factored classes and methods that are loosely coupled and that this practice, in turn, creates flexible and maintainable code. Or, consider the converse and think of how hard it is to write unit tests for giant, procedural methods and classes. Unit tests make it harder to do things that make your code awful.

Lastly, I’ll throw in a benefit that summarizes my take on this entire subject and really drives things home. It’s a huge bit of editorializing, but I feel somewhat entitled to do so in my own conclusion. I believe that a serious piece of value added by unit testing is that it lends you or your group legitimacy and credibility. In this day and age, the question, “should you unit test your code” is basically considered to be settled case law in the industry. So the question, to a large extent, boils down to whether you write tests or whether you have excuses, legitimate or otherwise (and there are legitimate ones, such as “I don’t know how, yet.”) Don’t be in the camp that has excuses.

Forget justifying what you or your organization has done up to this point, and imagine yourself as a customer of software development. You’ve got a budget, and you’re looking to have some software written that you don’t have time, yourself, to write. All other things being equal, which group do you hire? Do you hire a group that responds to “do you unit test” with “no, we don’t think our customers would want that?” How about a group that responds with “well, there’s this database and this GUI and sometimes there’s hardware, so we really can’t?” Or do you hire a group that responds with, “we sure do, would you like to see some samples?” I bet it’s the last one, if you’re honest with yourself.

So be that last group. Add value to your users and your business. Write good software and consider your design carefully, and, just as importantly, automate the process of ensuring your software does what you think it does. Your credibility and the credibility of your software is at stake.


Intro to Unit Testing 9: Tips and Tricks

This, I’ve decided, is going to be the penultimate post in this series. I had originally targeted a 9-part series, finishing up with a “tips and tricks” post, the way you often see with technical books. But I think I’m going to add a 10th post to the series where I make the business case for unit testing. That seems like a more compelling wrap-up to the series. So, stay tuned for one more after this.

This post is going to be a collection of tips and tricks for new and veteran unit testers.

Structure the test names for readability beyond just descriptive names.

The first step in making your unit tests a true representation of business statements is to give them nice, descriptive names. Some people do this with “given, when, then” verbiage, and others are simply verbose and descriptive. I like mine to read like conversational statements, and this creates almost comically long names like “GetCustomerAddress_Returns_Null_When_Customer_Is_Partially_Registered.” I’ve been given flak for things like this during my career, and you might get some as well, but do you know what? No one is ever going to ask you what this test is checking. And if it fails, no one is ever going to say, “I wonder why the unit test suite is failing.” They’re going to know that it’s failing because GetCustomerAddress is returning something other than null for a partially registered customer.

When you’re writing unit tests, it’s easy to gloss over the names of the tests the way you might with methods in your production code. And, while I would never advocate glossing over naming anywhere, you especially don’t want to do it with unit tests, because unit test names are going to wind up in a report generated by your build machine or your IDE, whereas production method names won’t unless you’re using a static analysis tool with reporting, like NDepend.

But it goes beyond simply giving tests descriptive names. Come up with a good scheme for having your tests be as readable as possible in these reports. This is a scheme from Phil Haack that makes a lot of sense. I adopted a variant of it after reading the post, and I’ve been using it to eliminate duplication in the names of my unit tests. This consideration of where the test names will be read, how, and by whom is important. I’m not being more specific here simply because how you do this exactly will depend on your language, IDE, testing framework, build technology, etc. But the message is the same regardless: make sure that you name your tests in such a way as to maximize the value for those who are reading reports of the test names and results.
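As a rough sketch of the kind of scheme I mean (the same nested-class structure you’ll see in the code in the next section; CustomerService here is a made-up example, and exactly how the names render will depend on your test runner and framework), nesting a class per method under test keeps the method name out of every individual test name:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerServiceTest
{
	// One nested class per method under test, so individual test names don't
	// have to repeat "GetCustomerAddress" over and over in the report.
	// (CustomerService is a hypothetical class used only for this example.)
	[TestClass]
	public class GetCustomerAddress
	{
		[TestMethod]
		public void Returns_Null_When_Customer_Is_Partially_Registered()
		{
			var target = new CustomerService();

			Assert.IsNull(target.GetCustomerAddress("id of a partially registered customer"));
		}
	}
}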

Create an instance field or property called “Target”

This one took a while to grow on me, but it eventually did and it did big time. Take a look at the code below, originally from a series I did on TDD:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BowlingTest
{
	[TestClass]
	public class Constructor
	{

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Initializes_Score_To_Zero()
		{
			var scoreCalculator = new BowlingScoreCalculator();

			Assert.AreEqual(0, scoreCalculator.Score);
		}
	}

	[TestClass]
	public class BowlFrame
	{
		private static BowlingScoreCalculator Target { get; set; }

		[TestInitialize()]
		public void BeforeEachTest()
		{
			Target = new BowlingScoreCalculator();
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void With_Throws_0_And_1_Results_In_Score_1()
		{
			var frame = new Frame(0, 1);
			Target.BowlFrame(frame);

			Assert.AreEqual(frame.FirstThrow + frame.SecondThrow, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void With_Throws_2_And_3_Results_In_Score_5()
		{
			var frame = new Frame(2, 3);
			Target.BowlFrame(frame);

			Assert.AreEqual(frame.FirstThrow + frame.SecondThrow, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Sets_Score_To_2_After_2_Frames_With_Score_Of_1_Each()
		{
			var frame = new Frame(1, 0);
			Target.BowlFrame(frame);
			Target.BowlFrame(frame);

			Assert.AreEqual(frame.Total + frame.Total, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Sets_Score_To_Twenty_After_Spare_Then_Five_Then_Zero()
		{
			var firstFrame = new Frame(9, 1);
			var secondFrame = new Frame(5, 0);

			Target.BowlFrame(firstFrame);
			Target.BowlFrame(secondFrame);

			Assert.AreEqual(20, Target.Score);
		}

		[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
		public void Sets_Score_To_30_After_Strike_Then_Six_Four()
		{
			var firstFrame = new Frame(10, 0);
			var secondFrame = new Frame(6, 4);

			Target.BowlFrame(firstFrame);
			Target.BowlFrame(secondFrame);

			Assert.AreEqual(30, Target.Score);
		}
	}
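
	// The Frame class isn't included in this excerpt from the original TDD series;
	// this is a minimal sketch of what the tests and calculator here assume it does.
	public class Frame
	{
		public const int MaxPins = 10;

		public int FirstThrow { get; private set; }
		public int SecondThrow { get; private set; }

		public int Total { get { return FirstThrow + SecondThrow; } }
		public bool IsStrike { get { return FirstThrow == MaxPins; } }
		public bool IsSpare { get { return !IsStrike && Total == MaxPins; } }

		public Frame(int firstThrow, int secondThrow)
		{
			FirstThrow = firstThrow;
			SecondThrow = secondThrow;
		}
	}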

	public class BowlingScoreCalculator
	{
		private readonly Frame[] _frames = new Frame[10];

		private int _currentFrame;

		private Frame LastFrame { get { return _frames[_currentFrame - 1]; } }

		public int Score { get; private set; }

		public void BowlFrame(Frame frame)
		{
			AddMarkBonuses(frame);

			Score += frame.Total;
			_frames[_currentFrame++] = frame;
		}

		private void AddMarkBonuses(Frame frame)
		{
			if (WasLastFrameAStrike()) Score += frame.Total;
			else if (WasLastFrameASpare()) Score += frame.FirstThrow;
		}

		private bool WasLastFrameAStrike()
		{
			return _currentFrame > 0 && LastFrame.IsStrike;
		}
		private bool WasLastFrameASpare()
		{
			return _currentFrame > 0 && LastFrame.IsSpare;
		}
	}
}

If you look at the nested test class corresponding to the BowlFrame method, you’ll notice that I have a class-level property called Target and that I have a method called “BeforeEachTest” that runs at the start of each test and instantiates Target. I used to be more of a purist in wanting all unit test methods to be completely and utterly self-contained, but after a while, I couldn’t deny the readability of this approach.

Using “Target” cuts out at least one line of pointless (and repetitive) instantiation inside each test and it also unifies the naming of the thing you’re testing. In other words, throughout the entire test class, interaction with the class under test is extremely obvious. Another ancillary benefit to this approach is that if you need to change the instantiation logic by, say, adding a constructor parameter, you do it one place only and you don’t have to go limping all over your test class, doing it everywhere.

I highly recommend that you consider adopting this convention for your tests.

Use the test initialize (and tear-down) for intuitive naming and semantics.

Along these same lines, I recommend giving some consideration to test initialization and tear-down, if necessary. I name these methods BeforeEachTest and AfterEachTest for the sake of clarity. In the previous section, I talked about this for instantiating Target, but this is also a good place to instantiate other common dependencies, such as mock objects, or to build up friendly instances that you pass to constructors and methods.

This approach also creates a unified and symmetric feel in your test classes, and that kind of predictability tends to be invaluable. People will readily fire up the debugger on production code, but they’re far more likely to try to understand a unit test just by reading it, so predictability here is as important as it is anywhere.

Keep Your Mind on AAA

AAA. “Arrange, Act, Assert.” Think of your unit tests in these terms at all times and you’ll do well (I once referred to this as “setup, poke, verify”). The basic anatomy of a unit test is that you set up the world that you’re testing to exist in some situation that matters to you, then you do something, then you verify that what you did produced the result you expect. A real world equivalent might be that you put a metal rod in the freezer for 2 hours (arrange), take it out and stick your tongue on it (act), and verify that you can’t remove your tongue from it (assert).

If you don’t think of your tests this way, they tend to meander a lot. You’ll do things like run through lists of arguments checking for exceptions or calling a rambling series of methods to make sure “nothing bad happens.” This is the unit test equivalent of babbling, and you don’t want to do that. Each test should have some specific, detailed arrangement, some easily describable action, and some clear assertion.
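Here’s what that anatomy looks like in a trivial, hypothetical test (the StringCalculator class is made up for the sake of the example):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StringCalculatorTest
{
	// StringCalculator is a hypothetical class under test, shown only to
	// illustrate the arrange/act/assert structure.
	[TestMethod]
	public void Add_Returns_7_When_Given_3_And_4()
	{
		// Arrange: set up the world the test lives in.
		var calculator = new StringCalculator();

		// Act: do the one thing the test is about.
		var result = calculator.Add("3,4");

		// Assert: verify the action produced the expected result.
		Assert.AreEqual(7, result);
	}
}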

Keep your instantiation logic in one place

In a previous section, I suggested using the test runner’s initialize method to do this, but the important thing is that you do it, somehow. I have lived the pain of having to do a find-and-replace, or other even more manual corrections, when modifying constructors for classes that I was instantiating in every unit test, across dozens or even hundreds of tests.

Your unit test code is no different than production code in that duplication is your enemy. If you’re instantiating your class under test again and again and again, you’re going to suffer when you need to change the instantiation logic or else you’re going to avoid changing it to avoid suffering, and altering your design to make copy and paste programming less painful is like treating an infected cut by drinking alcohol until you black out and forget about your infected cut.
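If a test initialize method doesn’t suit you, even a simple factory method accomplishes the same thing. The OrderProcessor class, its collaborators, and its constructor arguments below are all invented for illustration; the point is that there is exactly one place that knows how to construct the class under test:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderProcessorTest
{
	// OrderProcessor, FakeNotifier, and Order are hypothetical, for illustration.
	// This is the one and only place that knows how to construct the class under
	// test; when the constructor changes, this method changes, not dozens of tests.
	private static OrderProcessor BuildTarget()
	{
		return new OrderProcessor(new FakeNotifier(), taxRate: 0.06m);
	}

	[TestMethod]
	public void Process_Does_Not_Throw_On_Empty_Order()
	{
		var target = BuildTarget();

		target.Process(new Order());
	}
}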

Don’t be exhaustive (you don’t need to test all inputs — just interesting ones)

One thing I’ve seen occasionally with people new to unit testing, and especially to getting their heads around TDD, is a desire to start exhaustively unit testing for all inputs. For instance, let’s say you’re implementing a prime number finder, as I described in my Pluralsight course on NCrunch and continuous testing. At what point have you written enough tests for the prime finder? Is it when you’ve tested all inputs, 1 through 10? 1 through 100? All 32-bit integers?

I strongly advise against the desire to do any of these things, or even to write some test that iterates through a series of values in a loop, testing each one. Instead, write as many tests as you need to tease out the algorithm if you’re following TDD and, in the end, have as many tests as you need to cover the interesting cases that you can think of. For me, off the top of my head (TDD notwithstanding), I might pick a handful of primes to test and a handful of composite numbers. So, maybe one small prime and one small composite, and then one large one of each that I looked up on the internet somewhere. There are other interesting values as well, such as negative numbers, one, and zero. I’d make sure it behaved correctly for each of these cases and then move on.

It might take some practice to fight the feeling that this is insufficient coverage, but you have to think less in terms of the set of all possible inputs and more in terms of the set of paths through your code. Test out corner cases, oddball conditions, potential “off by one” situations, and maybe one or two standard sorts of inputs. And remember, if later some kind of bug or deficiency is discovered, you can always add more test cases to your test suite. Your test suite is an asset, but it’s also code that must be maintained. Don’t overdo it — test as much as necessary to make your intentions clear and guard against regressions, but not more than is needed.
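For the prime finder, the “interesting cases” suite might end up looking something like this (PrimeFinder and its IsPrime method are placeholders for whatever you actually build):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PrimeFinderTest
{
	// PrimeFinder is a placeholder class under test for this example.
	private static PrimeFinder Target { get; set; }

	[TestInitialize]
	public void BeforeEachTest()
	{
		Target = new PrimeFinder();
	}

	// A handful of interesting inputs, not an exhaustive sweep of the integers.
	[TestMethod]
	public void Returns_True_For_Small_Prime()
	{
		Assert.IsTrue(Target.IsPrime(7));
	}

	[TestMethod]
	public void Returns_True_For_Large_Prime()
	{
		Assert.IsTrue(Target.IsPrime(104729));
	}

	[TestMethod]
	public void Returns_False_For_Small_Composite()
	{
		Assert.IsFalse(Target.IsPrime(9));
	}

	[TestMethod]
	public void Returns_False_For_One()
	{
		Assert.IsFalse(Target.IsPrime(1));
	}

	[TestMethod]
	public void Returns_False_For_Zero()
	{
		Assert.IsFalse(Target.IsPrime(0));
	}

	[TestMethod]
	public void Returns_False_For_Negative_Number()
	{
		Assert.IsFalse(Target.IsPrime(-7));
	}
}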

Use a continuous testing tool like NCrunch

If you want to see just how powerful a continuous testing tool is, check out that Pluralsight video I did. Continuous testing is a game changer. If you aren’t familiar with continuous testing, you can read about it at the NCrunch website. The gist of it is that you get live, real-time feedback as to whether your unit tests are passing as you type.

Let that sink in for a minute: no running a unit test runner, no executing the tests in the IDE, and not even any building of the code. As you type, from one character to the next, the unit tests execute constantly and give you instantaneous feedback as to whether you’re breaking things or not. So, if you wander into your production code and delete a line, you should expect that you’ll suddenly see red on your screen because you’re breaking things (assuming you don’t have useless lines of code).

I cannot overstate how much this will improve your efficiency. You will never go back once you get used to this.

Unit Tests Instead of the Console

Use unit tests instead of the console or whatever else you might use to do experimentation (get comfortable with the format of the tests). Most developers have some quick way of doing experiments — scratchpads, if you will. If you make yours a unit test project, you’ll get used to having unit tests as your primary feedback mechanism.

In the simplest sense, this is practice with the unit test paradigm, and that never hurts. In a more philosophical sense, you’re starting to think of your code as a series of entry points that you can use for inspection and for determining how things interact. And that’s the real, longer-term value — an understanding that good design involves seams in the code, and unit tests let you access those seams.
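For instance, instead of spinning up a throwaway console project to check how a framework call behaves, you can jot the question down as a test and keep the answer around (this one just pokes at DateTime.AddMonths and isn’t meant as a spec for anything you wrote):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ScratchpadTests
{
	[TestMethod]
	public void What_Does_AddMonths_Do_At_The_End_Of_January()
	{
		// A quick, repeatable experiment with framework behavior: AddMonths
		// clamps to the last valid day of the resulting month.
		var endOfJanuary = new DateTime(2013, 1, 31);

		var oneMonthLater = endOfJanuary.AddMonths(1);

		Assert.AreEqual(new DateTime(2013, 2, 28), oneMonthLater);
	}
}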

Get familiar with all of the keyboard shortcuts

Again, this is going to vary based on your environment, but make sure to learn whatever keyboard shortcuts and things your IDE offers to speed up test management and execution. The faster you are with the tests, the more frequently you’ll execute them, the more you’ll rely on them, and the more you’ll practice with them.

Your unit test suite should be a handy, well-worn tool and a comfortable safety blanket. It should feel right and be convenient and accessible. So anything you can do to wear it in, so to speak, will expedite this feeling. Take the time to learn these shortcuts and practice them — you won’t regret it. Even if you have a continuous testing tool, you can benefit from learning the most efficient way to use it. Improvement is always possible.

General Advice

Cliché as it may sound, the best tip I can give you overall is to practice, practice, practice. Testing will be annoying and awkward at first, but it will become increasingly second nature as you practice. I know that it can be hard to get into, or easy to leave by the wayside when the chips are down and you’re staring at a deadline, but practice will mitigate both problems. You’ll grow less frustrated and watch the barriers to entry get lower and, as you get good, you won’t feel any inclination to stop unit testing when the stakes are high. In fact, with enough practice, that’s when you’ll feel it’s most important. You will get there with enough effort — I promise.


Lessons in Good Naming through Absurdity

There’s something I’ve found myself stressing a lot lately to people on my team. It’s a thing that probably seems to most people like nitpicking but that I think is one of the most, if not the most, under-stressed aspects of programming. I’m talking about picking good names for things.

I’ve seen that people often give methods, variables, and classes abbreviated names. This has roots in times when saving characters actually mattered. Methods in C had names like strcat (concatenate strings), and that seems to have carried forward to modern languages despite modern resource situations. The reader of the method is left to try to piece together what the abbreviation means, like the recipient of a text message from someone who thinks that teenager text-speak is cute.

There are other naming issues that occur as well. Ambiguity is a common one, where methods have names like “OnClick” or even “DoStuff.” You’ll also have methods that are occasionally misleading — a method called “ReadRecords” that reads in some records and then actually updates them as well. Giving this a simple name like “ReadAndUpdateRecords” would take care of this, but people don’t do it. There are other examples as well.

All of this probably seems like nitpicking, as I said a moment ago, but I contend that it isn’t. Code is read way, way more often than it is written or modified, and it’s usually read by people who don’t really understand what was going through the coder’s mind at the time of writing. (This can even include the original coder, revisiting his own code weeks or months later.) Anything that furthers understanding saves those readers minutes or even hours of puzzling things out. Methods with names that accurately advertise what they do save the maintenance programmer from needing to examine the implementation to see how they work. When a standard like this holds through the entirety of the code base, the amount of time saved by not having to study implementations is huge.

Toward achieving this goal, one idea I had was a “naming audit.” This activity would consist of the team assembling for an hour or so, perhaps over pizza at lunch or in the evening, and going through the code base looking at all of the names of methods, variables, and classes. Any names that weren’t accurate or sufficiently descriptive would be changed by the group to something that was clear to all. I think the ROI on this approach would be surprisingly high.

But if you can’t do that, maybe a more distributed approach would work — one that combines the best elements of shaming and good-natured ribbing, like having the person who breaks the build buy lunch. So maybe any time you encounter a poorly named method in the code, you rename it to something ridiculous to underscore the naming problem that preceded it. You bring this to the method author’s attention and demand a better name. Imagine seeing your methods renamed to things like:

  • ShockAndAwe()
  • Blammo(int x)
  • Beat(int deadHorse)
  • Onoes(string z)
  • UseTheForceLuke()

I’m not entirely sure if I’m serious about this or not, but it would make for an interesting experiment. Absurd names kind of underscore the “dude, what is this supposed to mean?” point that I’m making, and it’s a strategy other than endless nagging, which isn’t really my style. I’m not sure if I’ll try this out, but please feel free to do so yourself and, if you do, let me know how it goes.