DaedTech

Stories about Software

Chess TDD 61: Testing an Actual Game

Editorial Note: I was featured on another podcast this week, this one hosted by Pete Shearer.  Click here to give it a listen.  It mostly centers around the expert beginner concept and what it means in the software world.  I had fun recording it, so hopefully you’ll have fun listening.

This post is one where, in earnest, I start testing an actual game.  I don’t get as far as I might like, but the concept is there.  By the end of the episode, I have acceptance tests covering all initial white moves and positions, so that’s a good start.  And, with the test constructs I created, it won’t take long to be able to say the same about the black pieces.
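
To give a flavor of what those acceptance tests look like, here is a minimal sketch in C# with MSTest.  The Board, Pawn, and BoardCoordinate names approximate the ones in the series’ code base, but the specific signatures used here (AddPiece, GetMovesFrom) and the value equality on BoardCoordinate are assumptions for illustration, not necessarily the real API.

    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class WhiteOpeningMoves
    {
        [TestMethod]
        public void Pawn_on_e2_can_advance_one_or_two_squares()
        {
            // Coordinates are (file, rank), 1-based, so e2 is (5, 2).
            var board = new Board();
            board.AddPiece(new Pawn(isBlack: false), new BoardCoordinate(5, 2));

            var moves = board.GetMovesFrom(new BoardCoordinate(5, 2)).ToList();

            // On an otherwise empty board, a pawn on its home rank has exactly
            // the single and double advance available: e3 and e4.
            CollectionAssert.Contains(moves, new BoardCoordinate(5, 3));
            CollectionAssert.Contains(moves, new BoardCoordinate(5, 4));
            Assert.AreEqual(2, moves.Count);
        }
    }

Written this way, each test reads as a statement about the game rather than about the implementation, which is what keeps acceptance tests readable by humans.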

I also learned that building out all moves for an entire chess game would be quite an intense task if done manually.  So, I’d be faced with the choice between recording a lot of grunt work and implementing a sophisticated game parsing scheme, which I’d now consider out of scope.  As a result, I’ll probably try to pick some other, representative scenarios and go through those so that we can wrap the series.

What I accomplish in this clip:

  • Get re-situated after a hiatus and clean up/reorganize old cards.
  • Take care of a few odds and ends and lay the groundwork for the broader acceptance testing.

Here are some lessons to take away:

  • No matter where they occur, try to avoid doing redundant things when you’re programming.
  • If, during the course of your work, you ever find yourself bored or on “auto-pilot,” that’s a smell.  You’re probably working harder instead of smarter.  When you find yourself bored, ask yourself how you might automate or abstract what you’re doing.
  • When you’re writing acceptance tests, it’s important to keep them readable by humans.
  • A seldom-considered benefit to pairing or having someone review your coding is that you’ll be less inclined to do a lot of laborious, obtuse work.
  • Asserting things in control flow scopes can be a problem — if you’re at iteration 6 out of 8 in a while loop when things fail, it’s pretty hard to tell that when you’re running the whole test suite.
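
To make that last point concrete, here is a small, self-contained MSTest sketch; the board-state array and the scenario are invented for illustration.  The first test fails with nothing but “Assert.IsTrue failed,” while the second tells you exactly which iteration broke.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class AssertingInsideLoops
    {
        // Stand-in for real production state; purely illustrative.
        private readonly bool[] _squareIsEmpty = { true, true, true, true, true, false, true, true };

        [TestMethod]
        public void Opaque_failure_inside_a_loop()
        {
            // When this fails, the output won't say it was iteration 5 that did it.
            for (int i = 0; i < _squareIsEmpty.Length; i++)
                Assert.IsTrue(_squareIsEmpty[i]);
        }

        [TestMethod]
        public void Failure_that_names_the_iteration()
        {
            // Same check, but the failure message identifies the offending square.
            for (int i = 0; i < _squareIsEmpty.Length; i++)
                Assert.IsTrue(_squareIsEmpty[i], $"Square {i} was unexpectedly occupied.");
        }
    }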

Chess TDD 47: En Passant in the Other Direction

It’s been a while since the last episode, due mainly to my wedding and honeymoon, but here I am, back in action.  For some reason, there were a few audio glitches in the recording of this episode.  I did just upgrade to Windows 10, so maybe my version of Camtasia isn’t playing nicely with it or something.  My apologies, but I think it’s just mildly annoying and not a problem for understanding.  This episode is pretty straightforward.  I performed a little bit of cleanup on naming and then finished up by implementing en passant in the other direction.

What I accomplish in this clip:

  • Fixed some naming hangover from last episode.
  • Got en passant working in the other direction: white pieces capturing black pieces.
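
As a rough picture of what “the other direction” looks like as a test, here is a hedged sketch.  The Board, Pawn, and BoardCoordinate names and the AddPiece/MovePiece/GetMovesFrom signatures are stand-ins for whatever the real code base exposes, not a transcript of it.

    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class EnPassantWhiteCapturesBlack
    {
        [TestMethod]
        public void White_pawn_on_e5_can_capture_d5_pawn_en_passant()
        {
            // Coordinates are (file, rank), 1-based.
            var board = new Board();
            board.AddPiece(new Pawn(isBlack: false), new BoardCoordinate(5, 5)); // white pawn on e5
            board.AddPiece(new Pawn(isBlack: true), new BoardCoordinate(4, 7));  // black pawn on d7

            // Black advances two squares, landing beside the white pawn.
            board.MovePiece(new BoardCoordinate(4, 7), new BoardCoordinate(4, 5));

            var moves = board.GetMovesFrom(new BoardCoordinate(5, 5));

            // The capture square is d6, the square the black pawn skipped over.
            Assert.IsTrue(moves.Contains(new BoardCoordinate(4, 6)));
        }
    }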

Here are some lessons to take away:

  • I’ve said it more times than I can count, but naming is so important.  Spend extra time with it.  Revisit it.  Make your methods as readable as prose.
  • When you’re test driving (or, at least, making sure to write a lot of automated tests), the kinds of refactorings that a lot of people promise to do ‘later’ and never do turn out to be a lot easier.  You’re much more likely to deliver on promises to clean up later.
  • If you have logic that you want to test and it’s 2, 3, or more calls deep in private methods, this is often a sign that you should extract a class and have two different public surface areas to test (see the sketch after this list).
  • If you can delete a line of code in your code base and no test goes red, you’ve failed at some point in your test driving.  Either you’ve deleted a test that should exist or else you’ve added functionality to prod without having a failing test that requires that addition.  It happens to the best of us, but understand that it’s a problem and either delete that code or add a test that makes it necessary.
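
Here is the sketch promised in the private-methods bullet above: a bare-bones before-and-after with invented names (BoardCoordinate is assumed, and MoveLegalityEvaluator is not a real class from the series), just to show the shape of the refactoring.

    // Before: the interesting rule is buried in private helpers, so the only
    // way to test it is indirectly, by poking at MovePiece and inferring.
    public class Board
    {
        public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
        {
            if (!IsMoveLegal(origin, destination))
                return;
            // ...update internal state...
        }

        private bool IsMoveLegal(BoardCoordinate origin, BoardCoordinate destination)
            => IsOnBoard(destination);

        private bool IsOnBoard(BoardCoordinate coordinate)
            => coordinate.X >= 1 && coordinate.X <= 8 && coordinate.Y >= 1 && coordinate.Y <= 8;
    }

    // After: the buried rule gets its own class and thus its own public,
    // directly testable surface area, while Board simply delegates to it.
    public class MoveLegalityEvaluator
    {
        public bool IsOnBoard(BoardCoordinate coordinate)
            => coordinate.X >= 1 && coordinate.X <= 8 && coordinate.Y >= 1 && coordinate.Y <= 8;
    }

Now the legality rule can be tested through MoveLegalityEvaluator directly, without arranging an entire board just to reach it.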

Chess TDD 46: En Passant Expiration Date

This installment of Chess TDD was another episode focused on en passant.  It’s been a while since I was in this code base.  I’d been on the road a lot and without my main development/recording rig.  But now I’m back and was able to take the opportunity to record a little Chess TDD ahead of my wedding next week.  This episode went pretty smoothly.  I cleaned up a method from last time that had become unwieldy and then got the next acceptance test to pass without any flailing.  Not bad for so much time between episodes!

What I accomplish in this clip:

  • Refactored the Board.MovePiece method away from mounting ickiness.
  • Implemented functionality to prevent en passant from lasting beyond one turn.
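
For a sense of what that expiration looks like as a test, here is a hedged sketch; as in the other sketches, the Board/Pawn/BoardCoordinate API is an approximation, not the code base’s actual signatures.

    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class EnPassantExpiration
    {
        [TestMethod]
        public void En_passant_capture_is_gone_after_one_turn()
        {
            var board = new Board();
            board.AddPiece(new Pawn(isBlack: true), new BoardCoordinate(4, 4));  // black pawn on d4
            board.AddPiece(new Pawn(isBlack: false), new BoardCoordinate(5, 2)); // white pawn on e2
            board.AddPiece(new Pawn(isBlack: true), new BoardCoordinate(8, 7));  // spare black pawn on h7

            // White's two-square advance makes the d4 pawn eligible to capture en passant on e3...
            board.MovePiece(new BoardCoordinate(5, 2), new BoardCoordinate(5, 4));
            Assert.IsTrue(board.GetMovesFrom(new BoardCoordinate(4, 4)).Contains(new BoardCoordinate(5, 3)));

            // ...but if black plays something else instead, the opportunity expires.
            board.MovePiece(new BoardCoordinate(8, 7), new BoardCoordinate(8, 6));
            Assert.IsFalse(board.GetMovesFrom(new BoardCoordinate(4, 4)).Contains(new BoardCoordinate(5, 3)));
        }
    }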

Here are some lessons to take away:

  • When extracting methods during a refactor, if you have a hard time giving the new method a coherent name, it might be a sign that you’ve selected non-cohesive functionality to extract.
  • Spend a few extra brain cycles contemplating the names that you give things.  This investment will pay off because it’s easier to give things good names when their functionality is fresh in your mind.
  • If you start to have code that no longer seems to belong in a class, make sure you’re keeping that visible to yourself as you go so that you can have a nice cue for refactoring.
  • Take care to keep communication between methods and classes at an appropriate level of abstraction.  Make sure they’re communicating in obvious domain concepts.
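
The last bullet is easiest to see in a signature.  The example below is illustrative rather than lifted from the series; BoardCoordinate stands in for whatever small domain type plays that role.

    public class Board
    {
        // Primitive obsession: callers have to remember that the four ints mean
        // "from (x1, y1) to (x2, y2)," and it's easy to transpose them silently.
        public void MovePiece(int originX, int originY, int destinationX, int destinationY)
        {
            MovePiece(new BoardCoordinate(originX, originY), new BoardCoordinate(destinationX, destinationY));
        }

        // Domain-level communication: the signature itself speaks in chess concepts.
        public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
        {
            // ...move the piece...
        }
    }

    // A small value type that names the concept the loose ints were standing in for.
    public struct BoardCoordinate
    {
        public int X { get; }
        public int Y { get; }

        public BoardCoordinate(int x, int y)
        {
            X = x;
            Y = y;
        }
    }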

ChessTDD 37: Cleaning Up and Implementing Rook

Got back in the saddle with this episode, and had a shorter recording that basically went well.  Someone pointed out in the comments that I had reversed king and queen position for the black pieces, so I started off by fixing that, and then moved on to fixing a bug and finally implementing the Rook acceptance tests.  Wrapped it up in relatively short order, too.

What I accomplish in these clips:

  • Fixed my template for the full chess board.
  • Fixed the vertical version of that reverse bug from last time.
  • Implemented rook acceptance tests.

Here are some lessons to take away:

  • It’s important to understand past shortcomings of your code so that you can form good hypotheses about where you might have other potential weak spots.
  • If you’re going to run an experiment on your code, do it from a test that you’re writing instead of through the GUI or the debugger or something.  You’re running the experiment anyway, so you may as well capture the results in the form of an automated regression test (see the sketch after this list).
  • A lack of test coverage for a line of code isn’t a problem in and of itself.  Coverage is a metric and not a goal.  The problem with having code not covered by tests is that there’s nothing to prove that the code was implemented in a thoughtful, deliberate way.  If you don’t have a test expressing what a path through the code is supposed to do, there’s no way to know if you’ve reasoned about that code.
  • Copying, pasting, and adjusting often winds up taking longer than typing by hand.  You spend more time staring dumbly at the screen trying to figure out what’s wrong than it would have taken you to hand type the code you need.
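
To put the “experiment as a test” point from the list above into code: instead of stepping through the debugger to see what move generation does for a rook, the same question becomes a permanent regression test.  As with the other sketches, the Board/Rook API shown here is an assumption, not the code base’s literal surface.

    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class RookMoveExperiments
    {
        [TestMethod]
        public void Rook_in_corner_of_empty_board_has_14_moves()
        {
            // The "experiment": what does move generation actually return for a rook on a1?
            var board = new Board();
            board.AddPiece(new Rook(isBlack: false), new BoardCoordinate(1, 1));

            var moves = board.GetMovesFrom(new BoardCoordinate(1, 1));

            // Seven squares up the a-file plus seven along the first rank.
            Assert.AreEqual(14, moves.Count());
        }
    }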