DaedTech

Stories about Software


Chess TDD 57: Finished Threat Evaluator

Once again, it’s been a while since the last episode in the series.  This time the culprit has been a relocation for the remainder of the winter, which meant I was dealing with moving-like logistics.  However, I’m holed up now in the south, where I’ll spend the winter working on my book, working remotely, and hopefully wrangling this Chess TDD series to a conclusion.  In this episode, I finished the threat evaluator, or at least got pretty close.  I called it a wrap around the 20-minute mark, without having written additional unit or acceptance tests to satisfy myself that it works in a variety of cases.  It now correctly understands that some moves aren’t threatening (e.g. a pawn moving straight ahead) and that pieces of the same color do not threaten one another.
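
For readers who skip the video, here’s a rough sketch of those two rules in code.  This is a simplification with made-up types and member names, not the episode’s actual threat evaluator implementation.

    // Simplified sketch of the two rules from this episode; the types and
    // signatures here are illustrative stand-ins, not the series' real code.
    public enum PieceColor { White, Black }

    public class ThreatEvaluatorSketch
    {
        // A move threatens a square only if the mover could capture there.
        public bool IsThreatening(PieceColor mover, PieceColor? occupant,
            bool moverIsPawn, bool isStraightAheadMove)
        {
            // A pawn moving straight ahead cannot capture on that square,
            // so the move is not a threat.
            if (moverIsPawn && isStraightAheadMove)
                return false;

            // Pieces of the same color do not threaten one another.
            if (occupant.HasValue && occupant.Value == mover)
                return false;

            return true;
        }
    }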

Also, a quick editorial note.  There’s been some pretty intense wind here in Louisiana, and the power went out around the 18:00 mark.  I was pleasantly surprised that all of the screencasting was saved without issue, but VS barfed and filled my source file with all sorts of nulls.  I had to revert changes and redo them by hand prior to picking up where I left off.  So it’s conceivable that a word might be spelled slightly differently or something.  I’m not pulling a fast one; just dealing with adverse circumstances.

What I accomplish in this clip:

  • Finished the threat evaluator (initial implementation).
  • Won a decisive battle in the age-old man vs. nature conflict.

Here are some lessons to take away:

  • Save early and often.
  • Writing a test is a good, exploratory way to get back up to speed and see where you left off with an implementation.  Write a test that needs to pass, and then, if it fails, make it pass.  If it passes, then, hey, great, because it’s a good test to have anyway.  This (writing tests to see where you left off) is not to be confused with writing a test that you expect to fail and seeing it pass.
  • Anything you can do to tighten up the automated test feedback loop is critical (like NCrunch).  The idea here is to get in the habit of using unit tests as the quickest, most efficient way to corroborate or disprove your understanding of the code.  Run experiments!
  • If, while getting a red test green, you have an idea for a more elegant refactoring, store that away until you’ve done something quick and simple to get to green.  Don’t go for the gold all at once.  Get it working, then make it elegant (see the sketch after this list).
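
As an illustration of that last bullet, here’s the sort of first pass I mean.  The example is contrived and the names are mine, but the point is that a crude conditional gets the bar green, and the elegance comes afterward.

    // Contrived sketch of "get to green quickly, then make it elegant."
    // Suppose a new test says a pawn on its home row may advance two squares.
    public class PawnSketch
    {
        // First pass: the crudest conditional that turns the test green.
        public bool CanMoveTwoSquares(int currentRow, bool isWhite)
        {
            if (isWhite && currentRow == 2)
                return true;
            if (!isWhite && currentRow == 7)
                return true;
            return false;
        }

        // Only after the test is green do I go back for the more elegant
        // refactoring (e.g., deriving the home row from the piece's color
        // in one place, or folding this into general move generation).
    }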


Chess TDD 56: Threatened Pieces

It’s been a while since my last post in this series, and for that I kind of apologize.  Normally, I’m apologizing because I’ve had too much to do, and the gap leaves things a little disjointed.  But this time, I took a break for the holidays and left off at a natural stopping point, so the disjointedness is kind of moot.  I apologize only that it’s been a while, but I don’t feel culpable.  Anyway, in this episode, I start to tackle the concept of check by taking on the slightly more abstract concept of “threatened pieces.”

What I accomplish in this clip:

  • Started working on the concept of check.
  • Laid the foundation for a general evaluation of whether a square on the board is threatened.

Here are some lessons to take away:

  • One of the best ways to keep things moving efficiently when coding is to find ways to slice off bits of new functionality in new classes.  Getting back to TDD around a new class makes it very easy to implement functionality.  The trick is in figuring out logical seams to do this, and that comes with time and practice.
  • Classes without mutable state are a lot easier to reason about, to test, and to work with.
  • Passing booleans into methods is a smell, because usually it means that you’re passing in a control flow concern to tell the method how to behave.  In this episode, I have a boolean argument that is actually a conceptual piece of data, so it’s not problematic vis-à-vis the single responsibility principle, but it is, nevertheless, a smell to keep an eye on and, perhaps, to move away from later (see the sketch after this list).
  • During TDD, it is fine (and even expected) to do obtuse things to get the early tests to pass, but only if each thing that you’re doing advances you toward the eventual solution and is sequentially less obtuse.  That part is critical.
  • A good trailing indicator of whether you’re biting off too much implementation with each test is what happens when you finish your changes to the production code.  Do all tests immediately go green, or do you have unexpected failures that force you to tweak and tinker with the code you’ve written?  Immediate green is a sign you’re in the Goldilocks zone, while tinkering is a sign that you’re biting off more than you can chew.
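
To illustrate the boolean distinction from the third bullet, here’s a contrived contrast (my names, not the episode’s code) between a boolean that is genuinely a piece of data and one that is a control flag.

    // Contrived contrast for the boolean-parameter smell.
    public class BooleanParameterSketch
    {
        // Here the boolean is conceptual data ("which color are we asking
        // about?"), and the method does the same work either way.
        public int PawnDirection(bool isWhite)
        {
            return isWhite ? 1 : -1;
        }

        // Here the boolean is a control flag: the caller dictates which of
        // two behaviors the method performs, which hints at two methods
        // (or two collaborators) crammed into one.
        public void Evaluate(bool alsoCheckThreats)
        {
            // ...core evaluation...
            if (alsoCheckThreats)
            {
                // ...an entirely separate responsibility...
            }
        }
    }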


Chess TDD 55: Got the Hang of Castling

In this episode, it seems I finally got the hang of castling.  It’s been a relatively long journey with it, but I can now successfully detect castling situations that involve any combination of rook/king prior movement, as well as interceding pieces.  The only thing left is the relative edge case of castling through check, but I’ll consider that to be part of the implementation of the check concept.
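
For the record, here’s roughly what the checker now understands, sketched in simplified form.  The shape below is my shorthand and not the actual CastlingStatusChecker code.

    using System.Collections.Generic;
    using System.Linq;

    // Simplified sketch of the castling conditions handled so far; castling
    // through check is deliberately deferred to the upcoming check concept.
    public class CastlingConditionsSketch
    {
        public bool CanCastle(bool kingHasMoved, bool rookHasMoved,
            IEnumerable<int> occupiedSquaresBetweenKingAndRook)
        {
            // Prior movement of either piece rules castling out.
            if (kingHasMoved || rookHasMoved)
                return false;

            // Any interceding piece rules it out as well.
            if (occupiedSquaresBetweenKingAndRook.Any())
                return false;

            return true;
        }
    }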

What I accomplish in this clip:

  • Re-hydrated the acceptance test that I’d left off with last time and got it passing.
  • Refactored toward a less naive implementation of blocking, and then got it passing for all cases on both sides.

Here are some lessons to take away:

  • If you’ve written a failing acceptance test and are struggling to see how to get it to pass, see if you can express the same scenario with a more granular unit test (see the sketch after this list).  This can focus you and help prevent your brain from spinning.
  • If you have the feeling that you’ve implemented something before (or a teammate has), it is definitely worth pausing to investigate.  You don’t want oddball solutions to the same problem, which can be as damaging as copy and paste programming.
  • You’ll never get away from making mistakes of all sorts.  Take note how in this episode, I probably got a bit too ambitious with solving the whole problem at once, which led to a lot of staring at the screen.  I probably should have made use of more failing tests to get there more gradually.
  • When you feel that you have coder’s block and aren’t sure what to do next, that’s the best time to implement something naive and ugly.  There’s no need to be afraid, since you’ll refactor shortly, but just getting anything out there can un-stick you.
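
Regarding the first bullet, here’s the kind of drop in granularity I mean.  The checker below is a stand-in with made-up members so the example compiles (only the CastlingStatusChecker idea comes from the series), and I’m using MSTest-style attributes.

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Stand-in class for illustration; the real checker's API differs.
    public class CastlingCheckerStandIn
    {
        public IEnumerable<string> GetCastlingMoves(bool kingHasMoved)
        {
            return kingHasMoved ? Enumerable.Empty<string>() : new[] { "c1", "g1" };
        }
    }

    [TestClass]
    public class When_the_king_has_already_moved
    {
        // The stuck acceptance test builds a whole board and plays several
        // moves; this expresses the same idea one level down.
        [TestMethod]
        public void Checker_returns_no_castling_moves()
        {
            var checker = new CastlingCheckerStandIn();

            var moves = checker.GetCastlingMoves(kingHasMoved: true);

            Assert.IsFalse(moves.Any());
        }
    }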


Chess TDD 54: Castling Acceptance Tests

The last few episodes featured heads-down implementation around this new CastlingStatusChecker class.  It was nice to spend some time writing relatively simple unit tests to restore some sanity and get away from an overly-coupled Board class.  It was time, though, to come up for air and get back to proving some business/domain value.  Toward that end, I found myself writing castling acceptance tests.  But before I could start on those, I cleaned up the constants that had grown a little awkward after my re-imagined implementations of the status checker.  All sorted out now.
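
By way of illustration, the cleanup amounted to something like the following (hypothetical names; the actual constants in the episode differ).  The point is that raw coordinates scattered across the checker and its tests get intention-revealing names.

    // Hypothetical flavor of the constants cleanup, not the episode's diff.
    public static class CastlingConstantsSketch
    {
        // Instead of repeating raw rows and columns throughout the checker
        // and its tests, each concept gets one named definition.
        public const int WhiteBackRow = 1;
        public const int BlackBackRow = 8;
        public const int KingColumn = 5;
        public const int QueensideRookColumn = 1;
        public const int KingsideRookColumn = 8;
    }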

What I accomplish in this clip:

  • Refactored constants in the castling status checker and the test class I was using to drive its implementation.
  • Started on the castling acceptance tests, which worked for both black and white pieces.

Here are some lessons to take away:

  • There’s no exact science to climbing up and down the granularity levels.  The heuristic I offer in this episode is, essentially, “prefer unit tests because there’s less setup and faster feedback, but come up for air periodically and use acceptance tests to verify that you’re adding business value.”
  • Make your code read plainly, like prose, in production.  When it comes to writing unit tests, go a step further and act as if you were writing a Word document about your code (maybe not literally, but you get the idea).  The unit tests are where you explain how the system works (see the sketch after this list).
  • It’s perfectly reasonable to write a series of acceptance tests that you expect to go green.  These acceptance tests are how you document and prove functionality to non-technical stakeholders if they have an appetite for reading and collaborating on them.  So, by all means, take the time to demo the app’s behavior by writing as many of these as you need to showcase what it’s doing.
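
To give a flavor of the second bullet, here’s a hypothetical naming scheme (not the episode’s actual test class) where the class and method names together read like the documentation sentence I want a reader to take away; the body is elided.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical naming illustration: the class and method names together
    // read like a sentence of documentation about the system's behavior.
    [TestClass]
    public class Given_a_board_where_neither_king_nor_rooks_have_moved
    {
        [TestMethod]
        public void White_king_can_castle_on_both_sides()
        {
            // Arrange a fresh board with the squares between king and rooks
            // cleared, ask the checker for castling moves, and assert that
            // both destinations come back.  (Body elided for brevity.)
        }
    }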


Chess TDD 53: Castling for Black Pieces

After a couple of episodes working on castling for white, and then an episode refactoring that implementation, it’s finally time to implement castling for black pieces.  This actually went pretty well.  The groundwork that I had laid in the previous episodes made life a lot easier for me here.  It wasn’t perfect by the end, but it’s getting encouragingly close to “good enough to move on.”  With castling and en passant both soon to be in hand, it may be time to move into cleanup mode with more elaborate acceptance scenarios.  A light at the end of the tunnel?

What I accomplish in this clip:

  • Finished basic implementation of castling checker.

Here are some lessons to take away:

  • It’s pretty rare to extract a class, find testing a lot easier on it, and then, at the end, come to regret the extraction.  When you find yourself banging your head against the wall in a growing iceberg class, extract!  (See the sketch after this list.)
  • Refactoring to clean code makes it a lot easier to hit the ground running when you come back to your code after a while.
  • The goal of test driving a production code base is not to address all possible inputs to the thing under test.  You test drive until the implementation is done.  If you continue to write green tests after that, you’re no longer test driving.  There’s nothing wrong with this activity, but take care that you’re not adding cruft instead of value to your code base.
  • I wrote maybe 10 tests to drive the implementation of castling for white pieces, and then something like 3 tests to make it also apply to black pieces.  This is normal.  Just because I wrote 10 tests to get 50% of the functionality does NOT mean that I have to write another 10 tests to get the other 50%.
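
As a reminder of what that extraction looked like in shape (simplified, with illustrative member names rather than the series’ exact code), castling logic came out of the bloated Board and into its own small collaborator.

    // Simplified shape of the extraction: castling logic leaves the
    // ever-growing Board class and lands in a small, focused collaborator
    // that is easy to drive with unit tests.
    public class Board
    {
        // ...piece placement, movement, and plenty of other responsibilities...
    }

    public class CastlingStatusChecker
    {
        private readonly Board _board;

        public CastlingStatusChecker(Board board)
        {
            _board = board;
        }

        // A narrow surface like "give me the castling moves for this king"
        // is far easier to test than the iceberg Board was.
    }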