DaedTech

Stories about Software


How You’re Probably Misunderstanding TDD

Editorial note: I originally wrote this post for the TechTown blog.  You can check out the original here, at their site.  While you’re there, take a look around at their training courses.  

Let’s get something out of the way right up front.  You might have extensive experience with test driven development (TDD).  You might even practice it routinely and wear the red-green-refactor cadence like a comfortable work glove.  For all I know, you could be a bona fide TDD expert.

If any of that describes you, then you probably don’t actually misunderstand TDD.  Not surprisingly, people who become adept at it and live and breathe it tend to get it.  But if that introductory paragraph doesn’t describe you, then you probably have some misconceptions.

I earn my living doing a combination of training and consulting.  This affords me the opportunity to visit a lot of shops and talk to a lot of people.  And during the course of these visits and talks, I’ve noticed an interesting phenomenon.  Ask people why they choose not to use TDD, and you rarely hear a frank, “I haven’t learned how.”

Instead, you tend to hear dismissals of the practice.  And these dismissals generally arise not from practiced familiarity, but from misunderstanding TDD.  While I can’t discount the possibility, I can say that I’ve never personally witnessed someone demonstrate an expert understanding of the practice while also dismissing its value.  Rather, they base the dismissal on misconception.

So if you’ve decided up-front that TDD isn’t for you, first be sure you’re not falling victim to one of these misunderstandings.



Chess TDD 62: Finishing Chess TDD

You might not have expected to read this, and I honestly wasn’t really expecting to write it, but here we are.  I’m going to call it and announce that I’m finishing the Chess TDD series.  It’s been a lot of fun and gone on for a long time, though I’m not actually done with the codebase (more on that shortly).

My original intention, after finishing the initial implementation with acceptance and unit tests, was to walk through some actual games by way of “field testing,” so to speak.  I thought this would simulate QA to some extent — at least as well as you can with a one-person operation.  With this episode, I’ve shown a tiny taste of what that could look like.  I’ve realized that I could go on this way, but it would start to get pretty boring.  I’ve also realized that it would be irresponsible.

What I mean is that plugging laboriously through every piece on the board after every move would be showing you a “work harder, not smarter” approach that I don’t advocate.  I’d said that I would save ingesting chess games and using algebraic notation for an upcoming product, and that is true — I plan to do that.  But what I shouldn’t be doing in the interim is saving the smart work for the product and treating you to brainless, boring work in the meantime.

So with that in mind, I brought the work I was doing to a graceful close, wrapping up the feature where initial board positioning was shown to work, and using red-green-refactor to do it.

You’ll notice in the video that the Trello board is not empty by a long shot.  There’s a long list of stuff I’d like to tweak and make better as well as peripheral features that I’d like to add.  But, to use this as a metaphor for business, I have a product that (as best I can tell) successfully tells you all moves available to any piece, and that was the original charter.  It’s shippable, and, better yet, it’s covered with a robust test suite that will make it easy to build on.

What should you look for in the product?  Here are some ideas that I have, off the top of my head (and from Trello).

  • A way to overlay algebraic chess notation for acceptance tests.
  • Remove type checking in favor of a polymorphic scheme (see the sketch after this list).
  • Improve the object graph with better-placed responsibilities (e.g. removing en passant logic from Board).
  • Apply static analysis tooling to address shortcomings in the code.
  • Make sure that piece movement also works properly (currently it probably wouldn’t for castling/en passant).
  • Develop a scheme for ingesting chess games and verifying that our possibilities/play match.
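
To make the type checking item concrete, here’s a minimal sketch of the idea, using hypothetical names rather than the actual ChessTDD types: instead of the board branching on each piece’s concrete type, the movement rules live on the pieces themselves, and callers just delegate.

```csharp
using System.Collections.Generic;

// Hypothetical names for illustration -- not the actual ChessTDD types.
public class BoardCoordinate
{
    public int Row { get; }
    public int Column { get; }
    public BoardCoordinate(int row, int column) { Row = row; Column = column; }
}

// Instead of "if (piece is Rook) ... else if (piece is Knight) ...",
// each subclass owns its own movement rules.
public abstract class Piece
{
    public abstract IEnumerable<BoardCoordinate> GetMovesFrom(
        BoardCoordinate origin, int boardSize);
}

public class Rook : Piece
{
    public override IEnumerable<BoardCoordinate> GetMovesFrom(
        BoardCoordinate origin, int boardSize)
    {
        // A rook slides along its rank and file.
        for (int i = 1; i <= boardSize; i++)
        {
            if (i != origin.Column)
                yield return new BoardCoordinate(origin.Row, i);
            if (i != origin.Row)
                yield return new BoardCoordinate(i, origin.Column);
        }
    }
}
```

The payoff is that adding a new piece type means adding a class, rather than hunting down every type check scattered through the board logic.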

In short, what I have in mind is bringing this application along with the kinds of work I’d advise for the teams that I train, coach, and assess.  Here’s how to really make this codebase shine.

I have a couple of things to get off my plate before I productize this, but it’s not going to fall off my radar.  Stay tuned!  And, until then, here is the last of the Chess TDD posts in the format you’re accustomed to.

What I accomplish in this clip:

  • Finish the testing of the initial rows on the board.

Here are some lessons to take away:

  • The new C# language features (as of 6) are pretty great for making your code more compact and functional in style (see the sketch after this list).
  • Always, always, always make sure that you’re refactoring only when all tests are green.  I’ve experienced the pain of not heeding this advice, and it’s maddening.  This is a “measure twice, cut once” kind of scenario.
  • Clean, consistent abstractions are important for readability.  If you think of them, spend a little extra time to make sure they’re in place.
  • If something goes wrong with the continuous test runner or your test suite in general, pause and fix it when you notice it.  Don’t accept on faith that “everything should be passing.”  Like refactoring when red because “it’s just a trivial change,” this can result in worlds of downstream pain.
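
On that first point, here’s a small, hypothetical illustration of the kind of compaction those C# 6 features allow (expression-bodied members, auto-property initializers, and string interpolation); the names are made up, not from the actual codebase.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative only -- not the actual ChessTDD code.
public class Pawn
{
    // C# 6 auto-property initializer.
    public bool HasMoved { get; set; } = false;

    // Expression-bodied property: a pawn can advance two squares
    // only on its first move.
    public int MaxAdvance => HasMoved ? 1 : 2;

    // Expression-bodied method plus LINQ keeps the code compact and
    // functional in style.
    public IEnumerable<int> AdvanceDistances() => Enumerable.Range(1, MaxAdvance);

    // String interpolation, also new as of C# 6.
    public override string ToString() => $"Pawn (has moved: {HasMoved})";
}
```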


Chess TDD 61: Testing an Actual Game

Editorial Note: I was featured on another podcast this week, this one hosted by Pete Shearer.  Click here to give it a listen.  It mostly centers around the expert beginner concept and what it means in the software world.  I had fun recording it, so hopefully you’ll have fun listening.

This post is one where, in earnest, I start testing an actual game.  I don’t get as far as I might like, but the concept is there.  By the end of the episode, I have acceptance tests covering all initial white moves and positions, so that’s a good start.  And, with the test constructs I created, it won’t take long to be able to say the same about the black pieces.
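
I won’t reproduce the actual test constructs from the video here, but the shape of such an acceptance test looks roughly like this sketch, assuming a hypothetical Board that starts in the standard opening position, a GetMovesFrom method, and a BoardCoordinate type with value equality:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WhiteOpeningMoves
{
    [TestMethod]
    public void Pawn_at_a2_can_advance_to_a3_and_a4()
    {
        var board = new Board();  // assumed to begin in the opening position

        var moves = board.GetMovesFrom(new BoardCoordinate(2, 1)).ToList();

        // The a2 pawn's only legal opening moves are a3 and a4.
        CollectionAssert.AreEquivalent(
            new[] { new BoardCoordinate(3, 1), new BoardCoordinate(4, 1) },
            moves);
    }
}
```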

I also learned that building out all moves for an entire chess game would be quite an intense task if done manually.  So, I’d be faced with the choice between recording a lot of grunt work and implementing a sophisticated game parsing scheme, which I’d now consider out of scope.  As a result, I’ll probably try to pick some other, representative scenarios and go through those so that we can wrap the series.

What I accomplish in this clip:

  • Get re-situated after a hiatus and clean up/reorganize old cards.
  • A few odds and ends, and laying the groundwork for the broader acceptance testing.

Here are some lessons to take away:

  • No matter where they occur, try to avoid doing redundant things when you’re programming.
  • If, during the course of your work, you ever find yourself bored or on “auto-pilot,” that’s a smell.  You’re probably working harder instead of smarter.  When you find yourself bored, ask yourself how you might automate or abstract what you’re doing.
  • When you’re writing acceptance tests, it’s important to keep them readable by humans.
  • A seldom-considered benefit to pairing or having someone review your coding is that you’ll be less inclined to do a lot of laborious, obtuse work.
  • Asserting things in control flow scopes can be a problem — if you’re at iteration 6 out of 8 in a while loop when things fail, it’s pretty hard to tell that when you’re running the whole test suite (see the sketch after this list).
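
Here’s a tiny, hypothetical illustration of that last point (the Board and BoardCoordinate names are assumed): baking the loop state into the assert’s failure message means a suite-level run actually tells you which iteration died.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OpeningPositionTests
{
    private readonly Board _board = new Board();  // assumed opening position

    [TestMethod]
    public void Every_square_in_white_pawn_rank_holds_a_piece()
    {
        for (int column = 1; column <= 8; column++)
        {
            var piece = _board.GetPiece(new BoardCoordinate(2, column));

            // Without the message, a failure at column 6 looks exactly like
            // a failure at column 1 when the whole suite runs.
            Assert.IsNotNull(piece, $"No piece found at rank 2, column {column}.");
        }
    }
}
```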


Chess TDD 60: Wrapping Initial Development

There is a bit of symmetry to this episode that may interest only me.  It is the 600th post to be published on the blog, and it is the 60th post in the Chess TDD series.  I wouldn’t have thought the series accounted for 10% of my posts, but there it is.  Believe it or not, this post is about wrapping initial development on the project.  In other words, I have no more functionality cards to implement.  From here on in, it’s going to be constructing test scenarios and addressing any shortcomings that they reveal.  (Not ideal, but it’s hard to get user feedback in a one-person show with no prod environment.)

I also, after some time away, have a bit more clarity on what I want to do with this going forward, so you’ll hear some mention of it as I narrate the videos.  I’m looking to wrap up the YouTube series itself and then to use it as the centerpiece and starting point of a video product that I have in mind.  Stay tuned for updates down the line.

What I accomplish in this clip:

  • Get re-situated after a hiatus and clean up/reorganize old cards.
  • A few odds and ends, and laying the groundwork for the broader acceptance testing.

Here are some lessons to take away:

  • An interesting definition of done when it comes to software work goes beyond completeness and even shipping.  You can say that something is done when it has demonstrably added value somehow (it has sold, helped produce revenue, or something similar).
  • Writing unit tests is a great way to turn hypotheses that you have about the code base into a productive regression test suite.  It’s also a great way to confirm or refute your understanding of the code (see the sketch after this list).
  • It bears repeating over and over, but avoid programming by coincidence.  If you don’t understand why a change to your code had the effect that it had, stop what you’re doing and develop that understanding.  You cannot afford to have magic and mystery in your code.
  • There shouldn’t be any line of code in your code base that you can delete without a test turning red.  This isn’t about TDD or about code coverage — it’s about the more general idea that you should be able to justify and express the necessity of every line of code in the code base.  If removing code doesn’t break anything, then remove the code!
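
To illustrate that hypothesis-into-test idea, here’s a hypothetical example (names assumed, not from the series): suppose that, while reading the code, you suspect GetPiece returns null for an empty square.  Encoding the suspicion as a test confirms or refutes your understanding, and then guards that behavior from there on out.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GetPieceHypotheses
{
    // Hypothesis formed while reading the code: "I believe GetPiece
    // returns null for an unoccupied square."  If the test passes, the
    // understanding is confirmed and the suite now guards that behavior;
    // if it fails, I've learned something before relying on a false belief.
    [TestMethod]
    public void GetPiece_returns_null_for_an_empty_square()
    {
        var board = new Board();  // assumed opening position

        Assert.IsNull(board.GetPiece(new BoardCoordinate(4, 4)));
    }
}
```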



Chess TDD 59: King (Not) Moving into Check

This episode was relatively short and sweet.  Things actually went well, which is surprising to me somehow, particularly given the length of time between episodes.  In this episode I used previously implemented code to stop the king from being able to move into check.  I’m not positive, but off the top of my head, this might be the last move consideration to implement.  I think my remaining cards are about testing activities and design considerations.  (Though, famous last words.)
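
The gist of that reuse can be sketched roughly like this (illustrative names; the series’ actual members may differ): the king’s candidate moves get filtered against every square the opposing side threatens.

```csharp
using System.Collections.Generic;
using System.Linq;

public partial class Board
{
    // Sketch only: GetMovesFrom and GetThreatenedSquares stand in for
    // the previously implemented code; PieceColor is an assumed enum,
    // and BoardCoordinate is assumed to have value equality.
    public IEnumerable<BoardCoordinate> GetSafeKingMoves(
        BoardCoordinate kingLocation, PieceColor opposingColor)
    {
        var candidates = GetMovesFrom(kingLocation);
        var threatened = GetThreatenedSquares(opposingColor);

        // The king can go anywhere it could normally move, minus any
        // square an enemy piece attacks (i.e., moving into check).
        return candidates.Except(threatened);
    }
}
```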

What I accomplish in this clip:

  • Disallow the king from moving into check.
  • Clean a bit of cruft out of an old unit test class.

Here are some lessons to take away:

  • Tech debt can happen even in a code base well covered by tests.  It manifests itself in a variety of ways, not the least of which is making it harder than it should be to get your bearings.  That’s on display a bit in some of these episodes (though I’d argue there’s no crippling debt, you can still see the effect).
  • If you see commented out code, there’s only one thing to do with it, in my opinion: delete it without a second thought.  I mean, it’s not doing any good.  If you uncomment it and it even compiles, then… good?  Do you then leave it in, even though it’s dead?  Do you, for some reason, try to use it?  I mean, what good comes from it?  And, if you’re inclined to leave it, why do that?  Isn’t that what source control is for?
  • Things like what to name stuff and when to extract constants are not an exact science.  Bounce the question off of others and poll for opinions.  I suggest making it a practice not to be too uptight about such things, or programming collaboratively will wind up being a high-stress, neurotic endeavor.
  • When you get to the end of a complex chain of business rules applying to one part of the application, it may be the case that “the simplest thing that could work to get the tests all green” is actually rather complex.  This is (1) possibly unavoidable and/or (2) possibly a smell that you need to break the business logic apart.  In my case here, I’d say a little from each column.  The rules of chess are fairly complex, but it’s no secret that we’re carrying around some of the aforementioned tech debt here.