DaedTech

Stories about Software

Chess TDD 5: Bounded Collections of Moves

Really no housekeeping or frivolous notes for this entry.  It appears as though I’ve kind of settled into a groove with the production, IDE settings, etc.  So from here on in, it’s just a matter of me coding in 15-20 minute clips and explaining myself, notwithstanding any additional feedback, suggestions or questions.

Here’s what I accomplish in this clip:

  • Changed all piece GetMovesFrom() methods to use the BoardCoordinate type.
  • Got rid of the stupid Rook implementation of GetMovesFrom().
  • Made a design decision to have GetMovesFrom() take a “board size” parameter.
  • Rook.GetMovesFrom() is now correct for arbitrary board sizes.
  • Updated Rook.GetMovesFrom() to use 1-indexing instead of 0-indexing to accommodate the problem space.
  • Removed redundant instantiation logic in RookTest.
  • Got rid of redundant looping logic in Rook.GetMovesFrom() (a rough sketch of the end result appears below).
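
Since the finished code doesn’t appear inline in this post, here’s a minimal sketch of roughly where Rook.GetMovesFrom() ends up at this point. The names and signature details are my own illustration rather than the literal code from the repo, and it assumes the BoardCoordinate type introduced earlier in the series:

```csharp
using System.Collections.Generic;

public class Rook
{
    // Coordinates are 1-indexed to match the chess problem space, and the
    // board size is a parameter so the same logic works for arbitrary boards.
    public IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start, int boardSize = 8)
    {
        // A single loop yields both the horizontal and the vertical moves,
        // which is what eliminated the redundant looping logic.
        for (int i = 1; i <= boardSize; i++)
        {
            if (i != start.X)
                yield return new BoardCoordinate(i, start.Y);
            if (i != start.Y)
                yield return new BoardCoordinate(start.X, i);
        }
    }
}
```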

Here are some lessons to take away:

  • Sometimes you create a failing test, and getting it to pass leads you to change production code, which, in turn, leads you back into test code.  That’s okay.
  • Sometimes you’re going to make design decisions that aren’t perfect, but that you feel constitute improvements in order to keep going.  Embrace that.  Your tests will ensure that it’s easy to change your design later, when you understand the problem better.  Just focus on constant improvements and don’t worry about perfection.
  • “Simplest to get tests passing” is a subjective heuristic to keep you moving.  If you feel comfortable writing a loop instead of a single line or something because that seems simplest to you, you have license to do that…
  • But, as happened to me, getting too clever all in one shot can lead to extended debug times that cause flow interruptions.
  • Duplication in tests is bad, even just a little.
  • It can sometimes be helpful to create constants or readonly properties with descriptive names for common test inputs.  This eliminates duplication while promoting readability of the tests (see the sketch below).
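
To make that last point concrete, here’s the sort of thing I mean. This is a hypothetical sketch rather than the literal test code from the videos; the MSTest syntax and the member names are my own choices for illustration, and it assumes the Rook and BoardCoordinate types from the series:

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RookTest
{
    // Descriptive, shared test inputs instead of the same magic values
    // repeated throughout the test methods.
    private const int StandardBoardSize = 8;
    private static readonly BoardCoordinate CornerSquare = new BoardCoordinate(1, 1);

    private Rook Target { get; set; }

    [TestInitialize]
    public void BeforeEachTest()
    {
        // Shared instantiation, so individual tests don't repeat it.
        Target = new Rook();
    }

    [TestMethod]
    public void GetMovesFrom_Corner_Includes_The_Far_End_Of_The_File()
    {
        var moves = Target.GetMovesFrom(CornerSquare, StandardBoardSize);

        Assert.IsTrue(moves.Any(m => m.X == 1 && m.Y == StandardBoardSize));
    }
}
```

The named constant and the setup method are what keep the instantiation and the magic coordinates from being repeated in every test.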

TDD Chess Game Part 4: Getting Organized

Alright, welcome back to this series.

A couple of housekeeping things:

  1. I have bitten the bullet and used the Visual Studio White theme along with 14 point font to record, so hopefully the videos going forward should be easier to watch. It’s a little surreal to work with, but c’est la vie.
  2. The source code is now available on github for you to follow along. The coding is usually running ahead of my publication, so if you want to see the code from a given video, you may have to grab a slightly earlier version.

Here’s what I accomplish in this clip:

  • Started using a little todo list to keep track of what I’ve done and what I need to do.
  • Cleaned up code as reported by static analysis tools.
  • Pulled some production classes into their own namespaces and out of the test classes.
  • Defined an abstract Piece class.
  • Defined a second inheritor, “Rook,” for Piece.
  • Defined a bit of dumb functionality for Rook’s “GetMovesFrom” to get it started.
  • Implemented ability for a pawn to move two spaces on its first move.
  • Defined a piece concept of “HasMoved” (albeit just for Pawn); see the sketch after this list.
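
For orientation, here’s a rough sketch of how these classes might hang together at this stage. It’s my own reconstruction rather than the literal code from the video (for readability I’ve used the BoardCoordinate type throughout, even though some of the real methods still traffic in raw int coordinates until part 5):

```csharp
using System.Collections.Generic;

public abstract class Piece
{
    public abstract IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start);
}

public class Pawn : Piece
{
    // "HasMoved" lives on Pawn for now; it governs the two-space first move.
    public bool HasMoved { get; set; }

    public override IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start)
    {
        // One square forward is always a candidate move...
        yield return new BoardCoordinate(start.X, start.Y + 1);

        // ...and two squares forward only until the pawn has moved.
        if (!HasMoved)
            yield return new BoardCoordinate(start.X, start.Y + 2);
    }
}

public class Rook : Piece
{
    public override IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate start)
    {
        // Deliberately dumb starter implementation, just enough to get a
        // first test passing; later parts make this real.
        yield return new BoardCoordinate(start.X, start.Y + 1);
    }
}
```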

And here are the lessons to take away:

  • Keeping a list of smallish things you want to change can help you keep track of what needs to be done without distracting you too much (I picked this technique up from Kent Beck’s “Test Driven Development By Example.”)
  • If you’re using NCrunch, use whether the green dots are dark or bright as a quick way to tell if the code is compiling.
  • Gamify cosmetic issues. If things like “Optimize Namespaces” are important to you, make violations ugly and distracting in the IDE; you’ll get annoyed and fix them, whereas you probably wouldn’t bother otherwise.
  • It’s okay to write stupid tests if you do so knowing that you’ll fix them. Finding ways to always write a test to change production code is good for practicing the TDD discipline until it starts to become second nature.
  • It’s okay to write a test that causes a non-compiling failure and then to need a good bit of work to get everything back to compiling/passing.
  • I’ve mentioned this previously, but it bears repeating: it’s okay to reuse a test (especially a stupid one) to get a failing test.
  • If you weren’t aware of the C# yield keyword and deferred execution, they’d be good things to familiarize yourself with (see the sketch after this list).
  • Force yourself not to copy and paste as much as possible, even when it seems dumb. Feeling the pain of re-typing things will make it painfully obvious when you’re duplicating code and could do something better.
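
If yield is new to you, here’s a tiny, self-contained illustration of deferred execution, unrelated to the chess code itself:

```csharp
using System;
using System.Collections.Generic;

public static class YieldDemo
{
    public static IEnumerable<int> Numbers()
    {
        Console.WriteLine("First number requested");
        yield return 1;

        Console.WriteLine("Second number requested");
        yield return 2;
    }

    public static void Main()
    {
        // Nothing prints here -- calling the method does not execute its body.
        var numbers = Numbers();

        // The body executes lazily, one chunk per iteration, as it's enumerated.
        foreach (var n in numbers)
            Console.WriteLine(n);
    }
}
```

Nothing in Numbers() runs until the foreach starts pulling values; that laziness is what makes yield-based GetMovesFrom() implementations cheap to build up and filter.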

And, here’s the clip:

NCrunch and Continuous Testing: The Must-Have Setup

Most of this post was taken from the transcript of my Pluralsight course on NCrunch. If you are interested in watching the course but are not a Pluralsight subscriber, feel free to email me or leave a comment requesting a trial, and I’ll get you a 7 day subscription to check it out.

Understanding the Legitimate, Root-Cause Objection to TDD

In my experience, there are three basic “camps” of reactions to the concept of test driven development (TDD) from those not experienced with it: willing students, healthy skeptics, and reactionary curmudgeons. The first group is basically looking for a chance to practice and needs no convincing. The last group will have to be dragged along, kicking and screaming, so there’s no persuading them without the threat of negative consequences. It is the middle group that tends to have rational objections, some of which are well-founded and others of which aren’t so much. A lot of this group’s negative reaction stems from the misconceptions that I mentioned in this post about what TDD is and isn’t. But even once they understand how it works, there are still some fairly common and legitimate objections that are not simply straw man arguments.

  1. The most common and prevalent objection is that coding this way means that you’re doing a lot more work. You’re taking more time and writing more code and people don’t necessarily see the benefit, especially in cases where they already know what code they want to write.
  2. Many of the misconception-based objections, and much of the otherwise inexplicable resistance, are really the result of people simply not knowing how to write tests or practice TDD, and perhaps at times being reluctant to admit it. Others may freely admit it. Either way, the objection is that TDD, like any other discipline, would take time to learn and require an investment of effort.
  3. There is also more code that is going into a project since you now have an additional test class for each single class you would otherwise have created. More code means more maintenance time and effort.
  4. Many astute observers also realize that a lot of legacy code, particularly that involving large-work constructors, singletons, and static state is very hard to test, making attempts to do so effort-intensive.
  5. And, along the same lines, they also realize that there would be more effort required than simply learning how to do TDD – it would also mean learning different design techniques such as dependency injection, polymorphism, and inversion of control.

When you consider all of these objections, they all have a common thread. At the core of it, they’re really all variants on the theme of not having enough time. Writing the tests, maintaining the test code, learning new ways of doing things, and applying them to new and old code are all things that take time, and for most developers, time is precious. Someone selling TDD is a lot like someone selling you on a 401K: they’re convincing you that sacrificing now is going to be worth it later and asking you to take this, to some degree, on faith.

Could TDD be better?

Justifying the adoption of TDD to a healthy skeptic hinges largely on demonstrating that it provides a net benefit in terms of time, and thus cost. So how can these objections be reconciled and the concerns addressed?

Well first up are the learning curve oriented objections. And the truth is that there’s no way around this one being a time sink. Learning how to do TDD and learning how to write testable code are going to take time, no matter what. If you do not have the time to learn, this is a perfectly valid objection, but only in the shorter term. After all, we work in an industry where change is the only constant and learning new languages, frameworks, and methodologies is pretty much table stakes for staying relevant.

Regarding development time overall, a very common argument made by TDD proponents is that the practice saves time over the long haul. This is reminiscent of the parable of the tortoise and the hare where the TDD practitioner is a tortoise plodding along, getting everything right and the hare is generating reams of code quickly but with mistakes. The hare will declare himself done more quickly, but he’ll spend a lot more time later troubleshooting, reading log files, debugging, and fixing errors. The tortoise may not finish as quickly, but when he does, he truly is done.

But what about in the short term? Is there anything that can be done to make things go more quickly in the short term for TDD practitioners? Could we strap a rocket pack to the tortoise and make him go faster than the hare while preserving his accuracy?

[Image: RocketTurtle]

Speeding up the Feedback Loop

What if I told you a story? What if I told you that you could write code and know whether or not it was working nearly instantaneously? In this world of development, you don’t have to wait while your application starts up, and then navigate through various user interface screens to get to the action that will trigger the bit of code you want to verify. There is no more repetitive clicking and typing and waiting for screens to load. In fact, in this world you don’t even need to build your project or compile your code. All you need to do is type and see, as you’re typing, whether or not the changes you’re making are right. And, you can see a visual metric for how much confidence you can have in your changes by virtue of how much your code is covered by the unit tests.

Does that story sound too good to be true? Well, I’ll admit that it does sound pretty good, but I’ll let you in on a little secret – it is true. There is a name for this paradigm, and it’s called “continuous testing.” And there are various tools out there for different platforms that make it a reality, right now as we speak.

To understand the magic of continuous testing, it’s essential to understand one of the most important, but often overlooked, concepts in computer science. I’m talking about the feedback loop. At its core, programming is a series of experiments. Whenever you approach a programming task, you have a code base that does something, and you have a goal to make it do something different or new. To achieve this goal, you identify intermediate behaviors that you’d like to see to mark progress, and then you make changes that you think will result in those behaviors. Then, you run the application to see if what you thought would happen does, in fact, happen.

For example, perhaps you want to have your application display customer information stored in a database to the screen when the user clicks a certain button. You might first say “forget the database – let’s just get the button click to result in some hard-coded value being displayed,” and then set about altering the code to make that happen. When you’d made your changes to the code, you’d run the program and click that button to see what had happened.

Considered closely, this process is actually a lot like the scientific method: (1) you observe by reading the code, (2) you form a hypothesis about what you’ll need to do to it, (3) you predict the outcome of your changes, and (4) you experiment by making the changes and observing the results. The amount of time that it takes to perform an iteration of your coding version of the scientific method is what I’m calling the “feedback loop.” How long does it take for you to have an idea, implement it, and verify that it had the desired effect?

[Image: Scientist]

In the early days of programming when the use of punch cards was common, feedback times were very lengthy. Programmers would reason carefully about everything that they did because feedback times were extremely slow, meaning mistakes were very costly. While many improvements have been made across the board to feedback times, situations persist to this day when the feedback loop is excruciatingly slow. This includes long running or resource-intensive applications and distributed systems with high latency. With such systems, programmers on projects often devise schemes to try to shorten the feedback loop, such as mocking out bottlenecks to allow fast verification of the rest of the system.

What they’re really trying to do is shorten the feedback loop to allow themselves to be more productive. When a great deal of time elapses between trying something and seeing what happens, attention tends to wander to distractions like twitter or reddit, exacerbating the inefficiency in this already-slow process. Developers innately understand this problem and are frustrated by the long build and run times of behemoth and slow-running applications.

To combat this problem, developers intuitively favor faster schemes. Ask yourself whether you prefer to work on a small project that builds quickly or a large one. How about a slow test suite versus a fast one? By speeding up the feedback loop you trade frustration and wandering attention span for engagement and a feeling of accomplishment. Techniques like relying on fast-running unit tests and keeping modules small and decoupled help a great deal with this, but we can get even faster.

If short feedback is good, immediate is definitely better. Anyone who has done extensive work at a command line or, in general, used a Read-Evaluate-Print-Loop (REPL) understands this. Attention does not wander at all during a session like this. Historically, such a thing wasn’t possible in a compiled language, but with the advent of multicore systems and increasingly sophisticated compiler technology, times are changing. It is now possible to have a build running in the background of an IDE like Visual Studio even as you modify the code.

NCrunch

If you’ve been watching my series on building a chess game using TDD, you couldn’t help but notice the red and green dots on the left side of the code window, since they catch the eye. What you were seeing was the tool NCrunch in action. Now it’s time to get properly acquainted.

NCrunch is a software product by Remco Software and was written by software developer Remco Mulder, who owns the company. It is a tool written specifically to allow developers to practice continuous testing in Visual Studio. NCrunch is a commercial product with a tiered pricing model and full-blown customer support. And it operates as a plugin to Visual Studio, so there is no need to integrate or operate any kind of standalone application. It drops right into a tool with which you are already familiar.

For the first several years of its existence, NCrunch was free, since it was in an extended state of Beta release. During the course of these years, it grew a substantial and loyal user base. In the fall of 2012, Remco decided to issue version 1.0 and release NCrunch as a full, commercial product with a licensing model and production support. It is now on version 2.5 and is most certainly an excellent, commercial-grade product that is worth every penny.

As I write my code using this tool, you may notice things that I rarely or never do. I rarely, if ever, run an application. I rarely, if ever, use the unit test runner. I rarely even compile my code, though I do this sometimes simply because I happen to be quite accustomed to looking at compiler feedback in the errors window. Continuous testing tools like NCrunch may have been a novelty when they came out, but I would argue that they’re rapidly becoming table stakes for efficient development these days.

Before NCrunch, the viability of TDD for me was tied up in the idea that investing extra time up front meant that I wouldn’t later be revisiting my code (debugging, tweaking, fixing) when I was further removed from it and such work is more time consuming. With NCrunch, I don’t even need to make that case. Now, if you took TDD and NCrunch away, my development process would be substantially slower as I sat there waiting for the application to compile or the test runner to do its thing.

If you don’t have this, get it. You won’t be sorry. Forget clean code, unit testing, TDD, all of that stuff (well, not really — but indulge me here for a second). Just get this setup for the tight feedback loop alone. There is nothing like the feeling of productivity you get from typing a line of code and knowing in less than a second, without doing anything else, whether the change is what you want. That incredible power makes it all worth it — the learning curve of the tool, the cost of the tool, adopting TDD, learning to unit test. It’s like getting a car with 500 horsepower and feeling that acceleration; it ruins you for anything less.


TDD Chess Game Part 3: Stumbling and Refactoring

My apologies. I meant to be a little more regular in this series, but I stumbled a bit out of the gate, as I got into the home stretch of my next Pluralsight course. Now that the course is delivered (not released yet – in the review/edit phase), I have some more time, so I’m planning to pick this back up and go with it a little more regularly.

One interesting thing that arises out of these “fits and starts” passes at it is that it mimics an actual, common development scenario: spotty maintenance coding. What I mean is, so many TDD series that you’ll watch, or coding dojos/exercises in which you’ll participate, have as a premise that you have some fixed length of time during which to pay complete attention. But in this series, I’m sort of poking at it for 10 minutes here and 20 minutes there, closely mimicking an environment where you’re plugging a lot of holes, thrashing a bit, and saying, “where was I and what was I doing here?”

That’s evident in this clip, probably a little too much for me really to call it polished. And as such, I don’t accomplish a ton, but here’s what I did accomplish (not necessarily in order):

  • Tied up a loose end by getting rid of the last of the primitive obsession passing of x and y coordinate ints.
  • Implemented sanity precondition checks on the inputs to Board’s AddPiece() method, in terms of where pieces can be placed.
  • Pushed functionality for validating coordinates into the coordinate itself.
  • Eliminated duplication in the validation with a refactoring (a rough sketch of where this lands appears below).
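
Here’s a minimal sketch of roughly where that leaves things. The member names are my own guesses rather than the literal code from the repo, and it assumes a Piece type (at this early stage, that may still just be Pawn) along with the BoardCoordinate type from part 2:

```csharp
using System;

public class Board
{
    private readonly Piece[,] _pieces;

    public int BoardSize { get; private set; }

    public Board(int boardSize = 8)
    {
        BoardSize = boardSize;
        _pieces = new Piece[boardSize, boardSize];
    }

    public void AddPiece(Piece piece, BoardCoordinate target)
    {
        // Sanity precondition check: refuse placements that are off the board,
        // with the coordinate itself owning the definition of "valid."
        if (!target.IsCoordinateValidForBoardSize(BoardSize))
            throw new ArgumentException("Target coordinate is off the board.", "target");

        _pieces[target.X - 1, target.Y - 1] = piece;
    }
}

public class BoardCoordinate
{
    public int X { get; private set; }
    public int Y { get; private set; }

    public BoardCoordinate(int x, int y)
    {
        X = x;
        Y = y;
    }

    public bool IsCoordinateValidForBoardSize(int boardSize)
    {
        // The duplicated "is this dimension on the board?" logic from two
        // separate checks collapses into a single extracted helper.
        return IsDimensionValidForBoardSize(X, boardSize) &&
               IsDimensionValidForBoardSize(Y, boardSize);
    }

    private static bool IsDimensionValidForBoardSize(int dimension, int boardSize)
    {
        return dimension > 0 && dimension <= boardSize;
    }
}
```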

And, here are some lessons to take away from this, both instructional from me and by watching me make mistakes:

  • After a conceptual refactoring, such as replacing multiple primitives with a type, take a look around to make sure you cleaned up all instances of the former.
  • When you’re not really sure what to do next (i.e. “coder’s block” or “paralysis by analysis”), implement some sanity checks for preconditions/invariants. This might jolt you into some next steps as you do it.
  • Make ABSOLUTELY SURE that a test goes red when you think it should go red. Not understanding why a unit test is passing is just as bad as not understanding why it’s failing. In both cases, it means you don’t understand what your code is doing. Stop everything and get your brain in sync with the code immediately to save yourself a lot of frustration later. (See “programming by coincidence,” which I saw coined in the book “The Pragmatic Programmer” — and then, don’t do it!)
  • You’re going to make mistakes. Often dumb ones. The beauty of TDD and its fast feedback loop is that it keeps them from festering and getting worse later.
  • This is more of an editorial/opinion take, but I’ve more recently gravitated toward allowing my TDD to include what might be called “integration tests” (tests that exercise the interaction between two classes). As long as the test makes sense from a behavioral standpoint and provides clarity, I think it’s fine. Some, particularly those in the BDD camp, even argue that this is preferred, and that your tests should really only go through the outer API of your module/application.
  • Eliminate duplication, however trivial and however subtle. If you see repetition of any kind, you can probably extract a method. Some productivity tools and IDEs will even help you locate possible duplication.

Finally, a few notes on the video itself (and resultant code):

  • For those of you who suggested a larger font size, look for that in part 4. I apologize, but I had actually recorded the video for this already when I was taking suggestions. In the production, I did zoom to a slightly smaller area, so we’ll see if that helps any.
  • I had one commenter express a preference for a white background instead of the VS Dark theme that I use. White work-spaces give me a headache, so I darken all IDEs and things that I work in. For one person, I don’t think I’ll pull the trigger, but if more people start responding and expressing that preference, I’ll agree to suck it up and change colors.
  • The code is now on github. I’ll commit the code each time I record the video and tag it with a comment corresponding to the part of the series in question. The initial push to master just reads “Initial publish to Github” but it corresponds to the code at the end of this clip. From here forward, I’ll sync them, though if you check the repo, it’ll probably run slightly ahead of me publishing the videos because I record the audio and do these writeups after the fact.
  • Again, the higher res you view this in, the better. I’d go for 1440p if you can.

TDD Chess Game Part 2

Alright, welcome to the inaugural video post from my blog. Due to several requests over twitter, comments, etc., I decided to use my Pluralsight recording setup to record myself actually coding for this series instead of just posting a lot of snippets of code. It was actually a good bit of fun and sort of surreal to do. I just coded for the series as I normally would, except that I turned the screen capture on while I was coding. A few days later, I watched the video and recorded narration for it as I watched it, which is why you’ll hear me sometimes struggling to recall exactly what I did.

The Pluralsight recordings are obviously a lot more polished — I tend to script those out to varying degrees and re-record until I have pretty good takes. This was a different animal; I just watched myself code and kind of narrated what I was doing, pauses, stupid mistakes, and all. My goal is that it will feel like we’re pair programming with me driving and explaining as I go: informal, conversational, etc.

Here’s what I accomplish in this particular video, not necessarily in order:

  • Eliminated magic numbers in Board class.
  • Got rid of x coordinate/y coordinate arguments in favor of an immutable type called BoardCoordinate.
  • Cleaned up a unit test with two asserts.
  • Got rid of the ‘cheating’ approach of returning a tuple of int, int.
  • Made the GetPossibleMoves method return an enumeration of moves instead of a single move (see the sketch after this list).
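
Here’s a rough sketch of the “after” state of those changes. The names and details are my own reconstruction rather than the literal code from the video, so treat it as illustrative:

```csharp
using System.Collections.Generic;

// Immutable domain type replacing the xCoordinate/yCoordinate primitives
// that were being passed around everywhere.
public class BoardCoordinate
{
    public int X { get; private set; }
    public int Y { get; private set; }

    public BoardCoordinate(int x, int y)
    {
        X = x;
        Y = y;
    }
}

public class Pawn
{
    // Returns a collection of candidate moves rather than a single move,
    // and no more "cheating" with Tuple<int, int>.
    public IEnumerable<BoardCoordinate> GetPossibleMoves(BoardCoordinate currentLocation)
    {
        yield return new BoardCoordinate(currentLocation.X, currentLocation.Y + 1);
    }
}
```

The point is less the specific implementation than that a coordinate is now a named domain concept instead of a pair of loose ints.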

And, here are some lessons to take away from this, both instructional from me and by watching me make mistakes:

  • Passing the same primitive/value types around your code everywhere (e.g. xCoordinate, yCoordinate) is a code smell called “primitive obsession” and it often indicates that you have something that should be a type in your domain. Don’t let this infect your code.
  • You can’t initialize properties in a non-default constructor (I looked up the whys and wherefores here after not remembering exactly why while recording audio and video).
  • Having lots of value type parameter and return values instead of domain concepts leads to constant small confusions that add up to lots of wasted time. Eliminate these as early in your design as possible to minimize this.
  • Sometimes you’ll create a failing test and then try something to make it pass that doesn’t work. This indicates that you’re not clear on what’s going on with the code, and it’s good that you’re following TDD so you catch your confusion as early as possible.
  • If you write some code to get a red test to pass, and it doesn’t work, and then you discover the problem was with your test rather than the production code, don’t leave the changes you made to the production code in there, even if the test is green. That code wasn’t necessary, and you should never have so much as a single line of code in your code base that you didn’t put in for reasons you can clearly explain. “Meh, it’s green, so whatever” is unacceptable. At every moment you should know exactly why your tests are red if they’re red, green if they’re green, or not compiling if the code doesn’t compile. If you’ve written code that you don’t understand, research it or delete it.
  • No matter how long you’ve been doing this, you’re still going to do dumb things. Accept it, and optimize your process to minimize the amount of wasted time your mistakes cause (TDD is an excellent way to do this).

So, here’s the video. Enjoy!

A couple of housekeeping notes. First, you should watch the video in full screen, 1080p, ideally (click the little “gear” icon between “CC” and “YouTube” at the bottom of the video screen). 720 will work but may be a touch blurry. At lower resolutions, you won’t see what’s going on. Second, if there’s interest, I can keep the source for this on github as I work on it. The videos will lag behind the source, though (for instance, I’ve already done the coding for part 3 in the series — just not the audio or the post, yet). Drop me a comment or tweet at me or something if you’d like to see the code as it emerges also — adding it to github isn’t exactly difficult, but I won’t bother unless there’s interest.