DaedTech

Stories about Software


ChessTDD 27: Parameterized Acceptance Tests and Scenarios

At this point, I’m not going to bother with excuses any longer; I’ll just apologize for the gaps between these posts.  But I will continue to stick with it, even if it takes a year; I rarely get to program in C# anymore, so it’s fun when I do.  In this episode, I got a bit more sophisticated in my use of SpecFlow and actually got the acceptance tests into what appears to be reasonable shape moving forward.  I’m drawing my knowledge of SpecFlow from this Pluralsight course.  It’s a big help.

Here’s what I accomplish in this clip:

  • Implemented a new SpecFlow feature.
  • Discovered that the different SpecFlow features can refer to the same C# class.
  • Implemented parameterized SpecFlow components and then scenarios (see the sketch just after this list).
  • Got rid of the “hello world-ish” feature that I had created to get up to speed in favor of one that’s a lot more useful.
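
Since it’s hard to picture “parameterized scenarios” without seeing one, here’s a minimal sketch of the shape this takes.  The feature text and step names are hypothetical rather than the video’s actual code, and the chess calls (AddPiece, GetMovesFrom) are stand-ins for whatever the production API exposes; the SpecFlow mechanics themselves are real, though: a Scenario Outline runs once per Examples row, and regex capture groups in a binding attribute arrive as typed method parameters.

// In the .feature file, a Scenario Outline plus an Examples table gives you
// a parameterized scenario: one scenario body, one run per row.
//
//   Scenario Outline: pawn advances one square
//     Given a pawn at (<x>, <y>)
//     When I ask for the moves of the piece at (<x>, <y>)
//     Then the moves should include (<x>, <targetY>)
//
//     Examples:
//       | x | y | targetY |
//       | 1 | 2 | 3       |
//       | 5 | 2 | 3       |

using System.Collections.Generic;
using TechTalk.SpecFlow;

[Binding]
public class MoveCalculationSteps
{
    private readonly Board _board = new Board();
    private IEnumerable<BoardCoordinate> _moves;

    // The capture groups in the regex become the method's typed parameters,
    // so one binding covers every row of the Examples table.
    [Given(@"a pawn at \((\d+), (\d+)\)")]
    public void GivenAPawnAt(int x, int y)
    {
        _board.AddPiece(new Pawn(), new BoardCoordinate(x, y));
    }

    [When(@"I ask for the moves of the piece at \((\d+), (\d+)\)")]
    public void WhenIAskForTheMovesOfThePieceAt(int x, int y)
    {
        _moves = _board.GetMovesFrom(new BoardCoordinate(x, y));
    }
}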

Here are some lessons to take away:

  • When using new tools, you’re going to flounder a bit before the aha! moments.  That’s okay and it happens to everyone.
  • As your understanding of tooling and your environment evolves, be sure to evolve with it.  Don’t remain rigid.
  • Fight the urge to copy and paste.  It’s always a fight for me, too, but when I don’t have a better solution in the moment for getting a test green than duplicating code, I force myself to feel the pain and re-type it.  This helps me remember that I’m failing and that I need to make corrections.
  • When I got my bearings in SpecFlow and realized that I had things from the example that I didn’t actually need, I deleted them forthwith.  You can always add stuff back later, but don’t leave things you don’t need lying around.
  • Notice how much emphasis I placed throughout the clip on getting back to green as frequently as possible.  I could have parameterized everything wholesale in a broad refactoring, but I broke it up, checking for green as frequently as possible.
  • Sometimes sanity checks are necessary, especially when you don’t really know what you’re doing.  I was not clear on why 8 was succeeding, since I was querying for a piece off the board.  Just to make sure that I wasn’t messing up the setup and it was never even checking for 8, I added that exception in temporarily (there’s a sketch of the idea just after this list).  Use those types of checks to make sure you’re not tilting at windmills.  As I’ve said, getting red when you expect red is important.
  • No matter what you’re doing, always look to be refactoring and consolidating to keep your code tight.
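
To make that sanity-check trick concrete, here’s roughly its shape.  This is a sketch rather than the video’s actual code, with GetMovesFrom, Board, and BoardCoordinate standing in for the series’ domain types:

public IEnumerable<BoardCoordinate> GetMovesFrom(BoardCoordinate startingLocation)
{
    // Temporary tripwire: if the test stays green with this line in place, the
    // setup never actually queries column 8, and the green test is passing for
    // some other reason entirely.  Delete it as soon as you have your answer.
    if (startingLocation.X == 8)
        throw new InvalidOperationException("Sanity check: we really were asked about 8.");

    return CalculateMoves(startingLocation); // stand-in for the real move logic
}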


ChessTDD 26: At Last, Acceptance Tests

Let’s get the excuses out of the way: busy, more work coming in, vacation, yada-yada.  Sorry for the delay since last time.  This episode is a short one just to get back on track.  I haven’t been writing much code lately in general, and I was already stumbling blindly through SpecFlow adoption, so this was kind of just to get my sea legs back.  I wanted to get more than one acceptance test going and start to get a feel for how to divide the acceptance testing effort up among “features” going forward.

Here’s what I accomplish in this clip:

  • Cleaned up the dead code oops from last time.
  • Defined an actual feature file for SpecFlow, with four real acceptance tests.

Here are some lessons to take away:

  • I don’t really know SpecFlow very well yet, so when pasting in the same text seemed to fix the problem, the subsequent floundering around was my way of avoiding programming by coincidence.
  • No matter what you’re doing (unit tests, production code, or, in this case, acceptance tests) you should always strive to keep your code as clean and conformant to the SRP as you can.
  • When you don’t know a tool, go extra slowly with things like rename operations and whatnot.  I’m referring to my changing the name of the SpecFlow files.  It’s easy to get off the rails when you don’t know what’s going on, so make sure you’re building and your tests are green frequently.


Cutting Down on Code Telepathy

Let’s say that you have some public facing method as part of an API:

public void Process(CustomerOrder order)
{
    //Whatever
}

CustomerOrder is something that you don’t control but that you do have to use. Life is good, but then let’s say that a requirement comes in saying that orders can now be post-dated, so you need to modify your API somewhat, to something like this:

public void Process(CustomerOrder order, DateTime orderDate)
{
    if(orderDate < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "orderDate");

    //Whatever
}

Great, but that was really painful because you learn that publishing changes to your public API is a real hassle for both yourself and for your users. After a lot of elbow grease and grumbling at the breaking change, though, things are stable once again. At least until a stakeholder with a lot of clout comes along and demands that it be possible to process orders through that method while noting that the order is actually a gift. You kick and scream, but to no avail. It has to go out and it has to hurt, and you're powerless to stop it. Grumbling, you write the following code, trying at least to sneak it in as a non-breaking change:

public void Process(CustomerOrder order, DateTime orderDate, bool isGift = false)
{
    if (orderDate < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "orderDate");

    //Whatever
}

But then you start reading and realize that life isn't that simple: optional parameters are resolved at the call site at compile time, so clients compiled against the old two-argument signature are probably going to break anyway. Fed up, you decide that you're going to prevent yourself ever from being bothered by this again. You'll write the API that stands the test of time:

public void Process(CustomerOrder order, Dictionary<string, object> options)
{
    if(((DateTime)options["orderDate"]) < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "options");

    //Whatever
}

Now, this can never be wrong. CustomerOrder can't be touched, and the options dictionary can support any extensions that are requested of you from here forward. If changes need to be made, you can make them internally without publishing painful changes to the API. You have, fortunately, separated your concerns enough that you can simply deploy a new DLL that handles order processing, and any new values supplied by your clients can be handled. No more API changes -- just a quick update, some testing, and an explanatory Word document sent to your client explaining how to use the thing. Here's the first one:

public class ProcessingClient
{
    private OrderProcessor _orderProcessor = new OrderProcessor();

    public void SubmitAnOrder(CustomerOrder order)
    {
        var options = new Dictionary<string, object>();
        options["orderDate"] = DateTime.Now;
        options["isGift"] = true;
        _orderProcessor.Process(order, options);
    }
}

There. A flexible API and the whole "is gift" thing neatly handled. If they specify that it's a gift, you handle that. If they specify that it isn't or just don't add that option at all, then you treat those equally as the default case. Important stakeholder satisfied, and you won't be bothered with nasty publications. So, all good, right?
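
To ground that, here’s a sketch of what the receiving end of those defaults might look like.  This is my hypothetical server-side handling, not anything from a real API:

public void Process(CustomerOrder order, Dictionary<string, object> options)
{
    if (((DateTime)options["orderDate"]) < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "options");

    // A missing "isGift" key and an explicit false both land in the same
    // default, non-gift path.  The flip side of that convenience: a misspelled
    // key is also silently treated as "not a gift."
    object rawIsGift;
    bool isGift = options.TryGetValue("isGift", out rawIsGift) && (bool)rawIsGift;

    if (isGift)
        ApplyGiftHandling(order); // hypothetical, whatever "gift" means here
}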

Flexibility, but at what cost?

I'm guessing that, at a visceral level, your reaction to this sequence of events is probably to cringe a little, even if you're not sure why. Maybe it's the clunky use of a collection type instead of something slicker. Maybe it's the (original) passing of a Boolean to the method. Perhaps it's simply to demand to know why CustomerOrder is inviolate or why we couldn't work to an order interface or at least define an inheritor. Maybe "options" reminds you of ViewState.

But, whatever it is, doesn't defining a system boundary that doesn't need to change seem like a worthwhile goal? Doesn't it make sense to etch painful boundaries in stone so that all parties can rely on them without painful integration? And if you're going to go that route, doesn't it make sense to build in as much flexibility as possible so that all parties can continue to innovate?

Well, that brings me to the thing that makes me wince about this approach. I'm not a fan of shying away from the pain of "icky publish/integration" instead of going with "if it hurts, do it more and get better at it." That shying away doesn't make me wince in and of itself, but it does seem like the wrong turn at a fork in the road to what does make me wince, which is the irony of this 'flexible' approach. The idea in doing it this way is essentially to say, "okay, publishing sucks, so let's lock down the integration point so that all parties can work independently, but let's also make sure that we're future proof so we can add functionality later." Or, tl;dr, "minimize multi-party integration coordination with hyper-flexible API."

So where's the irony? Well, how about the fact that any new runtime-bound additions to "options" require an insane amount of coordination between the parties? You're now more coupled than ever! For instance, let's say that we want to add a "gift wrap" option. How does that go? Well, first I would have to implement the functionality in the code. Then, I'd have to test and deploy my changes to the server, but that's only the beginning. From there, I need to inform you what magic string to use, and probably to publish a Word document with an example, since it's easy to get this wrong. Then, once you have that document, I have to go through my logs and troubleshoot to discover that, "oh yeah, see that -- you're passing us 'shouldGiftwrap' when it should really be 'shouldGiftWrap' with a capital W." And if I ever change it, by accident or on purpose? You'll keep compiling and running, and everything will be normal except that, from your perspective, gift wrapping will just quietly stop working. How much pain have we saved in the end with this non-discoverable, counter-intuitive, but 'flexible' setup? Wouldn't it be better not to get cute and just make publishing a more routine, friction-free experience?
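
For contrast, here’s a sketch of the boring alternative: promote the options to a real type and let the compiler carry the knowledge that the Word document used to.  This is a hypothetical rework, and it brings the publishing question back, but it makes the contract discoverable:

public class OrderProcessingOptions
{
    public DateTime OrderDate { get; set; }
    public bool IsGift { get; set; }
    public bool ShouldGiftWrap { get; set; }
}

public void Process(CustomerOrder order, OrderProcessingOptions options)
{
    if (options.OrderDate < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "options");

    //Whatever
}

Misspell ShouldGiftWrap now and you get a compiler error instead of gift wrapping that quietly stops working, and clients discover the available options through IntelliSense rather than a Word document.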

The take-away that I'd offer here is to consider something about your code and your software that you may not previously have considered. It's relatively easy to check your code for simple defects, and even to write it in such a way as to minimize things like duplication and code churn. We're also good at figuring out how to avoid doing the same thing over and over, and how to simplify. Those are all good practices. But the new thing I'd ask you to consider is "how much out of band knowledge does this require between parties?"

It could be a simple scenario like this, with a public facing API. Or, maybe it's an internal integration point between your team and another team. But maybe it's even just the interaction surface between two modules, or even classes, within your code base. Do both parties need to understand something that's not part of the method signatures and general interaction between these entities? Are you passing around magic numbers? Are you relying on the same implicit assumptions in both places? Are there things you're communicating through a means other than the actual interactions or else just not communicating at all? If so, I suggest you do a mental exercise to ask yourself what would be required to eliminate that out of band communication. Otherwise, today's clever ideas become tomorrow's maintenance nightmares.


ChessTDD 24: Cleaning Up for Acceptance Testing

It’s been a little while since I posted to this series, largely because of the holiday time.  I’ve wrapped up some things that I’d been working on and am hoping here in the next few weeks to have some time to knock out several episodes of this, so look for an elevated cadence (fingers crossed).  To get back into the swing of things in this episode, I’m going to pull back a bit from the acceptance test writing and clean up some residual cruft so that when I resume writing the acceptance tests in earnest, I’m reacquainted with the code base and working with cleaner code.

Here’s what I accomplish in this clip:

  • Refactored awkward test setup to push board population to production code.
  • Got rid of casting in acceptance tests (and thus prepared for a better implementation as we start back up).

Here are some lessons to take away:

  • Refactoring is an interesting exercise in the TDD world when you’re moving common functionality from tests to production.  Others’ mileage may vary, but I consider this to be a refactoring, so even moving this from test code to production code, I try to stay green as I go.
  • I made a mistake in moving on during a refactoring when my dots went dark green.  Turns out they were green for the project I was in even while another project and thus the solution were not compiling.  It’s good to be mindful of gotchas like this so that you’re not refactoring, thinking that everything is fine, when in reality you’re not building/passing.  This is exactly the problem with non-TDD development — you’re throwing a lot of code around without verification that what you’re doing isn’t creating problems.
  • It’s not worth agonizing over the perfect implementation.  If what you’re doing is better than what existed before, you’re adding value.
  • If you’re working experimentally, trying things out, and your tests stay green the whole while, make sure you can still get to red.  Take a second and break something as a sanity check.  I can’t tell you how frustrating it is to work for a while assuming that everything’s good, only to learn that you’d somehow inadvertently made the tests always pass for some trivial reason.


Chess TDD 23: Yak-Shaving with SpecFlow

I’m writing out this paragraph describing what I’m planning to do before recording myself doing anything. This post has been a while in coming because I’ve waffled between whether I should just hand write acceptance/end-to-end tests (what I generally do) or whether I should force you to watch me fumble with a new tool. I’ve opted for the latter, though, if it gets counter-productive, I may scrap that route and go back to what I was doing. But, hopefully this goes well.

Having fleshed out a good bit of the domain model for my Chess move calculator implementation, the plan is to start throwing some actual chess scenarios at it in the form of acceptance tests. The purpose of doing this now is two-fold: (1) drive out additional issues that I may not be hitting with my comparably straightforward unit test cases and (2) start proving in English that this thing works. So, here goes.

Here’s what I accomplish in this clip:

  • Installed the SpecFlow plugin for Visual Studio.
  • Set SpecFlow up for the ChessTDD project.
  • Wrote a simple acceptance test.

Here are some lessons to take away:

  • There is a camp that favors ATDD (acceptance test driven development), or “outside-in TDD,” and another that prefers “traditional,” or “inside-out,” TDD.  I’m not really in either camp, but you might find that you have a preference.  Both approaches are worth understanding.
  • What I’m doing right now, for the moment, is not test driven development.  I’m retrofitting acceptance tests to bolster my coverage and see if the more micro-work I’ve done stands up to real world usage.
  • If you want to see how someone who doesn’t know SpecFlow gets it set up, this is the video for you.
  • If you’ve been tinkering with a test for a while and the test was green the whole time, how do you know you’re done?  With your assert in place as you think it should look, modify it quickly to something that should fail and confirm that it does.  The idea here is to make sure that your modifications to the test have actually had an effect.