DaedTech

Stories about Software


Customizing Generated Method Header Comments

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at GhostDoc, the subject of this post.

Last month, I wrote a post introducing you to T4 templates.  Near the end, I included a mention of GhostDoc’s use of T4 templates in automatically generating code comments.  Today, I’d like to expand on that.

To recap very briefly, recall that GhostDoc allows you to generate things like method header comments.  I recommend that, in most cases, you let it do its thing.  It does a good job.  But sometimes, you might have occasion to want to tweak the result.  And you can do that by making use of T4 templates.

Documenting Chess TDD

To demonstrate, let’s revisit my trusty toy code base, Chess TDD.  Because I put this code together for instructional purposes and not to release as a product, it has no method header comments for IntelliSense’s benefit.  This makes it the perfect candidate for a demonstration.

If I had released this as a library, I’d have started the documentation with the Board class.  Most of the client interaction would happen via Board, so let’s document that.  It offers you a constructor and a bunch of semantics around placing and moving pieces.  Let’s document the conceptually simple “MovePiece” method.
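To give a sense of the output, here’s a minimal sketch of the kind of XML header such a tool generates for a MovePiece method.  The signature and the BoardCoordinate stub below are assumptions for illustration rather than the exact Chess TDD code, and the comment text is in the terse, name-derived style these generators tend to produce by default, which is exactly the sort of thing you’d customize with a T4 template.

    // Hypothetical coordinate type, stubbed so the example stands alone.
    public class BoardCoordinate
    {
        public int X { get; set; }
        public int Y { get; set; }
    }

    public class Board
    {
        /// <summary>
        /// Moves the piece.
        /// </summary>
        /// <param name="origin">The origin.</param>
        /// <param name="destination">The destination.</param>
        public void MovePiece(BoardCoordinate origin, BoardCoordinate destination)
        {
            // Implementation elided; the point is the generated XML header above.
        }
    }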


Chess TDD 62: Finishing Chess TDD

You might not have expected to read this, and I honestly wasn’t really expecting to write it, but here we are.  I’m going to call it and announce that I’m finishing the Chess TDD series.  It’s been a lot of fun and gone on for a long time, and I’m not actually done with the codebase (more on that shortly).

My original intention, after finishing the initial implementation with acceptance and unit tests, was to walk through some actual games, by way of “field testing,” so to speak.  I thought this would simulate QA to some extent, at least as well as you can with a one-person operation.  With this episode, I’ve shown a tiny taste of what that could look like.  I’ve realized that I could go on this way, but it would start to get pretty boring.  I’ve also realized that it would be irresponsible.

What I mean is that plugging laboriously through every piece on the board after every move would be showing you a “work harder, not smarter” approach that I don’t advocate.  I’d said that I would save ingesting chess games and using algebraic notation for an upcoming product, and that is true — I plan to do that.  But what I shouldn’t be doing in the interim is saving the smart work for the product and treating you to brainless, boring work in the meantime.

So with that in mind, I brought the work I was doing to a graceful close, wrapping up the feature where initial board positioning was shown to work, and using red-green-refactor to do it.

You’ll notice in the video that the Trello board is not empty by a long shot.  There’s a long list of stuff I’d like to tweak and make better as well as peripheral features that I’d like to add.  But, to use this as a metaphor for business, I have a product that (as best I can tell) successfully tells you all moves available to any piece, and that was the original charter.  It’s shippable, and, better yet, it’s covered with a robust test suite that will make it easy to build on.

What should you look for in the product?  Here are some ideas that I have, off the top (and from Trello).

  • A way to layer algebraic chess notation onto the acceptance tests.
  • Remove type checking in favor of a polymorphic scheme.
  • Improve the object graph with better responsibilities (e.g., removing en passant logic from Board).
  • Apply static analysis tooling to address shortcomings in the code.
  • Make sure that piece movement also works properly (currently it probably wouldn’t for castling/en passant).
  • Develop a scheme for ingesting chess games and verifying that our possibilities/play match (see the notation sketch after this list).
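On that last item, here is a minimal sketch of one small piece of such a scheme: converting an algebraic square like “e4” into zero-based board coordinates.  The class name and conventions are assumptions for illustration, not the actual Chess TDD code.

    using System;

    public static class AlgebraicNotation
    {
        // Converts a square like "e4" into zero-based (column, row) coordinates.
        // Assumes files a-h map to columns 0-7 and ranks 1-8 map to rows 0-7.
        public static Tuple<int, int> ToCoordinates(string square)
        {
            if (string.IsNullOrEmpty(square) || square.Length != 2)
                throw new ArgumentException("Expected a two-character square like \"e4\".");

            int column = char.ToLowerInvariant(square[0]) - 'a';
            int row = square[1] - '1';

            if (column < 0 || column > 7 || row < 0 || row > 7)
                throw new ArgumentException("\"" + square + "\" is not a legal square.");

            return Tuple.Create(column, row);
        }
    }

From there, a full ingester would parse whole moves (piece, captures, castling, and so on) and replay them against the engine, comparing each played move to the set of moves the engine reports as legal.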

In short, what I have in mind is bringing this application along using the kinds of work I’d advise for the teams that I train, coach, and assess.  Here’s how to really make this codebase shine.

I have a couple of things to get off my plate before I productize this, but it’s not going to fall off my radar.  Stay tuned!  And, until then, here is the last of the Chess TDD posts in the format you’re accustomed to.

What I accomplish in this clip:

  • Finish the testing of the initial rows on the board.

Here are some lessons to take away:

  • The new C# language features (as of 6) are pretty great for making your code more compact and functional in nature (see the sketch after this list).
  • Always, always, always make sure that you’re refactoring only when all tests are green.  I’ve experienced the pain of not heeding this advice, and it’s maddening.  This is a “measure twice, cut once” kind of scenario.
  • Clean, consistent abstractions are important for readability.  If you think of them, spend a little extra time to make sure they’re in place.
  • If something goes wrong with the continuous test runner or your test suite in general, pause and fix it when you notice it.  Don’t accept on faith that “everything should be passing.”  Like refactoring when red because “it’s just a trivial change,” this can result in worlds of downstream pain.
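On that first point, here’s a quick, generic illustration of the C# 6 features that tend to tighten code up: getter-only auto-properties, expression-bodied members, the null-conditional operator, and string interpolation.  The Piece and Square types are invented for the example, not taken from the Chess TDD codebase.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Piece
    {
        public Piece(string name) { Name = name; }

        // Getter-only auto-property (C# 6), assigned from the constructor.
        public string Name { get; }
    }

    public class Square
    {
        public Piece Piece { get; set; }

        // Expression-bodied property instead of a full getter block.
        public bool IsOccupied => Piece != null;

        // Null-conditional operator plus string interpolation.
        public string Describe() => $"{Piece?.Name ?? "empty"} square";
    }

    public static class Program
    {
        public static void Main()
        {
            var squares = new List<Square> { new Square(), new Square { Piece = new Piece("Knight") } };

            // LINQ keeps the functional-looking flavor mentioned in the lesson.
            int occupied = squares.Count(s => s.IsOccupied);

            Console.WriteLine($"{occupied} of {squares.Count} squares occupied ({squares[0].Describe()}, {squares[1].Describe()}).");
        }
    }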


Chess TDD 60: Wrapping Initial Development

There is a bit of symmetry to this episode that may interest only me.  It is the 600th post to be published on the blog, and it is the 60th post in the Chess TDD series.  I wouldn’t have thought the series accounted for 10% of my posts, but there it is.  Believe it or not, this post is about wrapping initial development on the project.  In other words, I have no more functionality cards to implement.  From here on in, it’s going to be constructing test scenarios and addressing any shortcomings that they reveal.  (Not ideal, but it’s hard to get user feedback in a one-person show with no prod environment.)

I also, after some time away, have a bit more clarity on what I want to do with this going forward, so you’ll hear some mention of it as I narrate the videos.  I’m looking to wrap up the YouTube series itself and then to use it as the centerpiece and starting point of a video product that I have in mind.  Stay tuned for updates down the line.

What I accomplish in this clip:

  • Get re-situated after a hiatus and clean up/reorganize old cards.
  • Take care of a few odds and ends and lay the groundwork for the broader acceptance testing.

Here are some lessons to take away:

  • An interesting definition of done when it comes to software work goes beyond completeness and even shipping.  You can say that something is done when it has demonstrably added value somehow (it has sold, helped produce revenue, or something along those lines).
  • Writing unit tests is a great way to turn hypotheses that you have about the code base into a productive regression test suite.  It’s also a great way to confirm or refute your understanding of the code.
  • It bears repeating over and over, but avoid programming by coincidence.  If you don’t understand why a change to your code had the effect that it had, stop what you’re doing and develop that understanding.  You cannot afford to have magic and mystery in your code.
  • There shouldn’t be any line of code in your code base that you can delete without a test turning red.  This isn’t about TDD or about code coverage — it’s about the more general idea that you should be able to justify and express the necessity of every line of code in the code base.  If removing code doesn’t break anything, then remove the code!



Chess TDD 59: King (Not) Moving into Check

This episode was relatively short and sweet.  Things actually went well, which is surprising to me somehow, particularly given the length of time between episodes.  In this episode I used previously implemented code to stop the king from being able to move into check.  I’m not positive, but off the top, this might be the last move consideration to implement.  I think my remaining cards are about testing activities and design considerations.  (Though, famous last words)
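To make that concrete, here is a rough sketch of the shape of such a check.  The names here (IThreatEvaluator, KingMoveFilter, and so on) are hypothetical stand-ins for illustration, not the actual Chess TDD types.

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-ins for illustration only.
    public class BoardCoordinate
    {
        public BoardCoordinate(int x, int y) { X = x; Y = y; }
        public int X { get; }
        public int Y { get; }
    }

    public interface IThreatEvaluator
    {
        // True if the opposing side could capture a piece sitting on this square.
        bool IsThreatened(BoardCoordinate square, bool evaluatingForBlack);
    }

    public class KingMoveFilter
    {
        private readonly IThreatEvaluator _threatEvaluator;

        public KingMoveFilter(IThreatEvaluator threatEvaluator)
        {
            _threatEvaluator = threatEvaluator;
        }

        // Start with the king's candidate destinations and drop any square the
        // opposing side threatens; that is, don't let the king move into check.
        public IEnumerable<BoardCoordinate> LegalKingMoves(
            IEnumerable<BoardCoordinate> candidateMoves, bool isBlackKing)
        {
            return candidateMoves.Where(m => !_threatEvaluator.IsThreatened(m, isBlackKing));
        }
    }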

What I accomplish in this clip:

  • Disallow king from moving into check.
  • Clean a bit of cruft out of an old unit test class.

Here are some lessons to take away:

  • Tech debt can happen even in a code base well covered by tests.  It manifests itself in a variety of ways, not the least of which is making it harder than it should be to get your bearings.  That’s on display a bit in some of these episodes (though I’d argue there’s no crippling debt, you can still see the effect).
  • If you see commented out code, there’s only one thing to do with it, in my opinion: delete it without a second thought.  I mean, it’s not doing any good.  If you uncomment it and it even compiles, then… good?  Do you then leave it in, even though it’s dead?  Do you, for some reason, try to use it?  I mean, what good comes from it?  And, if you’re inclined to leave it, why do that?  Isn’t that what source control is for?
  • Things like what to name stuff and when to extract constants are not an exact science.  Bounce the question off of others and poll for opinions.  I suggest making it a practice not to be too uptight about such things, or programming collaboratively will wind up being a high-stress, neurotic endeavor.
  • When you get to the end of a complex chain of business rules applying to one part of the application, it may be the case that “the simplest thing that could work to get the tests all green” is actually rather complex.  This is (1) possibly unavoidable and/or (2) possibly a smell that you need to break the business logic apart.  In my case here, I’d say a little from each column.  The rules of chess are fairly complex, but it’s no secret that we’re carrying around some of the aforementioned tech debt here.


Chess TDD 58: (Not) Castling through Check

In this episode, I took the newly minted threat evaluator and used it to prevent castling through check.  The most interesting thing to note in this episode, however, aside from continued progress toward the final product, was how some earlier sub-optimal architectural shortcuts came back to bite us (if only temporarily).  At this point, we’re pretty close to a full implementation, but if we were building more and more functionality on top of this, it would definitely be time to pause and clean house a bit.

What I accomplish in this clip:

  • Integrated threat evaluator with castling.
  • Prevented castling through check.

Here are some lessons to take away:

  • Working sporadically on a project imposes costs that go beyond just having to take time to get back up to speed.  In this case, I’ve changed the way I use Trello on a lot of projects but hadn’t changed it here, so I wasted some time managing the board.  Not a big deal, but definitely an example of the hidden cost of working on a project sporadically.
  • I’m personally very context-dependent when it comes to outside-in TDD (ATDD) or so-called traditional TDD.  Your mileage may vary, but the lesson to take away is that you should be able to articulate your approach, whatever it is.  Why do you do it that way?  The answer shouldn’t just be, “mmm… dunno.”
  • I made the mistake in this video of renaming some tests while I had a failing test.  As I stated in the video, the risk of this causing a problem is nearly nil, so it may seem that I’m going on about theory for theory’s sake.  But this is really an important practice — not so that you can get the “TDD Badge of Purity,” but because there are times when making this mistake leads you to introduce more failures or to introduce a different reason for failure without realizing it.  Your green test suite is like an early detection system, the way that pain tells your body something is wrong.  If you start making changes with red tests, it’s like eating a big meal after leaving the dentist’s office loaded with Novocaine.  You might start chewing through your tongue without realizing it.
  • You saw the circular dependencies thing come home to roost in this episode with a stack overflow exception.  Having circular dependencies in the code isn’t just bad for architectural purity.  It causes tangible pain.  (See the sketch after this list.)
  • You also saw me do something iffy to address the circular dependency problem.  Architectural/design concessions beget more concessions if left unchecked.  You don’t do one iffy thing and that’s the end of it.  Sooner or later, it becomes a slippery slope.
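To illustrate that stack overflow point in general terms, here is a contrived sketch, not the actual Chess TDD code, of how a circular dependency turns into infinite mutual recursion:

    // Contrived example: two classes that each call back into the other with no
    // base case.  Any call to IsSquareSafe recurses forever and eventually blows
    // the stack with a StackOverflowException.
    public class GameBoard
    {
        private readonly ThreatEvaluator _evaluator = new ThreatEvaluator();

        public bool IsSquareSafe(int x, int y)
        {
            // The board asks the evaluator...
            return !_evaluator.IsThreatened(this, x, y);
        }
    }

    public class ThreatEvaluator
    {
        public bool IsThreatened(GameBoard board, int x, int y)
        {
            // ...and the evaluator asks the board right back.  Neither call terminates.
            return !board.IsSquareSafe(x, y);
        }
    }

The usual way out is to break the cycle so the calls flow in only one direction, for example by having one side depend on an abstraction or on plain data instead of calling back into the other.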