DaedTech

Stories about Software


Unit Testing DateTime.Now Without Isolation

My friend Paul pointed something out to me the other day regarding my post about TDD even when you have to discard tests. I believe this trick comes from the book Working Effectively with Legacy Code by Michael Feathers (though I haven't yet read that one, so I can't be positive).

I was writing some TDD tests around the following production method:

public virtual void SeedWithYearsSince(DropDownList list, int year)
{
    for (int index = year; index <= DateTime.Now.Year; index++)
        list.Items.Add(new ListItem(index.ToString()));
}

The problem I was having was that any test I wrote and checked in would only be good through the end of 2012, essentially carrying an expiration date of January 1st, 2013.
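
To make that concrete, the kind of test I'd end up with looks something like this (just a sketch; the filler class's constructor signature is assumed from the snippets later in the post):

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Adds_Two_Items_When_Passed_2011()
{
    var filler = new CalendarDropdownFiller(new DateTimeFormatInfo());
    var list = new DropDownList();
    filler.SeedWithYearsSince(list, 2011);

    // Passes only while DateTime.Now.Year is 2012, i.e. until January 1st, 2013.
    Assert.AreEqual(2, list.Items.Count);
}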

What Paul pointed out is that I could refactor this to the following:

protected virtual int CurrentYear
{
    get
    {
        return DateTime.Now.Year;
    }
}

public virtual void SeedWithYearsSince(DropDownList list, int year)
{
    for (int index = year; index <= CurrentYear; index++)
        list.Items.Add(new ListItem(index.ToString()));
}

And, once I've done that, I can introduce the following class into my test class:

public class CalendarDropdownFillerExtension : CalendarDropdownFiller
{
    private readonly int _currentYear;

    // Overrides the production CurrentYear, which reads DateTime.Now, with a
    // fixed value supplied by the test.
    protected override int CurrentYear
    {
        get
        {
            return _currentYear;
        }
    }

    public CalendarDropdownFillerExtension(DateTimeFormatInfo formatInfo, int yearToUse) : base(formatInfo)
    {
        _currentYear = yearToUse;
    }
}

With all that in place, I can write a test that no longer expires:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Adds_Two_Items_When_Passed_2011()
{
    var filler = new CalendarDropdownFillerExtension(new DateTimeFormatInfo(), 2012);
    var list = new DropDownList();
    filler.SeedWithYearsSince(list, 2011);

    Assert.AreEqual(2, list.Items.Count);
}

In this test, I use the new class, which requires me to specify the current year. It overrides the base class's CurrentYear property, which uses DateTime.Now, in favor of the "current" year I've passed in, a value that has nothing to do with the non-deterministic quantity "Now". As a result, I can TDD 'til the cows come home and check everything in so that nobody accuses me of having a Canadian girlfriend. In other words, I get to have my cake and eat it too.
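
As a side note, the same seam can be created without inheritance by injecting the notion of "the current year" into the class. The following is only a sketch of that alternative, not code from the original post:

using System;
using System.Globalization;
using System.Web.UI.WebControls;

// Sketch only: a constructor-injected delegate supplies the current year,
// defaulting to the real clock in production and to a canned value in tests.
public class CalendarDropdownFiller
{
    private readonly Func<int> _currentYearProvider;

    public CalendarDropdownFiller(DateTimeFormatInfo formatInfo, Func<int> currentYearProvider = null)
    {
        // formatInfo would presumably be stored here for the class's other needs.
        _currentYearProvider = currentYearProvider ?? (Func<int>)(() => DateTime.Now.Year);
    }

    public virtual void SeedWithYearsSince(DropDownList list, int year)
    {
        for (int index = year; index <= _currentYearProvider(); index++)
            list.Items.Add(new ListItem(index.ToString()));
    }
}

A test would then pass "() => 2012" and get the same determinism without needing a subclass at all.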


Announcement for RSS Subscribers: Switching Subscription Service

I am going to be switching services for RSS and content update notifications. Up until now, I've been using FeedBurner, a free service that has been around since, I believe, 2004 or so and was bought out by Google in 2007. Since that acquisition, its best feature is that it is free. This is offset somewhat by pretty flaky subscriber-count statistics and occasional burps in performance and function. A bigger drawback, though, is that it seems as though Google might kill the service. Making matters worse is the fact that Google has done exactly nothing to illuminate the issue one way or the other, playing it close to the vest. "Don't be evil, but you can be a little shady if you want."

Other bloggers seem to be doing the same, and a FeedBurner bug that started yesterday, telling all users that we've lost all of our subscribers, is turning the trickle of departing FeedBurner users into an exodus. Well, one of my philosophies in life is to depend as little as possible on other people and things, especially shady people and things. So I am going to be hopping on that bandwagon and switching to FeedBlitz effective today or tomorrow (allowing enough time for this one last post to go out via FeedBurner). The good news is that this new service apparently allows readers to subscribe to posts through other media, such as receiving a tweet or a Facebook/Google+ post rather than the traditional email or RSS notification. Progress and all that.

I have already created my FeedBlitz account, and the migration guide from FeedBurner to this new service assures me that not one subscription will be interrupted… but if your subscription is interrupted, that's why. I intend to have a post up Sunday night or Monday morning, and if you don't see something by then and want to continue reading, you may need to resubscribe through your RSS reader. Thanks for reading and for your patience!

Update: If you are subscribed directly to the FeedBurner URL for the blog or for comments, please re-subscribe using the buttons on the right. These will automatically redirect you to the new, current FeedBlitz feed, and you can delete your old subscription. If you are subscribed through my site directly, you need not do anything. Feel free to email/comment if you aren't sure.


Regions Are a Code Smell

There's a blogger named Iris Classon who writes a series of posts called "stupid questions," in which she essentially posts a discussion-fueling question once per day. Recently, I noticed one there called "Should I Use Regions in my Code?", and the discussion that ensued seemed to center on whether regions are helpful for organizing code or whether they constitute a code smell.

For those of you who are not C# developers, a C# region is a preprocessor directive that the Visual Studio IDE (and probably others) uses to provide entry points for collapsing and hiding code. Eclipse lets you collapse methods, and so does VS, but VS also gives you this way of pre-defining larger segments of code to collapse. Here is a before and after look:
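
In source form, the "before" is just a matched pair of directives wrapped around whatever you want to be collapsible. Here's a minimal, made-up example (the members are hypothetical), with the collapsed "after" described in the trailing comment:

#region Public Methods

public void Save()
{
    // ...
}

public void Delete()
{
    // ...
}

#endregion

// Collapsed, the entire block above displays as a single line in the editor,
// labeled "Public Methods".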

(As an aside, if you use CodeRush, it makes the regions look much nicer when not collapsed — see image at the bottom of the post)

I have an odd position on this. I've gotten used to the practice because it's simply been the coding standard on most projects I've worked on in the last few years, and so I find myself doing it habitually even when working on my own stuff. But I definitely think they're a code smell. Actually, let me refine that: I think regions are more of a code deodorant or code cologne. When we get up in the morning, we shower and put on deodorant and maybe cologne/perfume before going about our daily business (most do, anyway). And not to be gross or cynical, but the reason we do this is that we kind of expect that we're going to naturally gravitate toward stinking throughout the day, so we're engaging in some preventative medicine.

This is how I view regions in C# code, in a sense. Making them a coding standard or best practice of sorts is like teaching your children (developers, in the metaphor) that not bathing is fine, so long as they religiously wear cologne. So, in the coding world, you’re saying to developers, “Put in your regions first so that I don’t have to look at your unwieldy methods, haphazard organization and gigantic classes once you inevitably write them.” You’re absolving them of the responsibility for keeping their code clean by providing and, in fact, mandating a way to make it look clean without being clean.

So how do I justify my hypocrisy on this subject of using them even while thinking that they tend to be problematic? Well, at the heart of it lies my striving for Clean Code, following SRP, small classes, and above all, TDD. When you practice TDD, it’s pretty hard to write bloated classes with lots of overloads, long methods and unnecessary state. TDD puts natural pressure on your code to stay lean, compact and focused in much the same way that regions remove that same pressure. It isn’t unusual for me to write classes and region them and to have the regions with their carriage returns before and after account for 20% of the lines of code in my class. To go back to the hygiene metaphor, I’m like someone who showers quite often and doesn’t sweat, but still wears deodorant and/or cologne. I’m engaging in a preventative measure that’s largely pointless but does no harm.

In the end, I have no interest in railing against regions. I don't think that people who make messes and use regions to hide them are going to stop making messes if you tell them to stop using regions. I also don't think using regions is inherently problematic; it can be nice to be able to hide whole conceptual groups of methods that don't interest you at the moment when looking at a class. But I definitely think it bears mentioning that, from a maintainability perspective, regions do not make your 800- or 8,000-line classes any less awful and smelly than they would be in a language without regions.


Changing My Personal Coding Standards

Many moons ago, I blogged about creating a DXCore plugin and admitted that one of my main motivations for doing this was to automate conversion of my code to conform to a standard that I didn't particularly care for. One of the conversions I introduced, as explained in that post, is that I like to prepend "my" as a prefix on local, method-level variables to distinguish them from method parameters (they're already distinguished from class fields, which are prefixed with an underscore). I think my reasoning here was and continues to be solid, but I also think it's time for me to say goodbye to this coding convention.

It will be tough to do, as I've been in the habit of doing this for years. But after a few weeks, or perhaps even days, I'm sure I'll be used to the new way of doing things. But why do this? Well, I was mulling over a problem in the shower the other day when the idea first took hold. Lately, I've been having a problem where the "my" creeps its way into method parameters, thus completely defeating the purpose of the convention. This happens because, over the last couple of years, I've been relying ever more heavily on the "extract method" refactoring. Between CodeRush making this very easy and convenient, the Uncle Bob, clean-coding approach of "refactor 'til you drop," and my preference for TDD, I constantly refactor methods, and this often results in what were local variables becoming method parameters while retaining their scope-describing (and now misleading) "my" prefix.
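
To illustrate with made-up code, the prefix that made sense on a local stops making sense the moment "extract method" promotes it to a parameter:

// Before: "my" marks a method-level local, as distinct from the "price" and
// "quantity" parameters and from fields, which carry an underscore prefix.
public decimal ComputeTotal(decimal price, int quantity)
{
    var mySubtotal = price * quantity;
    var myTax = mySubtotal * 0.05m;
    return mySubtotal + myTax;
}

// After extracting the tax calculation: mySubtotal is now a parameter of the
// new method, but the prefix tags along and implies it is still a local.
public decimal ComputeTotal(decimal price, int quantity)
{
    var mySubtotal = price * quantity;
    return mySubtotal + ComputeTax(mySubtotal);
}

private decimal ComputeTax(decimal mySubtotal)
{
    return mySubtotal * 0.05m;
}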

What to do? My first thought was “just be diligent about renaming method parameters”, but that clearly violates my philosophy that we should make the bad impossible. My second thought was to write my own refactoring and perform some behind-the-scenes black magic with the DXCore libraries, but that seems like complete overkill (albeit a fun thing to do). My third thought bonked me in the head seemingly out of nowhere: why not just stop using “my”?

I realized that the reasons I had for using it had been slowly phased out by changes in my approach to coding. I wanted to be able to tell instantly what scope a variable had just by looking at it, but that's pretty easy to do regardless of what you name things when you're writing methods that are only a few lines long. It also becomes easy to tell the scope of things when you give longer, more descriptive names to locals instead of using constants. And techniques like command-query separation make it rare that you need to worry about the scope of something before you alter it, since the small method's purpose (alteration or querying) should be readily apparent. Add to that the fact that other people I collaborate with at times seem not to be fans of this practice, and the reasons to do it have all kind of slipped away for me, except for the "I've always done it that way" reason, which I abhor.

The lesson here for me and hopefully for anyone reading is that every now and then, it’s a good idea to examine something you do out of pure habit and decide whether that thing makes sense any longer. The fact that something was once a good idea doesn’t mean that it always will be.


There is No Such Thing as Waterfall

Don’t try to follow the waterfall model. That’s impossible. Instead, only try to realize the truth: there is no waterfall.

Waterfall In Practice Today

I often see people discuss, argue, and debate the best approach, or type of approach, to software development. Most commonly, this involves a discussion of the merits of iterative and/or agile development versus the "more traditional waterfall approach." What you don't see nearly as commonly, but do see every now and then, is the observation that the whole "waterfall" approach is based on a pretty fundamental misunderstanding: the man (Royce) whose paper gave rise to the term and created the iconic diagram of the model was holding it up as a strawman to say (paraphrased) "this is how to fail at writing software — what you should do instead is iterate." Any number of agile proponents may point out things like this, and it isn't too hard to make the case that the waterfall development methodology is flawed and can be problematic. But I want to make the case that it doesn't even actually exist.

I saw a fantastic tweet earlier from Energized Work that said “Waterfall is typically 6 months of ‘fun up front’, followed by ‘steady as she goes’ eventually ramping up to ‘ramming speed’.” This is perfect because it indicates the fundamental boom and bust, masochistic cycle of this approach. There is a “requirements phase” and a “design phase”, which both amount basically to “do nothing for a while.” This is actually pretty relaxing (although frustrating for ambitious and conscientious developers, as this is usually unstructured-unstructured time). After a few weeks or months or whatever of thumb-twiddling, development starts, and life is normal for a few weeks or months while the “chuck it over the wall” deadline is too far off for there to be any sense of how late or bad the software will turn out to be. Eventually, though, everyone starts to figure out that the answers to those questions are “very” and “very”, respectively, and the project kicks into a death march state and “rams” through the deadline over budget, under-featured, and behind schedule, eventually wheezing to some completion point that miraculously staves off lawyers and lawsuits for the time being.

This is so psychically exhausting to the team that the only possible option is 3 months of doing nothing, er, excuse me, requirements and design phase for the next project, to rest. After working 60 hour weeks and weekends for a few weeks or months, the developers on the team look forward to these “phases” where they come in at 10 AM, leave at 4 PM, and sit around writing “shall” a lot, drawing on whiteboards, and googling to see if UML has reached code generation Shangri La while they were imprisoned in their cubicles for the last few months. Only after this semi-vacation are they ready to start the whole chilling saga again (at least the ones that haven’t moved on to greener pastures).

Diving into the Waterfall

So, what actually happens during these phases, in a more detailed sense, and what right have I to be so dismissive of requirements and design phases as non-work? Well, I have experienced what I'm describing firsthand on any number of occasions and found that most of my time was spent waiting and trying to invent useful things to do (if not supporting the previous release), but I realize that anecdotal evidence is not universally compelling. What I do consider compelling is that after these weeks or months of "work," you have exactly nothing that will ever be delivered to your end users. Oh, you've spent several months planning, but when was the last time anyone worked at planning something anywhere near as hard as at actually doing it? When you were a high school or college kid given class time to make "idea webs" and "outlines" for essays, how often was that time spent diligently working, and how often was it spent planning what to do next weekend? It wasn't until actual essay-writing time that you stopped screwing around. And, while we like to think that we've grown up a lot, there is a very natural tendency to regress developmentally when confronted with weeks of time after which no real deliverable is expected. For more on this, see that ambitious side project you've really been meaning to get back into.

But the interesting part of this isn’t that people will tend to relax instead of “plan for months” but what happens when development actually starts. Development starts when the team “exits” the “design phase” on that magical day when the system is declared “fully designed” and coding can begin. In a way, it’s like Christmas. And the way it’s like Christmas is that the effect is completely ruined in the first minute that the children tear into the presents and start making a mess. The beautiful design and requirements become obsolete the second the first developer’s first finger touches the first key to add the first character to the first line of code. It’s inevitable.

During the “coding phase”, the developers constantly go to the architect/project manager/lead and say “what about when X happens — there’s nothing in here about that.” They are then given an answer and, if anyone has time, the various SDLC documents are dutifully updated accordingly. So, developers write code and expose issues, at which time requirements and design are revisited. These small cycles, iterations, if you will, continue throughout the development phase, and on into the testing phase, at which time they become more expensive, but still happen routinely. Now, those are just small things that were omitted in spite of months of designing under the assumption of prescience — for the big ones, something called a “change request” is required. This is the same thing, but with more emails and word documents and anger because it’s a bigger alteration and thus iteration. But, in either case, things happen, requirements change, design is revisited, code is altered.

Whoa. Let's think about that. Once the coding starts, the requirements and design artifacts are routinely changed and updated, the code changes to reflect that, and then incremental testing is (hopefully) done. That doesn't sound like a "waterfall" at all. That sounds pretty iterative. The only thing missing is involving the stakeholders. So, when you get right down to it, "waterfall" is just dysfunctional iterative development where the first two months (or whatever) are spent screwing around before getting to work, where iterations are undertaken and approved internally without stakeholder feedback, and where delivery is generally late and over budget, probably by an amount in the neighborhood of the amount of time spent screwing around in the beginning.

The Take-Away

My point here isn’t to try to persuade anyone to alter their software development approach, but rather to try to clarify the discussion somewhat. What we call “waterfall” is really just a specifically awkward and inefficient iterative approach (the closest thing to an exception I can think of is big government projects where it actually is possible to freeze requirements for months or years, but then again, these fail at an absolutely incredible rate and will still be subject to the “oh yeah, we never considered that situation” changes, if not the external change requests). So there isn’t “iterative approach” and “waterfall approach”, but rather “iterative approach” and “iterative approach where you procrastinate and scramble at the end.” And, I don’t know about you, but I was on the wrong side of that fence enough times as a college kid that I have no taste left for it.