DaedTech

Stories about Software


TDD and CodeRush

TDD as a Practice

The essence of Test Driven Development (TDD) can be summarized most succinctly as “red, green, refactor”. Following this practice will tend to make your code more reliable, cleaner, and better designed. It is no magic bullet, but the you that does TDD is a better programmer than the you that doesn’t. However, a significant barrier to adoption of the practice is the (justifiable) perception that you will go slower when you start doing it. This is true in the same way that sloppy musicians can get relatively proficient while being sloppy, but if they really want to excel, they have to slow down, perfect their technique, and gradually speed things back up.

My point here is not to proselytize for TDD. I’m going to make the a priori assumption that testing is better than not testing and that TDD is better than not doing TDD, and leave it at that. My point here is that making TDD faster and easier would grease the skids for its broader adoption and continued practice.

My Process with MS Test

I’ve been unit testing for a long time, testing with TDD for a while, and also using CodeRush for a while. CodeRush automates all sorts of refactorings and generally makes development in Visual Studio quicker and more keyboard driven. As much as I love it and stump for it, though, I had never gotten around to using its test runner until recently.

I saw the little test tube icons next to my tests and figured, “why bother with that when I can just hit Ctrl-R, T to execute my MS Test tests in scope?” But a couple of weeks ago, I tried it on a lark, and I decided to give it a fair shake. I’m very glad that I did. The differences are subtle, but powerful, and I find myself a more efficient TDD practitioner for it.

Here is a screenshot of what I would do with the Visual Studio test runner:

I would write a unit test, and hit Ctrl-R, T, and the Visual Studio test results window would pop up at the bottom of my screen (I auto-hide almost everything because I like a lot of real estate for looking at code). For the first run, the screenshot there would be replaced with a single failing test. Then, I would note the failure, open the class and make changes, switch back to the unit test file, and run all of the tests in the class, producing the screenshot above. When I saw that this had worked, I would go back to the class and look either to refactor or for another failing test to add for fleshing out my class further.

So, to recap, write a test, Ctrl-R, T, pop-up window, dismiss window, switch to another file, make changes, switch back, Ctrl-R, T, pop-up window, dismiss pop-up window. I’m very good at this in a way that one tends to be good at things through large amounts of practice, so it didn’t seem inefficient.

The test results window itself is also awkward for finding your tests if you run more than just a few. They appear in no particular order, so locating particular ones involves sorting by one of the columns in the grid. There tends to be a lot of noise this way.

Speeding It Up With CodeRush

Now, I have a new process, made possible by the way CodeRush shows test pass/fail:

Notice the little green checks on the left. In my new process, I write a test, hit Ctrl-T, R, and note the failure. I then switch to the other class and make it pass, at which time, I hit Ctrl-T, L (which I have bound to “repeat last test run”) before going back to the test class. As that runs, I switch to the test class and see that my test now passed (I can also easily run all tests in the file if I choose). Now, it’s time to write my next test.

So, to recap here, it’s write a test, Ctrl-T, R, switch to another file, make changes, Ctrl-T, L, switch back. I’ve completely eliminated dealing with a pop-up window and created a situation where my tests are running as I’m switching between windows (multi-tasking at its finest).

In terms of observing results, this has a distinct advantage as well. Instead of the results viewer, I can see green next to the actual code. That’s pretty powerful because it says “this code is good” rather than “this entry is good, and if you double click it, you can see which code is good”. And, if you want to see a broader view than the tests, you can see green next to the class and the namespace. Or, you can launch the CodeRush test runner and see a hierarchical view that you can drill into, rather than a list that you can awkwardly sort or filter.

Does This Really Matter?

Is shaving off this small an amount of time worth it? I think so. I run tests dozens, if not hundreds, of times per day. And, as any good programmer knows, shaving time off of your most commonly executed tasks is at the heart of optimizing a process. And, if we can shave time off of a process that people, for some reason, view as a luxury rather than a mandate, perhaps we can remove a barrier to adopting what should be considered a best practice — heck, a prerequisite for being considered a professional, as Uncle Bob would tell you.


Poor Man’s Automoq in .NET 4

So, the other day I mentioned to a coworker that I was working on a tool that would wrap moq and provide expressive test double names. I then mentioned the tool AutoMoq, and he showed me something that he was doing. It’s very simple:

    [TestClass]
    public class FeatureResetServiceTest
    {
        #region Builders

        private static FeatureResetService BuildTarget(IFeatureLocator locator = null)
        {
            var myLocator = locator ?? new Mock<IFeatureLocator>().Object;
            return new FeatureResetService(myLocator);
        }

        #endregion

        /// <summary>If you're going out of your way to pass null instead of empty, something is wrong</summary>
        [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
        public void ResetToDefault_Throws_NullArgumentException()
        {
            var myService = BuildTarget();

            ExtendedAssert.Throws<ArgumentNullException>(() => myService.ResetFeaturesToDefault(null));
        }
    }

The concept here is very simple. If you’re using a dependency injection scheme (manual or IoC container), your classes may evolve to need additional dependencies, which sucks if you’re instantiating the class under test inline in every test method. That means engaging in a flurry of find/replace, which is a deterrent to changing your constructor signature, even when that’s the best way to go.

This solution, while not perfect, definitely eliminates that in a lot of cases. The BuildTarget() method takes an optional parameter (hence the requirement for .NET 4) for each constructor parameter, and if said parameter is not supplied, it creates a simple mock.
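
For instance, if FeatureResetService later picked up a second dependency (I’ll invent a hypothetical IFeatureValidator purely for illustration), only the builder would need to change; existing tests that call BuildTarget() with no arguments keep compiling and passing:

        private static FeatureResetService BuildTarget(
            IFeatureLocator locator = null,
            IFeatureValidator validator = null) // hypothetical new dependency
        {
            var myLocator = locator ?? new Mock<IFeatureLocator>().Object;
            var myValidator = validator ?? new Mock<IFeatureValidator>().Object;
            return new FeatureResetService(myLocator, myValidator);
        }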

There are some obvious shortcomings (if you remove a dependency from your constructor, for instance, you still have to go through your tests and remove any extra setup code for that dependency), but this is still much better than the old-fashioned way.

I’ve now adopted this practice where suitable and am finding that I like it.

(ExtendedAssert is a utility that I wrote to address what I consider shortcomings in MSTest).


Poor Man’s Code Contracts

What’s Wrong with Code Contracts?!?

Let me start out by saying that I really see nothing wrong with code contracts, and what I’m offering is not intended as any kind of replacement for them in the slightest. Rather, I’m offering a solution for a situation where contracts are not available to you. This might occur for any number of reasons:

  1. You don’t know how to use them and don’t have time to learn.
  2. You’re working on a legacy code base and aren’t able to retrofit wholesale or gradually.
  3. You don’t have approval to use them in a project to which you’re contributing.

Let’s just assume that one of these, or some other consideration I hadn’t thought of, is true.

The Problem

If you’re coding defensively and diligent about enforcing preconditions, you probably have a lot of code like this:

public void DoSomething(Foo foo, Bar bar)
{
  if(foo == null)
  {
    throw new ArgumentNullException("foo");
  }
  if(bar == null)
  {
    throw new ArgumentNullException("bar");
  }

  //Finally, get down to business...
}

With code contracts, you can compact that guard code and make things more readable:

public void DoSomething(Foo foo, Bar bar)
{
  Contract.Requires(foo != null);
  Contract.Requires(bar != null);

  //Finally, get down to business...
}

I won’t go into much more detail here — I’ve blogged about code contracts in the past.

But, if you don’t have access to code contracts, you can achieve the same thing, with even more concise syntax.

public void DoSomething(Foo foo, Bar bar)
{
  _Validator.VerifyParamsNonNull(foo, bar);

  //Finally, get down to business...
}

The Mechanism

This is actually pretty simple in concept, but it’s something that I’ve found myself using routinely. Here is an example of what the “Validator” class looks like in one of my code bases:

    public class InvariantValidator : IInvariantValidator
    {
        /// <summary>Verify a (reference) method parameter as non-null</summary>
        /// <param name="argument">The parameter in question</param>
        /// <param name="message">Optional message to go along with the thrown exception</param>
        public virtual void VerifyNonNull<T>(T argument, string message = "Invalid Argument") where T : class
        {
            if (argument == null)
            {
                throw new ArgumentNullException("argument", message);
            }
        }

        /// <summary>Verify a parameters list of objects</summary>
        /// <param name="arguments">The parameters to verify</param>
        public virtual void VerifyParamsNonNull(params object[] arguments)
        {
            VerifyNonNull(arguments);

            foreach (object myParameter in arguments)
            {
                VerifyNonNull(myParameter);
            }
        }

        /// <summary>Verify that a string is not null or empty</summary>
        /// <param name="target">String to check</param>
        /// <param name="message">Optional parameter for exception message</param>
        public virtual void VerifyNotNullOrEmpty(string target, string message = "String cannot be null or empty.")
        {
            if (string.IsNullOrEmpty(target))
            {
                throw new InvalidOperationException(message);
            }
        }
    }

Pretty simple, huh? So simple that you might consider not bothering, except…

Except that, for me personally, anything that saves lines of code, cuts repeat typing, and reduces cyclomatic complexity is good. I’m very meticulous about that. Think of every place in your code base that you have an if(foo == null) throw paradigm: each one adds a point of cyclomatic complexity. That cost is O(n) in the number of methods in your code base. Contrast that with a total of 1 in a code base that uses the validator. Not O(1), but actually 1.

I also find that this makes my methods substantially more readable at a glance, partitioning each method effectively into guard code and what you actually want to do. The vast majority of the time, you don’t care about the guard code and don’t really have to think about it. It no longer even briefly occupies your thoughts while you figure out where the actual guts of the method are. You’re used to seeing a precondition/invariant one-liner at the start of a method, and you skip right past it unless it’s the source of your issue, in which case you inspect it.

I find that kind of streamlined context valuable. There’s a clear place for the guard code and a clear place for the business logic, and I’m used to seeing them separated.

Cross-Cutting Concerns

Everything I said above is true of Code Contracts as well as my knock-off. Some time back, I did some research on Code Contracts, and during the course of that project, we devised a way to have Code Contracts behave differently in debug mode (throwing exceptions) than in release mode (supplying sensible defaults). This was part of an experimental effort to wrap simple C# classes and create versions that offer the “no throw guarantee”.

But Code Contracts work through explicit static method calls. With this interface-based validator, I can use an IoC container to define run-time configurable, cross-cutting behavior on precondition/invariant violations. That creates a powerful paradigm where, in some cases, I can throw exceptions; in other cases, I can log and throw; or, in still other cases, I can do something crazy like pop up message boxes. The particulars don’t matter so much as the ability to plug in a behavior at configuration time and have it cross-cut throughout the application. (Note: this is only possible if you make your validator an injectable dependency.)
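
As a sketch of what I mean (the LoggingInvariantValidator name and the logging delegate are just inventions for this example, building on the InvariantValidator above), a swapped-in behavior might look like this, with the actual registration left to whatever container you use:

    using System;

    public class LoggingInvariantValidator : InvariantValidator
    {
        private readonly Action<string> _log; // hypothetical logging hook

        public LoggingInvariantValidator(Action<string> log)
        {
            _log = log;
        }

        // Log the violation, then defer to the base behavior (throwing).
        public override void VerifyNonNull<T>(T argument, string message = "Invalid Argument")
        {
            if (argument == null)
            {
                _log("Precondition violated: " + message);
            }

            base.VerifyNonNull(argument, message);
        }
    }

Because consumers depend only on IInvariantValidator, wiring this class in at composition time changes the precondition behavior everywhere at once, with no edits to the consuming methods.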

Final Thoughts

So, I thought that was worth sharing. It’s simple — perhaps so simple as to be obvious — but I’ve gotten a lot of mileage out of it in scenarios where I couldn’t use contracts, and sometimes I find myself even preferring it. There’s no learning curve, so other people don’t look at it and say things like “where do I download Code Contracts” or “what does this attribute mean?” And, it’s easy to fully customize. Of course, this does nothing about enforcing instance level invariants, and to get the rich experience of code contracts, you’d at the very least need to define some kind of method that accepted a delegate type for evaluation, but this is, again, not intended to replace contracts.
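
For what it’s worth, the delegate-based method I have in mind would be a small addition to the validator, something like this (again, just a sketch):

        /// <summary>Verify an arbitrary condition, in the spirit of Contract.Requires</summary>
        public virtual void Verify(Func<bool> condition, string message = "Precondition violated")
        {
            VerifyNonNull(condition, "Condition delegate cannot be null");

            if (!condition())
            {
                throw new InvalidOperationException(message);
            }
        }

A call site would then read something like _Validator.Verify(() => amount > 0, "Amount must be positive"), which keeps the guard-code-then-business-logic separation intact.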

Just another tool for your arsenal, if you want it.


GitHub and Easy Moq

GitHub

Not too long ago, I signed up for GitHub. I had always used Subversion for source control when given the choice, but the distributed nature of Git appealed to me, particularly since I often do work on a variety of different machines. GitHub itself appealed to me because it seemed like the perfect venue for reusable code that I’ve created over the years and tend to bring with me from project to project. It rests on a unique idea – sites like SourceForge want to store code by the basic unit of “project”, whereas GitHub wants code for the sake of code. This code need not be finished or polished or distributed on any kind of release schedule. That is perfect for a lot of my portable code.

One thing that I noticed about GitHub upon arrival was the lack of .NET work there, compared with other technologies. There is some, but not a lot. Presumably this is because sites like Code Project already offer this kind of thing to the .NET community. After reading about Phil Haack leaving Microsoft to be GitHub’s Windows Ambassador, I decided that this was a good indicator that GitHub was serious about growing that portion of its user base, and that I’d start porting some of my code there.

Another thing that I’d read about git before trying it out was that it has a steep learning curve. To back my general impression of this as a widely held opinion, I did a quick google search and ran across this post.

Given the steep learning curve many (including myself) experience with Git, I think I can safely assume that many other people have this sort of problem with it.

So, there’s at least one person who has experienced a steep learning curve and at least one other person with the same general sense as me about many finding the learning curve steep.

I must say, I haven’t found this to be the case. Perhaps it’s my comfort with Linux command line interaction, or perhaps it’s my years of experience using and administering SVN, but whatever it is, I read the GitHub primer and was up and running. And I don’t mean that I was up and running with commits to my local repository; I was up and running committing, pushing to GitHub, and pulling that source to a second computer, where I was able to build and run it. Granted, I haven’t forked/branched/merged yet, but after a day of using it, I was at the point where the source control system retreated into the mental background where it belongs, serving as a tool I take for granted rather than a mental obstacle. (By contrast, even with a couple of years of experience with Rational ClearCase, I still have to set aside extra development time to get it to do what I want, but I digress, since I don’t believe that’s a simple case of needing to mount a learning curve.)

So, after a day, I’m developing according to my schedule and running a commit when I feel like taking a break and/or a push when I want to update GitHub’s version. I’m sure I’ll be hitting Google or StackOverflow here and there, but so far so good.

Easy Moq

So, what am I doing with my first GitHub project? It’s something that I’m calling Easy Moq. I use Moq as my main mocking framework, as I’ve mentioned before, and I’ve had enough time with it to find that I have a certain set of standard setup patterns. Over the course of time, I’ve refined these to the point where they become oft-repeated boilerplate, and I’ve abstracted this boilerplate into various methods and other utilities that I use. Up until now, this has varied a bit as I go, and I email it to myself or upload the classes to my web server or something like that.

But, I’m tired of this hodgepodge approach, and this seems like the perfect use case for trying out GitHub. So, I’m looking over my various projects and my use of Moq and creating my own personal Easy Moq library. If others find it useful, so much the better – please feel free to keep up with it, fork it, add to it, etc.

At the time of writing, I have three goals in mind (these may change as I TDD my way through and start eating my own dog food on real projects). In no particular order:

  • Create Mock<T> inheritors that default to specific test double behavior – dummies, stubs, spies, etc.
  • Be able to create and use my doubles with semantics like new Dummy<T>() or new Spy<T>() rather than new Mock<T>().Object
  • Be able to generate an actual class under test with all dependencies mocked as Dummy, Stub, Spy, etc.

For the first goal, I’ll refer to Niraj’s take on the taxonomy of test doubles. I haven’t yet decided exactly how I’ll define the particular doubles, but the ideas in that post are good background for my aims. Up to this point, if I want to create one of these things with Moq, I instantiate the double the same way every time and only the setup differs. I’d like the declaration semantics themselves to be expressive — declaring a DummyMock communicates your intent with the test better than creating a Mock and doing nothing else to set it up. For more complicated doubles, this also has the pleasant effect of eliminating manual boilerplate like “SetupAllProperties()”.
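
To make that first goal concrete, the rough shape I’m picturing is a set of trivial Mock<T> inheritors, along these lines (names and behaviors are provisional and will likely change as I go):

    using Moq;

    /// <summary>A dummy: it gets passed around but never meaningfully used, so no setup at all.</summary>
    public class DummyMock<T> : Mock<T> where T : class
    {
    }

    /// <summary>A stub: canned property behavior for the class under test, without hand-written boilerplate.</summary>
    public class StubMock<T> : Mock<T> where T : class
    {
        public StubMock()
        {
            SetupAllProperties(); // the setup I'd rather not repeat in every test
        }
    }

A test that news up a DummyMock<IFeatureLocator> says at a glance that the dependency exists only to satisfy the constructor.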

For the second goal, this is probably lowest priority, but I’d like to be able to think of the doubles as entities unto themselves rather than wrappers that contain the entity that I want. It may seem like a small thing, but I’m a bit of a stickler (fanatic at times) for clear, declarative semantics.

For the third part, it’d be nice to add a dependency to a class and not have to go back and slaughter test classes with “Find and Replace” to change all calls to new Foo(bar) to new Foo(bar, baz), along with Baz’s setup overhead, particularly when those old tests, by definition, do not care about Baz. I realize that there is already a tool designed to do this (called AutoMoq), but it appears to be for use specifically with Unity. And besides, this is more about me organizing and standardizing my own existing code as well as getting to know GitHub.
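
To give a flavor of that third goal, here’s a naive, untested sketch of how a builder might mock every constructor dependency via reflection; a real version would need to handle value-type and concrete parameters, constructor selection, and plenty of other edge cases:

    using System;
    using System.Linq;
    using Moq;

    public static class EasyMoqBuilder
    {
        /// <summary>Build an instance of T with every constructor dependency defaulted to a mock.</summary>
        public static T BuildWithMockedDependencies<T>() where T : class
        {
            // Pick the greediest public constructor, the way most IoC containers do.
            var constructor = typeof(T).GetConstructors()
                .OrderByDescending(c => c.GetParameters().Length)
                .First();

            var arguments = constructor.GetParameters()
                .Select(parameter => BuildMockObject(parameter.ParameterType))
                .ToArray();

            return (T)constructor.Invoke(arguments);
        }

        private static object BuildMockObject(Type dependencyType)
        {
            // The runtime equivalent of new Mock<TDependency>().Object.
            var mockType = typeof(Mock<>).MakeGenericType(dependencyType);
            var mock = (Mock)Activator.CreateInstance(mockType);
            return mock.Object;
        }
    }

With something like that in place, adding a dependency to Foo wouldn’t touch any test that builds its target through the builder.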

So, that’s my first pet project contribution to GitHub. If anyone gets any mileage out of it, cheers. 🙂


You’re Doin’ It Wrong

I was chatting with someone the other day about some code that he had written, and he expressed that he wished he had done it another way. He said that there was a bunch of duplication and that, while he wasn’t sure what he should have done, he didn’t think it seemed quite right. I took a look at the code and was inclined to agree, though I could think of how I might have approached it.

This led me to think a little more broadly about writing code that you feel not quite right about. One might simply call this a “code smell”, but I think a distinction can be made. I personally associate code smells more with code from which you are somewhat removed, either because someone else wrote it or because you wrote it some time back. That is, you wouldn’t write code that “smelled” to you if it smelled as you were writing it (hopefully).

So, what I’m documenting here is not so much ways to look at code and “smell” that something is wrong, but rather a series of heuristics to keep in mind during development to know that you’re probably doing something wrong. Code smells are more macroscopic in nature with things like “parallel inheritance hierarchy” and “anemic domain model”. These are practices that, as I’ve learned from experience, seem to lead to bad code.

Typing the same thing again and again

The incident that made me think of this involved a piece of code that looked something like this:

public class SomeViewModel
{
    int CustomerId1 { get; set; }
    string CustomerName1 { get; set; }
    int CustomerId2 { get; set; }
    string CustomerName2 { get; set; }
    int CustomerId3 { get; set; }
    string CustomerName3 { get; set; }
    ....
    
    public void GetCustomers()
    {
        CustomerId1 = _SomeService.GetCustomer(1).Id;
        CustomerName1 = _SomeService.GetCustomer(1).Name;
        ...
     }

     ....
}

Now, there are a few things going on here that could certainly be improved upon, but the first one that jumps out to me is that if you’re appending numbers to fields or properties in a class, you’re probably doing something wrong. We could certainly take our IDs and names and turn them into arrays or lists or collections and iterate over them. But, even doing that, you’d have a bunch of parallel collections and parallel loops over them:

public class SomeViewModel
{
    List<int> CustomerIds { get; set; }
    List<string> CustomerNames { get; set; }
    
    public void GetCustomers()
    {
        CustomerIds = _SomeService.GetCustomers().Select(c => c.Id).ToList();
        CustomerNames = _SomeService.GetCustomers().Select(c => c.Name).ToList();
    }
}

Each place that you want to do something with customer IDs, you’ll also have to do it with names. This gets particularly ugly if you decide you want to sort the lists or something like that — now you have to make sure both lists are rearranged the same way; you take on responsibility for guaranteeing that your multiple-list implementation remains consistent. This is repetitive still. Better to create a customer model struct with name and ID properties, or something along those lines, and expose a list (or observable collection or whatever) of that.
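
For instance, something along these lines (a rough sketch reusing the illustrative _SomeService from above) keeps each ID welded to its name and leaves only one collection to maintain:

public struct CustomerModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SomeViewModel
{
    public List<CustomerModel> Customers { get; set; }

    public void GetCustomers()
    {
        // One projection, one list: sorting or filtering can't knock IDs and names out of sync.
        Customers = _SomeService.GetCustomers()
            .Select(c => new CustomerModel() { Id = c.Id, Name = c.Name })
            .ToList();
    }
}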

The heuristic I can offer here is that if you’re typing the same thing over and over again, you’re probably doing it wrong. This is an offshoot of the DRY (Don’t Repeat Yourself) principle, but ironically, I think it bears some repeating here. DRY doesn’t simply mean that you shouldn’t copy and paste code, which is what many people remember. It doesn’t even mean that you shouldn’t duplicate code. Dave Thomas calls it something “much grander”:

DRY says that every piece of system knowledge should have one authoritative, unambiguous representation. Every piece of knowledge in the development of something should have a single representation. A system’s knowledge is far broader than just its code. It refers to database schemas, test plans, the build system, even documentation.

There should be no duplication of knowledge in the project, period. As such, typing the same thing over and over again has the whiff of duplication, even if you aren’t copy/pasting. If you perform a series of operations on customer ID 1 and then customer name 1, and then customer ID 2, and then customer Name 2, etc, you’re repeating that operation over and over again, albeit with a nominally different target. But, what if in the database or lower layers of memory, I switch customers 1 and 2? The 1 operation is then done on 2 and vice versa. These operations are interchangeable – we have duplicated knowledge. And, duplicated knowledge eventually leads to things getting out of sync as the system evolves, which means maintenance headaches, code rot, and defects.

Making a private member public and/or an instance method static

Let’s say that we have the following classes:

public class Customer : INotifyPropertyChanged
{
   private int _id;
   public int Id { get { return _id; } set { _id = value; NotifyChange(() => Id); } }

   private string _name;
   public string Name { get { return _name; } set { _name = value; NotifyChange(() => Name); } }

  //PropertyChange Elided
}

public class CustomerProcessor
{
    private int _processedCount;

    public void ProcessCustomers(IEnumerable<Customer> customers)
    {
        if(customers != null)
        {
            customers.ToList().ForEach(cust => this.DoSomethingToCustomer(cust));  
            _processedCount += customers.Count();
        }
    }

    public bool IsFinished()
    {
        return _processedCount > 12;
    }
}

Now, let’s say that the GUI is bound directly to the customer object, and that a new requirement comes in – we want to display customer stuff in black, unless most of the customers have been processed (say, 3/4s of them), in which case we want to display them in red — for all customers. Let’s also assume that we have some kind of value converter that converts a boolean to red/black for binding. Here’s how someone might accomplish this:

public class Customer : INotifyPropertyChanged
{
   private int _id;
   public int Id { get { return _id; } set { _id = value; NotifyChange(() => Id); } }

   private string _name;
   public string Name { get { return _name; } set { _name = value; NotifyChange(() => Name); } }

   public bool IsRed { get { return CustomerProcessor._processedCount > 9; } }

   //PropertyChange Elided
}

public class CustomerProcessor
{
    public static int _processedCount; //This is now a public global variable

    public void ProcessCustomers(IEnumerable<Customer> customers)
    {
        if(customers != null)
        {
            customers.ToList().ForEach(cust => this.DoSomethingToCustomer(cust));  
            _processedCount += customers.Count();
        }
    }

    public bool IsFinished()
    {
        return _processedCount > 12;
    }
}

Don’t do that! That might be the quickest and dirtiest way to accomplish the task, but it sure is ugly. We break encapsulation by making an internal detail of the processor public, we create a global variable to get around the issue that the customer has no reference to the processor, and we’ve only created a half-way solution because the customer still has no mechanism for knowing when the processor’s count passes the magic number 9 – we’d need to expose a global event or callback so that the customer would know when to raise property changed on its color. This is also a concrete and circular coupling – CustomerProcessor and Customer now know all about one another and might as well be one static, global mashup.

Private members are private for a reason — they’re implementation details hidden from the code base at large. By having the magic number 12 and the processed count hidden in the processor, I can later change the scheme by which the processor evaluates “finished” without breaking anything else. But, the way I’ve mangled the code here, the requirement that “red” be 3/4s of the finished count is now leaked all over the place. If I go to the processor and decide that finished count is now 400 (or better yet, configurable and read from persistence), our new requirement is no longer satisfied, and as the maintenance programmer for CustomerProcessor, I have no way of knowing that (except maybe for the hint that someone exposed my private variable).

Likewise, instance members are instance members for a reason. If I later decide that I want two separate customer processors working in two separate screens, I’m now hosed with these changes. However many processors I create, if any of them process 12 records, they’re all finished.

Far too often, the reason for making private members public or instance members static is convenience and not wanting to think through dependencies. Both of those make the information that you want easier to get at. But I submit that if this is your solution for getting your objects to talk to one another, you’re likely creating a short-sighted and misguided solution. In our example above, exposing an “IsAlmostFinished()” instance member on the processor and then having the GUI bind to something that knows about the customers and their processor is a better solution. It requires a bit more planning, but you’ll be much happier in the long run.
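
In sketch form, what I have in mind looks something like this, with a (hypothetical) view model that already knows about both the customers and their processor serving as the binding target:

public class CustomerProcessor
{
    private int _processedCount;
    private const int FinishedCount = 12;

    // The "3/4s of finished" rule stays an implementation detail of the processor.
    public bool IsAlmostFinished()
    {
        return _processedCount > (FinishedCount * 3) / 4;
    }

    // ProcessCustomers() and IsFinished() as before...
}

public class CustomerScreenViewModel
{
    private readonly CustomerProcessor _processor;

    public CustomerScreenViewModel(CustomerProcessor processor)
    {
        _processor = processor;
    }

    // The GUI binds here instead of to a global reached through Customer.
    public bool IsRed { get { return _processor.IsAlmostFinished(); } }
}

Now the magic numbers live in exactly one place, Customer knows nothing about processing, and a second processor on a second screen gets its own count for free. (This glosses over raising change notification when the count crosses the threshold, which the quick-and-dirty version didn’t solve either.)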

You can’t describe a method in a simple sentence using its parameters and return value

“Too many parameters” is considered a code smell. But, how many is too many? Recently, I stumbled across a method that looked something like this:

public void DoSomething(bool shouldPerform)
{
   if(shouldPerform)
   {
       //A whole bunch of processing
   }
}

This method doesn’t have too many parameters at all, I wouldn’t think. It has exactly one parameter, which is basically the programmatic equivalent of the ubiquitous “Are you sure you want to…” message box. But, I couldn’t for the life of me summarize it in a simple sentence, and perhaps not a paragraph. It was something like “If the user passes in true, then find the hardware processor and hardware presenter, and if each of those isn’t null, find the database manager, but if the hardware presenter is null, then create a file, and…”

If you’re writing a method, do a thought exercise. If someone stopped by and said, “whatcha doin?”, could you look up at them and quickly say “I’m writing a method that…”? Or would you say, “go away and don’t break my concentration, or I’ll never remember what this thing was supposed to do”? If it’s the latter, you’re doing it wrong.

We can only process so much information at once. If you’re reading through someone else’s code or your own code from a while ago, methods that are small and focused are easy to understand and process mentally. Methods that have dozens of parameters or do dozens of things are not. It doesn’t matter how smart you are or how good at programming you are – when you encounter something like that, you’ll put your pen to the monitor and start tracing things or take that same pen and start taking notes or drawing diagrams. If you have to do that to understand what a method does, that method sucks.

Consider the following interfaces:


public interface ICustomerDao 
{
    void DeleteCustomer(Customer customer);
    void InsertCustomer(Customer customer);
    void UpdateCustomer(Customer customer);
    Customer GetCustomerById(int customerId);
    IEnumerable<Customer> GetAllCustomers();
}

public interface ICustomerService
{
    bool LogCustomerDetails(Customer customer);
    Customer MergeCustomers(Customer customer1, Customer customer2);
    Customer FindCustomer(IEnumerable<Customer> customersForSearch);
}

There are no method definitions at all – just signatures. But I’d be willing to bet that you can already tell me exactly what will happen when I call those methods, and I bet you can tell me in a simple, compact sentence. Wouldn’t it be nice if, when you were coding, you only encountered methods like that? Well, if you think so, do others and your future self a favor, and only define and code them like that.

Defining class behavior for one specific client

Remember that method that I showed above? The one with the “are you sure” boolean parameter? I’m pretty sure that was neither the author’s idea of a joke nor was it quite as silly as it looked. I’d wager dollars to donuts that this is what happened:


public class Unmaintainable
{
    public bool IsSomethingTrue() { ... }

    public void WayTooBig()
    {
        //....
        var myExtracted = new ExtractedFromTooBig();
        myExtracted.DoSomething(IsSomethingTrue());
    }
}

public class ExtractedFromTooBig
{
    public void DoSomething(bool shouldPerform)
    {
       if(shouldPerform)
       {
           //A whole bunch of processing
       }
    }
}

When looked at in this light, it makes sense. There was some concept that a class was too large and that concerns should be separated. Taken as a gestalt, the refactoring is reasonable, but looked at on its own, the extracted “DoSomething(bool)” looks pretty dumb.

While the effort to refactor to smaller units is laudable, creating an interface that only makes sense for one client is not so good. Your classes should make sense on their own, as units. If you extract a class that seems hopelessly coupled to another one, it’s probably time to reconsider the interface, or the “seam” between these two classes. In the case above, the boolean parameter need not exist. “IsSomethingTrue()” can exist in an if condition prior to invocation of the parameterless “DoSomething()” method. In general, if your class’s public interface doesn’t seem to make sense without another class, you’re probably doing it wrong. (This is frequently seen in “helper” classes, which I mentioned in this post about name smells).
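
To close the loop on the earlier example, the refactored seam could be as simple as this (same made-up names as above):

public class Unmaintainable
{
    public bool IsSomethingTrue() { ... }

    public void WayTooBig()
    {
        //....
        // The caller decides whether to act; the extracted class just does the work.
        if (IsSomethingTrue())
        {
            var myExtracted = new ExtractedFromTooBig();
            myExtracted.DoSomething();
        }
    }
}

public class ExtractedFromTooBig
{
    public void DoSomething()
    {
        //A whole bunch of processing
    }
}

ExtractedFromTooBig now makes sense on its own: DoSomething() does something, unconditionally, and any client can understand it without knowing who calls it.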