DaedTech

Stories about Software

Module Boundaries and Demeter

I was doing a code review recently, and I saw something like this:

public class SomeService
{
    public void Update(Customer customer)
    {
        //Do update stuff
    }

    public void Delete(int customerId)
    {
        //Do delete stuff
    }
}

What would you say if you saw code like this? Do you see any problem in the vein of consistent abstraction or API writing? It’s subtle, but it’s there (at least as far as I’m concerned).

The problem that I had with this was the mixed abstraction. Why do you pass a Customer object to Update and an integer to Delete? That’s fairly confusing until you look at the names of the variables. The method bodies are elided because they shouldn’t matter, but to understand the reason for the mixed abstraction you’d need to examine them. You’d need to see that the Update method uses all of the fields of the customer object to construct a SQL query and that the corresponding Delete method needs only an ID for its SQL query. But if you need to examine the methods of a class to understand the API, that’s not a good abstraction.

A better abstraction would be one that had a series of methods that all had the same level of specificity. That is, you’d have some kind of “Get” method that would return a Customer or a collection of Customers and then a series of mutator methods that would take a Customer or Customers as arguments. In other words, the methods of this class would all be of the form “get me a customer” or “do something to this customer.”
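For concreteness, here's a minimal sketch of the kind of interface I have in mind (the names are hypothetical and not from the code under review):

public interface ICustomerService
{
    //"Get me customers"
    IEnumerable<Customer> GetCustomers();

    //"Do something to this customer"
    void Update(Customer customer);
    void Delete(Customer customer);
}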

The only problem with this code review was that I had just explained the Law of Demeter to the person whose code I was reviewing. So this code:

public void DeleteCustomer(int customerId)
{
    string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customerId;
    //Do some sql stuff...
}

was preferable to this:

public void DeleteCustomer(Customer customer)
{
    string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customer.Id;
    //Do some sql stuff...
}

The reason is that you don’t want to accept an object as a method parameter if all you do with it is use one of its properties. You’re better off just asking for that property directly rather than taking a needless dependency on the containing object. So was I a hypocrite (or perhaps just indecisive)?

Well, the short answer is “yes.” I gave a general piece of advice one week and then gave another piece of advice that contradicted it the next. I didn’t do this, however, because of caprice. I did it because pithy phrases and rules fail to capture the nuance of architectural decisions. In this case the Law of Demeter is at odds with providing a consistent abstraction. And, I value the consistent abstraction more highly, particularly across a public seam between modules.

What I mean is, if SomeService were an implementation of a public interface called ICustomerService, what you’d have is a description of some methods that manipulate Customer. How do they do it? Who knows… not your problem. Is the customer in a database? Memory? A file? A web service? Again, as consumers of the API we don’t know and don’t care. So because we don’t know where and how the customers are stored, what sense would it make if the API demanded an integer ID? I mean, what if some implementations use a long? What if Customers are identified elsewhere by SSN for deletion purposes? The only way to be consistent across module boundaries (and thus generalities) is to deal exclusively in domain object concepts.

The Law of Demeter is also known as the Principle of Least Knowledge. At its (over) simplest, it is a dot-counting exercise to see if you’re taking more dependencies than is strictly necessary. This can usually be enforced by asking yourself if your methods are using any objects that they could get by without using. However, in the case of public-facing APIs and module boundaries, we have to relax the standard. Sure, the SQL Server version of this method may not need to know about the Customer, but what about any scheme for deleting customers? A narrow application of the Law of Demeter would have you throw Customer away, but you’d be missing out by doing this. The real question to ask in this situation is not “what is the minimum that I need to know?” but rather “what is the minimum that a general implementation of what I’m doing might need to know?”

How to Keep Method Size Under Control

Do you ever open a source code file and see a method that starts at the top of your screen and kind of oozes its way to the bottom with no end in sight? When you find yourself in that situation, imagine that you’re reading a ticker tape and try to guess at where the method actually ends. Is it a foot below the monitor? Three feet? Does it plummet through the floor and into the basement, perhaps down past the water table and into the earth’s mantle?

Visualized like this, I think everyone might agree that there’s some point at which the drop is too far, though there’s likely some disagreement on where exactly this is. Personally, I used to subscribe to the “fits on a screen” heuristic and would only start looking to pull out methods if it got beyond that. But in more recent years, I think even smaller. How small? I dunno–five or six lines, max. Small enough that you’ll only ever see one try-catch or control flow statement in there. Yeah, seriously, that small. If you’re thinking it sounds kind of crazy, I get that, but give it a try for a while. I can almost guarantee that you’ll lose your patience for looking at methods that cause you to think, “wait, where was loopCounter declared again–before the second or third while loop?”

If you accept the premise that this is a good way to do things or that it might at least be worth a try, the first thing you’ll probably wonder is how to go about doing this from a practical standpoint. I’ve definitely encountered people and even whole groups who considered method sizes like this to be impractical. The first step is to let go of the notion that classes are in some kind of limited supply and that you have to be careful not to use too many. Same with modules, if your project gets big enough. The reason I say this is that having small methods means that you’re going to have a lot of them. This in turn means that they’re going to need to be spread across multiple classes, and those classes will occupy more namespaces and modules. But that’s okay. If you encounter a large application that’s well designed and factored, it’s that way because the application is actually a series of small, focused components working together. Monolithic design doesn’t scale well.

Getting Down to Business

If you’ve prepared yourself for the reality of needing more classes organized into more namespaces and modules, you’ve really overcome the biggest obstacle to being a small-method coder. Now it’s just a question of mechanics and practice. And this is actually important–it’s not sufficient to just say, “I’m going to write a lot of methods by stopping at the fifth line, no matter what.” I guarantee you that this is going to create a lot of weird cross-coupling, unnecessary state, and ugly things like out parameters. Nobody wants that. So it’s time to look to the art of creating abstractions.

As a brief digression, I’ve recently picked up a copy of Uncle Bob Martin’s Clean Code: A Handbook of Agile Software Craftsmanship and been tearing my way through it pretty quickly. I’d already seen most of the Clean Coder video series, which covers some similar ground, but the book is both a good review and a source of new and different information. To be blunt, if you’re ever going to invest thirty or forty bucks in getting better at your craft, this is the thing to buy. It’s opinionated, sometimes controversial, incredibly specific, and absolutely mandatory reading. It will change your outlook on writing code and make you better at what you do, even if you don’t agree with every single point in it (though I don’t find much with which to take issue, personally).

The reason I mention this book and series is that there is an entire section in the book about functions/methods, and two of its fundamental points are that (1) functions should do one thing and one thing only, and (2) that functions should have one level of abstraction. To keep those methods under control, this is a great place to start. I’d like to dive a little deeper, however, because “do one thing” and “one level of abstraction per function” are general instructions. It may seem a bit like hand-waving without examples and more concrete heuristics.

Extract Finer-Grained Details

What Uncle Bob is saying about mixed abstractions can be demonstrated in this code snippet:

public void OpenTheDoor()
{
    GrabTheDoorKnob();
    TwistTheDoorKnob();
    TightenYourBiceps();
    BendYourElbow();
    KeepYourForearmStraight();
}

Do you see what the issue is? We have a method here that describes (via sub-methods that are not pictured) how to open a door. The first two calls talk in terms of actions between you and the door, but the next three calls suddenly dive into the specifics of how to pull the door open in terms of actions taken by your muscles, joints, tendons, etc. These are two different layers of abstraction: one about a person interacting with his or her surroundings and the other detailing the mechanics of body movement. To make it consistent, we could get more detailed in the first two actions in terms of extending arms and tightening fingers. But we’re trying to keep methods small and focused, so what we really want is to do this:

public void OpenTheDoor()
{
    GrabTheDoorKnob();
    TwistTheDoorKnob();
    PullOpenTheDoor();
}

private static void PullOpenTheDoor()
{
    TightenYourBiceps();
    BendYourElbow();
    KeepYourForearmStraight();
}

Create Coarser-Grained Categories

What about a different problem? Let’s say that you have a method that’s long, but it isn’t because you are mixing abstraction levels:

public void CookQuesadilla()
{
    ChopOnions();
    ShredCheese();

    GetOutThePan();
    AddOilToPan();
    TurnOnTheStove();

    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();

    RemovePanFromStove();
    ScrubPanWithBrush();
    ServeQuesadillas();
}

These items are all at the same level of abstraction, but there are an awful lot of them. In the previous example, we were able to tighten up the method by making the abstraction levels consistent, but here we’re going to actually need to add a layer of abstraction. This winds up looking a little better:

public void CookQuesadilla()
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking();
    FinishUp();
}

private static void PrepareIngredients()
{
    ChopOnions();
    ShredCheese();
}
private static void PrepareEquipment()
{
    GetOutThePan();
    AddOilToPan();
    TurnOnTheStove();
}
private static void PerformActualCooking()
{
    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();
}
private static void FinishUp()
{
    RemovePanFromStove();
    ScrubPanWithBrush();
    ServeQuesadillas();
}

In essence, we’ve created categories and put the actions from the long method into them. What we’ve really done here is create (or add to) a tree-like structure of methods. The public method is the root, and it had fourteen children. We gave it instead four children, and each of those children has between two and six children of its own. To tighten up methods, it’s perfectly viable to add “nodes” to the “tree” of your call stack. While “do one thing” is still a little elusive, this seems to be carrying us in that direction. There’s no individual method that you look at and think, “boy, that’s a lot of stuff going on.” Certainly it’s a matter of some art and taste, but this is probably a good way to think of it–organize stuff into hierarchical method categories until you look at each method and think, “I could probably memorize what that does if I needed to.”

Recognize that Control Flow Uses Up an Abstraction

So far we’ve been conceptually figuring out how to organize families of methods into well-balanced tree structures, and that’s taken us through some pretty mundane code. This code has involved none of the usual stuff that sends apps careening off the rails into bug land, such as conditionals, loops, assignment, etc. Let’s correct that. Looking at the code above, think of how you’d modify this to allow for the preparation of an arbitrary number of quesadillas. Would it be this?

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    for(int i = 0; i < numberOfQuesadillas; i++)
        PerformActualCooking();
    FinishUp();
}

Well, that makes sense, right? Just like the last version, this is something you could read conversationally while in the kitchen just as easily as you do in the code. Prep your ingredients, then prep your equipment, then for some integer index equal to zero and less than the number of quesadillas you want to cook, increment the integer by one. Each time you do that, cook the quesadilla. Oh, wait. I think we just went careening into the nerdiest kitchen narrative ever. If Gordon Ramsay were in charge, he'd have strangled you with your apron for that. Hmm... how 'bout this?

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking(numberOfQuesadillas);
    FinishUp();
}

private static void PerformActualCooking(int numberOfQuesadillas)
{
    for (int index = 0; index < numberOfQuesadillas; index++)
    {
        SprinkleOnionsAndCheeseOnTortilla();
        PutTortillaInPan();
        CookUntilFirm();
        FoldTortillaAndCookUntilBrown();
        FlipTortillaAndCookUntilBrown();
        RemoveCookedQuesadilla();
    }
}

Well, I'd say that the CookQuesadillas method looks a lot better, but do we like "PerformActualCooking?" The whole situation is an improvement, but I'm not a huge fan, personally. I'm still mixing control flow with a series of domain concepts. PerformActualCooking is still both a story about for-loops and about cooking. Let's try something else:

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking(numberOfQuesadillas);
    FinishUp();
}

private static void PerformActualCooking(int numberOfQuesadillas)
{
    for (int index = 0; index < numberOfQuesadillas; index++)
        CookAQuesadilla();
}

private static void CookAQuesadilla()
{
    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();
}

We've added a node to the tree that some might say is one too many, but I disagree. What I like is the fact that we have two methods that contain nothing but abstractions about the domain knowledge of cooking and we have a bridging method that brings in the detailed realities of the programming language. We're isolating things like looping, counting, conditionals, etc. from the actual problem solving and storytelling that we want to do here. So when you have a method that does a few things and you think about adding some kind of control flow to it, remember that you're introducing a detail to the method that is at a lower level of abstraction and should probably have its own node in the tree.

Adrift in a Sea of Tiny Methods

If you're looking at this cooking example, it probably strikes you that there are no fewer than eighteen methods in this class, not counting any additional sub-methods or elided properties (which are really just methods in C# anyway). That's a lot for a class, and you may think that I'm encouraging you to write classes with dozens of methods. That isn't the case. So far what we've done is start to create trees of many small methods with a public method and then a ton of private methods, which is a code smell called "Iceberg Class." What's the cure for iceberg classes? Extracting classes from them. Maybe you turn the first two methods that prepare ingredients and equipment into a "Preparer" class with two public methods, "PrepareIngredients" and "PrepareEquipment." Or maybe you extract a quesadilla cooking class.
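To make that first suggestion concrete, here's a hedged sketch of what extracting a "Preparer" might look like (the helper bodies are elided, just as they are elsewhere in this post):

public class Preparer
{
    public void PrepareIngredients()
    {
        ChopOnions();
        ShredCheese();
    }

    public void PrepareEquipment()
    {
        GetOutThePan();
        AddOilToPan();
        TurnOnTheStove();
    }

    //The former private helpers of the cooking class now live here
    private void ChopOnions() { /* elided */ }
    private void ShredCheese() { /* elided */ }
    private void GetOutThePan() { /* elided */ }
    private void AddOilToPan() { /* elided */ }
    private void TurnOnTheStove() { /* elided */ }
}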

It's really going to vary based on your situation, but the point is that you take this opportunity to pick nodes in your growing tree of methods and sub-methods and convert them into roots by turning them into classes. And if doing this leads you to having what seems to be too many classes in your namespace? Create more namespaces. Too many of those in a module? Create more modules. Too many modules/projects in a solution? More solutions.

Here's the thing: the complexity exists no matter how many or few methods/classes/namespaces/modules/solutions you have. Slamming them all into monolithic constructs together doesn't eliminate or even hide that complexity, though many seem to take the ostrich approach and pretend that it does. Your code isn't somehow 'simpler' because you have one solution with one project that has ten classes, each with 300 methods of 7,000 lines. Sure, things look simple when you fire up the IDE, but they sure won't be simple when you try to debug. In fact, they'll be much more complicated because your functionality will be hopelessly interwoven with weird temporal couplings, ad-hoc references, and hidden dependencies.

If you create large trees of functionality, you have the luxury of making the structure of the tree the representative of the application's complexity, with each node an island of simplicity. It is in these node-methods that the business logic takes place and that getting things right is most important. And by managing your abstractions, you keep these nodes easy to reason about. If you structure the tree correctly and follow good OOP design and practice, you'll find that even the structure of the tree is not especially complicated since each node provides a good representative abstraction for its sub-tree.

Having small, readable, self-documenting methods is no pipe dream. Really, with a bit of practice, it's not even very hard. It just requires you to see code a little bit differently. See it as a series of hierarchical stories and abstractions rather than as a bunch of loops, counters, pointers, and control flow statements, and the people that maintain what you write, including yourself, will thank you for it.

API Lost Opportunity: A Case Study

I’m aware that recently there’s been some brouhaha over criticism/trashing of open-source code and the Software Craftsmanship movement, and I’d like to continue my longstanding tradition of learning what I can from things without involving myself in any drama. Toward that end, I’d like to critique without criticizing, or, to put it another way, I’d like to couch some issues I have with an API as things that could be better rather than going on some kind of rant about what I don’t like. I’m hoping that I can turn the mild frustration I’m having as I use an API into being a better API writer myself. Perhaps you can pick up a few tidbits from this as well.

Recently I’ve been writing some code that consumes the Microsoft CRM API using its SDK. To start off with the good, I’d say that the API is fairly powerful and flexible–it gives you access to the guts of what’s in the CRM system without making you write SQL or do any other specific persistence modeling. Another impressive facet is that it can handle the addition of arbitrary schema to the database without any recompiling of client code. This is done, in essence, by using generalized table-like objects in memory to model actual tables. Properties are handled as a collection of key-value string pairs, effectively implementing a dynamic-typing-like scheme. A final piece of niceness is that the API revolves around a service interface, which makes it a lot nicer than many APIs to mock when you test.
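To give you a flavor of that generalized scheme, consuming it looks roughly like this (simplified and from memory, so treat the specifics as illustrative rather than authoritative):

//An "entity" models a row in some table; properties are key-value pairs
var contact = new Entity("contact");
contact["firstname"] = "Grover";
contact["lastname"] = "Cleveland";

//The service interface handles the actual persistence
Guid contactId = service.Create(contact);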

(If you so choose, you can use a code generator to hit your CRM install and spit out strongly-typed objects, but this is an extra feature about which it’s not trivial to find examples or information. It also spits out a file that’s a whopping 100,000+ lines, so caveat emptor.)

But that flexibility comes with a price, and that price is a rather cumbersome API. Here are a couple of places where I believe the API fails to some degree or another and where I think improvement would be possible, even as the flexible scheme and other good points are preserved.

Upside-Down Polymorphism

The API that I’m working with resides around a series of calls with the following basic paradigm:

VeryAbstractBaseRequest request = new SomeActualConcreteRequest();
VeryAbstractBaseResponse response = service.Execute(request);

There is a service that accepts very generic requests as arguments and returns very generic responses as return values. The first component of this is nice (although it makes argument capture during mocking suck, that’s hardly a serious gripe). You can execute any number of queries and presumably even create your own, provided they supply what the base wants and provided there isn’t some giant switch statement on the other side downcasting these queries (and I think there might be, based on the design).

The second component of the paradigm is, frankly, just awful. The “VeryAbstractBaseResponse” provides absolutely no useful functionality, so you have to downcast it to something actually useful in order to evaluate the response. This is not big-boy polymorphism, and it’s not even adequate basic design. In order to get polymorphism right, the base type would need to be loaded up with methods so that you never need to downcast and can still get all of the child functionality out of it. And in order not to foist a bad design on clients, the API would have to stop requiring you to know which response type goes with which request type. Here’s what I mean:

VeryAbstractBaseRequest request = new GetSomeCookiesRequest();
VeryAbstractBaseResponse response = service.Execute(request);
var meaningfulResponse = (GetSomeCookiesResponse)response; //Succeeds only because I have inappropriate knowledge of an implied coupling
var secondmeaningfulResponse = (GetSomeBrowniesResponse)response; //Bombs out because apparently brownies and cookies aren't fungible
var thirdMeaningfulResponse = (GetSomeBrownieBytesResponse)response; //Succeeds oddly because apparently BrownieBytes are children of Cookies (or a conversion operator exists)

Think about what this means. In order to get a meaningful response, I need to keep track of the type of request that I created. Then I have to have a knowledge of what responses match what requests on their side, which is perhaps available in some horrible document somewhere or perhaps is only knowable through trial/error/guess. This system is a missed opportunity for polymorphism even as it pretends otherwise. Polymorphism means that I can ask for something and not care about exactly what kind of something I get back. This scheme perverts that–I ask for something, and not only must I care what kind of something it is, but I also have to guess wildly or fail with runtime exceptions.

The lesson?

Do consumers of your API a favor. Be as general as possible when accepting arguments and as specific as possible when returning them. Think of it this way to help remember. If you have a method that takes type “object” and can actually do something meaningful with it, that method is going to be incredibly useful to anyone and everyone. If you have a method that returns type object, it’s going to be pretty much useless to anyone who doesn’t know how the method works internally. Why? Because they’re going to have to have an inappropriate knowledge of your workings in order to know what this thing is they’re getting back, cast it appropriately, and use it.
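For instance, here's one hypothetical way to keep the flexible request scheme while giving callers a specific return type. These interfaces are a sketch of mine, not part of the actual SDK:

//The request type carries its response type, so no downcasting is needed
public interface IRequest<TResponse> { }

public interface ICrmService
{
    TResponse Execute<TResponse>(IRequest<TResponse> request);
}

//Hypothetical usage, where GetSomeCookiesRequest : IRequest<GetSomeCookiesResponse>
//GetSomeCookiesResponse response = service.Execute(new GetSomeCookiesRequest());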

Hierarchical Data Transfer Objects

With this interface, the return value polymorphism issues might be livable if not for the fact that these request/response objects are also reminiscent of the worst in OOP: late ’90s, nesting-happy Java. What does a response consist of? Well, a response has a collection of entities and a collection of results, so it’s not immediately clear which path to travel, but if you guess right and query for entities, you’re just getting started. You see, entities isn’t an enumeration–it’s something called “EntityCollection,” which brings just about nothing to the table except to provide the name of the entities. Ha ha–you probably wanted the actual entities, you fool. Well, you have to call response.EntityCollection.Entities to get that. And that’s an enumeration, but buried under some custom type called DataCollection whose purpose isn’t immediately clear, but it does implement IEnumerable, so now you can get at your entity with LINQ. So, this gets you an entity:

RetrieveMultipleResponse response = new RetrieveMultipleResponse();
Entity blah = response.EntityCollection.Entities.First();

But you’re quite naive if you thought that would get you anything useful. If you want to know about a property on the entity, you need to look in its Attributes collection, which, in a bit of deja vu, is an AttributeCollection. Luckily, this one doesn’t make you pick through it because it inherits from this DataCollection thing, meaning that you can actually get a value by doing this:

RetrieveMultipleResponse response = new RetrieveMultipleResponse();
object blah = response.EntityCollection.Entities.First().Attributes["nameOfMyAttribute"];

D’oh! It’s an object. So after getting your value from the end of that train wreck, you still have to cast it.

The lesson?

Don’t force Law of Demeter violations on your clients. Provide us with a way of getting what we want without picking our way through some throwback, soul-crushing object hierarchy. Would you mail someone a package containing a series of Russian nesting dolls that, when you got to the middle, had only a key to a locker somewhere that the person would have to call and ask you about? The answer is, “only if you hated that person.” So please, don’t do this to consumers of your API. We’re human beings!
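Even a convenience method or two would go a long way. Something like this hypothetical extension, which just walks the hierarchy on the client's behalf, would spare consumers the train wreck:

public static class ResponseExtensions
{
    //Hypothetical helper: digs through EntityCollection.Entities...Attributes so callers don't have to
    public static T GetAttribute<T>(this RetrieveMultipleResponse response, string attributeName)
    {
        return (T)response.EntityCollection.Entities.First().Attributes[attributeName];
    }
}

//Usage: string name = response.GetAttribute<string>("nameOfMyAttribute");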

Unit Tests and Writing APIs

Not to bang the drum incessantly, but I can’t help but speculate that the authors of this API do not write tests, much less practice TDD. A lot of the pain that clients will endure is brought out immediately when you start trying to test against this API. The tests are quickly littered with casts, as operators, and setup ceremony for building objects with a nesting structure four or five references deep. If I’m writing code, that sort of pain in tests is a signal to me that my design is starting to suck and that I should think of my clients.

I’ll close by saying that it’s all well and good to remember the lessons of this post and probably dozens of others besides, but there’s no better way to evaluate your API than to become a client, which is best done with unit tests. On a micro-level, you’re dog-fooding when you do that. It then becomes easy to figure out if you’re inflicting pain on others, since you’ll be the first to feel it.

Switch Statements are Like Ants

Switch statements are often (and rightfully, in my opinion) considered to be a code smell. A code smell, if you’ll recall, is a superficial characteristic of code that is often indicative of deeper problems. It’s similar in concept to the term “red flag” for interpersonal relationships. A code smell is like someone you’ve just met asking you to help them move and then getting really angry when you don’t agree to do it. This behavior is not necessarily indicative of deep-seated psychological problems, but it frequently is.

Consequently, the notion that switch statements are a code smell indicates that if you see switch statements in code, there’s a pretty good chance that design tradeoffs with decidedly negative consequences have been made. The reason I say this is that switch statements are often used to simulate polymorphism for those not comfortable with it:

public void Attack(Animal animal)
{
    switch (animal.Type)
    {
        case AnimalType.Cat: 
            Console.WriteLine("Scratch"); 
            break;
        case AnimalType.Dog: 
            Console.WriteLine("Bite"); 
            break;
        case AnimalType.Wildebeest: 
            Console.WriteLine("Headbutt"); 
            break;
        case AnimalType.Landshark: 
            Console.WriteLine("Running-bite");
            break;
        case AnimalType.Manticore: 
            Console.WriteLine("Eat people");
            break;
    }
}

Clearly a better design from an OOP perspective would be an Animal base class/interface and an Attack() method on that, overridable by children/implementers. This design has the advantage of requiring less code, scaling better, and conforming to the Open/Closed principle–if you want to add some other animal later, you just add a new class to the code base and probably tweak a factory method somewhere and you’re done.
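A minimal sketch of that alternative (showing just two of the animals) looks something like this:

public abstract class Animal
{
    public abstract void Attack();
}

public class Cat : Animal
{
    public override void Attack() { Console.WriteLine("Scratch"); }
}

public class Manticore : Animal
{
    public override void Attack() { Console.WriteLine("Eat people"); }
}

//Client code no longer needs a switch or an AnimalType enum:
//animal.Attack();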

The original switch-based method isn’t really that bad, though, compared to how bad it could be. The design is decidedly procedural, but its consequences aren’t far reaching. From an execution perspective, the switch statement is hidden as an implementation detail. If you were to isolate the console (or wrap and inject it), you could unit test this method pretty easily. It’s ugly and it’s a smell, but it’s sort of like seeing an ant crawling in your house–a little icky and potentially indicative of an infestation, but sometimes life deals you lemons.
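As a quick sketch of what "wrap and inject it" might look like (the interface and class names here are hypothetical):

public interface IConsole
{
    void WriteLine(string text);
}

public class AttackService
{
    private readonly IConsole _console;

    //In a unit test, pass in a fake that records what was written
    public AttackService(IConsole console)
    {
        _console = console;
    }

    public void Attack(Animal animal)
    {
        //...the same switch as above, but writing through _console.WriteLine(...)
    }
}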

But what about this:

public string Attack(Animal animal)
{
    switch (animal.Type)
    {
        case AnimalType.Cat:
            return GetAttackFromCatModule();
        case AnimalType.Dog:
            return GetAttackFromDogModule();
        case AnimalType.Wildebeest:
            return GetAttackFromWildebeestModule();
        case AnimalType.Landshark:
            return GetAttackFromLandsharkModule();
        case AnimalType.Manticore:
            return GetAttackFromManticoreModule();
        default:
            throw new ArgumentException("Unknown animal type: " + animal.Type);
    }
}

Taking the code at face value, this method figures out what the animal’s attack is and returns it, but it does so by invoking a different module for each potential case. As a client of this code, your path of execution can dive into any of five different libraries (and more if this ‘pattern’ is followed for future animals). The fanout here is out of control. Imagine trying to unit test this method or isolate it somehow. Imagine if you need to change something about the logic here. The violence to the code base is profound–you’d be changing execution flow at the module level.

If the first switch statement was like an ant in your code house, this one is like an ant with telephone poles for legs, carving a swath of destruction. As with that happening in your house, the best short-term strategy is scrambling to avoid this code and the best long-term strategy is to move to a new application that isn’t a disaster zone.

Please be careful with switch statements that you use. Think of them as ants crawling through your code–ants whose legs can be tiny ant legs or giant tree trunk legs. If they have giant tree trunk legs, then you’d better make sure they’re the entirety of your application–that the “ant” is the brains, a la an IoC container–because those massive swinging legs will level anything that isn’t part of the ant. If they swing tiny legs, then the damage only occurs when they come in droves and infest the application. But either way, it’s helpful to think of switch statements as ants (or millipedes, depending on the number of cases) because this forces you to think of their execution paths as tendrils fanning out through your application and creating couplings and dependencies.

How to Create Good Abstractions: Oracles and Magic Boxes

Oracles In Math

When I was a freshman in college, I took a class called “Great Theoretical Ideas In Computer Science”, taught by Professor Steven Rudich. It was in this class that I began to understand how fundamentally interwoven math and computer science are and to think of programming as a logical extension of math. We learned about concepts related to Game Theory, Cryptography, Logic Gates and P, NP, NP-Hard and NP-Complete problems. It is this last set that inspires today’s post.

This is not one of my “Practical Math” posts, so I will leave a more detailed description of these problems for another time, but I was introduced to a fascinating concept here: that you can make powerful statements about solutions to problems without actually having a solution to the problem. The mechanism for this was an incredibly cool-sounding concept called “The Problem Solving Oracle”. In the world of P and NP, we could make statements like “If I had a problem solving oracle that could solve problem X, then I know I could solve problem Y in polynomial (order n squared, n cubed, etc) time.” Don’t worry if you don’t understand the timing and particulars of the oracle — that doesn’t matter here. The important concept is “I don’t know how to solve X, but if I had a machine that could solve it, then I know some things about the solutions to these other problems.”

It was a powerful concept that intrigued me at the time, but more with grand visions of fast factoring and solving other famous math problems and making some kind of name for myself in the field of computer science related mathematics. Obviously, you’re not reading the blog of Fields Medal winning mathematician, Erik Dietrich, so my reach might have exceeded my grasp there. However, concepts like this don’t really fall out of my head so much as kind of marinate and lie dormant, to be re-appropriated for some other use.

Magic Boxes: Oracles For Programmers

One of the most important things that we do as developers and especially as designers/architects is to manage and isolate complexity. This is done by a variety of means, many of which have impressive-sounding terms like “loosely coupled”, “inverted control”, and “layered architecture”. But at their core, all of these approaches and architectural concepts have a basic underlying purpose: breaking problems apart into isolated and tackle-able smaller problems. To think of this another way, good architecture is about saying “assume that everything else in the world does its job perfectly — what’s necessary for your little corner of the world to do the same?”

That is why the Single Responsibility Principle is one of the most oft-cited principles in software design. This principle, in a way, says “divide up your problems until they’re as small as they can be while still being non-trivial”. But it also implies “and just assume that everyone else is handling their business.”

Consider this recent post by John Sonmez, where he discusses deconstructing code into algorithms and coordinators (things responsible for getting the algorithms their inputs, more or less). As an example, he takes a “Calculator” class where the algorithms of calculation are mixed up with the particulars of storage and separates these concerns. This separates the very independent calculation algorithm from the coordinating storage details (which are of no consequence to calculations anyway) in support of his point, but it also provides the more general service of dividing up the problem at hand into smaller segments that take one another for granted.

Another way to think of this is that his “Calculator_Mockless” class has two (easily testable) dependencies that it can trust to do their jobs. Going back to my undergrad days, Calculator_Mockless has two Oracles: one that performs calculations and the other that stores stuff. How do these things do their work? Calculator_Mockless doesn’t know or care about that; it just provides useful progress and feedback under the assumption that they do. This is certainly reminiscent of the criteria for “Oracle”, an assumption of functionality that allows further reasoning. However, “Oracle” has sort of a theoretical connotation in the math sense that I don’t intend to convey, so I’m going to adopt a new term for this concept in programming: “Magic Box”. John’s Calculator_Mockless says “I have two magic boxes — one for performing calculations and one for storing histories — and given these boxes that do those things, here’s how I’m going to proceed.”
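I don't have John's exact code in front of me, but the shape being described is roughly this (an approximation, with the collaborator names invented for illustration):

public class Calculator_Mockless
{
    private readonly ICalculationEngine _engine;   //magic box #1: performs calculations
    private readonly IHistoryStorage _storage;     //magic box #2: stores stuff

    public Calculator_Mockless(ICalculationEngine engine, IHistoryStorage storage)
    {
        _engine = engine;
        _storage = storage;
    }

    public int Add(int first, int second)
    {
        //Trust the boxes to do their jobs; just coordinate them
        var result = _engine.Add(first, second);
        _storage.Record(first, second, result);
        return result;
    }
}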

How Spaghetti Code Is Born

It’s one thing to recognize the construction of Magic Boxes in the code, but how do you go about building and using them? Or, better yet, go about thinking in terms of them from the get-go? It’s a fairly sophisticated and not always intuitive method for deconstructing programming problems.

To see what I mean, think of being assigned a hypothetical task to read an XML file full of names, remove any entries missing information, alphabetize the list by last name, and print out “First Last” with “President” pre-pended onto the string. So, for a file full of presidents, the first line of the output should be “President Grover Cleveland”. You’ve got your assignment, now, quick, go – start picturing the code in your head!

What did you picture? What did you say to yourself? Was it something like “Well, I’d read the file in using the XDoc API and I’d probably use an IList<> implementer instead of IEnumerable<> to store these things since that makes sorting easier, and I’d probably do a foreach loop for the person in people in the document and while I was doing that write to the list I’ve created, and then it’d probably be better to check for the name attributes in advance than taking exceptions because that’d be more efficient and speaking of efficiency, it’d probably be best to append the president as I was reading them in rather than iterating through the whole loop again afterward, but then again we have to iterate through again to write them to the console since we don’t know where in the list it will fall in the first pass, but that’s fine since it’s still linear time in the file size, and…”

And slow down there, Sparky! You’re making my head hurt. Let’s back up a minute and count how many magic boxes we’ve built so far. I’m thinking zero — what do you think? Zero too? Good. So, what did we do there instead? Well, we embarked on kind of a willy-nilly, scattershot, and most importantly, procedural approach to this problem. We thought only in terms of the basic sequence of runtime operations and thus the concepts kind of all got jumbled together. We were thinking about exception handling while thinking about console output. We were thinking about file reading while thinking about sorting strings. We were thinking about runtime optimization while we were thinking about the XDocument API. We were thinking about the problem as a monolithic mass of all of its smaller components and thus about to get started laying the bedrock for some seriously weirdly coupled spaghetti code.

Cut that Spaghetti and Hide It in a Magic Box

Instead of doing that, let’s take a deep breath and consider what mini-problems exist in this little assignment. There’s the issue of reading files to find people. There’s the issue of sorting people. There’s the issue of pre-pending text onto the names of people. There’s the issue of writing people to console output. There’s the issue of modeling the people (a series of string tuples, a series of dynamic types, a series of anonymous types, a series of structs, etc?). Notice that we’re not solving anything — just stating problems. Also notice that we’re not talking at all about exception handling, O-notation or runtime optimization. We already have some problems to solve that are actually in the direct path of solving our main problem without inventing solutions to problems that no one yet has.

So what to tackle first? Well, since every problem we’ve mentioned has the word “people” in it and the “people” problem makes no mention of anything else, we probably ought to consider defining that concept first (reasoning this way will tell you what the most abstract concepts in your code base are — the ones that other things depend on while depending on nothing themselves). Let’s do that (TDD process and artifacts that I would normally use elided):

public struct Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public override string ToString()
    {
        return string.Format("{0} {1}", FirstName, LastName);
    }
}

Well, that was pretty easy. So, what’s next? Well, the context of our mini-application involves getting people objects, doing things to them, and then outputting them. Chronologically, it would make the most sense to figure out how to do the file reads at this point. But “chronologically” is another word for “procedurally” and we want to build abstractions that we assemble like building blocks rather than steps in a recipe. Another, perhaps more advantageous way would be to tackle the next simplest task or to let the business decide which functionality should come next (please note, I’m not saying that there would be anything wrong with implementing the file I/O at this point — only that the rationale “it’s what happens first sequentially in the running code” is not a good rationale).

Let’s go with the more “agile” approach and assume that the users want a stripped down, minimal set of functionality as quickly as possible. This means that you’re going to create a full skeleton of the application and do your best to avoid throw-away code. The thing of paramount importance is having something to deliver to your users, so you’re going to want to focus mainly on displaying something to the user’s medium of choice: the console. So what to do? Well, imagine that you have a magic box that somehow gets you people. What would you do with them? How ’bout this:

public class PersonConsoleWriter
{
    public void WritePeople(IEnumerable<Person> people)
    {
        foreach (var person in people)
            Console.WriteLine(person);
    }
}

Where does that enumeration come from? Well, that’s a problem to deal with once we’ve handled the console writing problem, which is our top priority. If we had a demo in 30 seconds, we could simply write a main that instantiated a “PersonConsoleWriter” and fed it some dummy data. Here’s a dead simple application that prints, well, nothing, but it’s functional from an output perspective:

public class PersonProcessor
{
    public static void Main(string[] args)
    {
        var writer = new PersonConsoleWriter();
        writer.WritePeople(Enumerable.Empty<Person>());
    }
}

What to do next? Well, we probably want to give this thing some actual people to shove through the pipeline or our customers won’t be very impressed. We could new up some people inline, but that’s not very impressive or helpful. Instead, let’s assume that we have a magic box that will fetch people out of the ether for us. Where do they come from? Who knows, and who cares — not main’s problem. Take a look:

public class PersonStorageReader
{
    public IList<Person> GetPeople()
    {
        throw new NotImplementedException();
    }
}

Alright — that’s pretty magic box-ish. The only way it could be more so is if we just defined an interface, but that’s overkill for the moment and I’m following YAGNI. We can add an interface if this thing later needs to pull people out of a web service or something. At this point, if we had to do a demo, we could simply return the empty enumeration instead of throwing an exception or we could dummy up some people for the demo here. And the important thing to note is that now the thing that’s supposed to be generating people is the thing that’s generating the people — we just have to sort out the correct internal implementation later. Let’s take a look at main:

public static void Main(string[] args)
{
    var reader = new PersonStorageReader();
    var people = reader.GetPeople();

    var writer = new PersonConsoleWriter();
    writer.WritePeople(people);
}

Well, that’s starting to look pretty good. We get people from the people reader and feed them to the people console writer. At this point, it becomes pretty trivial to add sorting:

public static void Main(string[] args)
{
    var reader = new PersonStorageReader();
    var people = reader.GetPeople().OrderBy(p => p.LastName);

    var writer = new PersonConsoleWriter();
    writer.WritePeople(people);
}

But, if we were so inclined, we could easily look at main and say “I want a magic box that I hand a person collection to and it gives me back a sorted person collection,” and that could be stubbed out as follows:

public static void Main(string[] args)
{
    var reader = new PersonStorageReader();
    var people = reader.GetPeople();

    var sorter = new PersonSorter();
    var sortedPeople = sorter.SortList(people);

    var writer = new PersonConsoleWriter();
    writer.WritePeople(sortedPeople);
}

The same kind of logic could also be applied for the “pre-pend the word ‘President'” requirement. That could pretty trivially go into the console writer or you could abstract it out. So, what about the file logic? I’m not going to bother with it in this post, and do you know why? You’ve created enough magic boxes here — decoupled the program enough — that it practically writes itself. You use an XDocument to pop all Person nodes and read their attributes into First and Last name, skipping any nodes that don’t have both. With Linq to XML, how many lines of code do you think you need? Even without shortcuts, the jump from our stubbed implementation to the real one is small. And that’s the power of this approach — deconstruct problems using magic boxes until they’re all pretty simple.
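If you want to see it spelled out, here's a rough cut of what that implementation might look like. The element and attribute names are assumptions on my part, since your XML schema will dictate the real ones:

public class PersonStorageReader
{
    private readonly string _filePath;

    public PersonStorageReader(string filePath)
    {
        _filePath = filePath;
    }

    public IList<Person> GetPeople()
    {
        //Pop all Person nodes, skipping any that are missing a first or last name
        return XDocument.Load(_filePath)
            .Descendants("Person")
            .Where(node => node.Attribute("FirstName") != null && node.Attribute("LastName") != null)
            .Select(node => new Person
            {
                FirstName = node.Attribute("FirstName").Value,
                LastName = node.Attribute("LastName").Value
            })
            .ToList();
    }
}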

Also notice another interesting benefit which is that the problems of runtime optimization and exception handling now become easier to sort out. The exception handling and expensive file read operations can be isolated to a single class and console writes, sorting, and other business processing need not be affected. You’re able to isolate difficult problems to have a smaller surface area in the code and to be handled as requirements and situation dictate rather than polluting the entire codebase with them.

Pulling Back to the General Case

I have studiously avoided discussion of how tests or TDD would factor into all of this, but if you’re at all familiar with testable code, it should be quite apparent that this methodology will result in testable components (or endpoints of the system, like file readers and console writers). There is a deep parallel between building magic boxes and writing testable code — so much so that “build magic boxes” is great advice for how to make your code testable. The only leap from simply building boxes to writing testable classes is to remember to demand your dependencies in constructors, methods and setters, rather than instantiating them or going out and finding them. So if PersonStorageReader uses an XDocument to do its thing, pass that XDocument into its constructor.
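In sketch form, that refactoring is as simple as this (GetPeople() would then query the injected document instead of calling XDocument.Load itself):

public class PersonStorageReader
{
    private readonly XDocument _document;

    //A test can now hand in an in-memory document instead of touching the file system
    public PersonStorageReader(XDocument document)
    {
        _document = document;
    }

    //GetPeople() is unchanged except that it starts from _document
}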

But this concept of magic boxes runs deeper than maintainable code, TDD, or any of the other specific clean coding/design principles. It’s really about chunking problems into manageable bites. If you’re ever writing code in a method and you find yourself thinking “ok, first I’ll do X, and then-” STOP! Don’t do anything yet! Instead, first ask yourself “if I had a magic box that did X so I didn’t have to worry about it here and now, what would that look like?” You don’t necessarily need to abstract every possible detail out of every possible place, but the exercise of simply considering it is beneficial. I promise. It will help you start viewing code elements as solvers of different problems and collaborators in a broader solution, instead of methodical, plodding steps in a gigantic recipe. It will also help you practice writing tighter, more discoverable and usable APIs since your first conception of them would be “what would I most like to be a client of right here?”

So do the lazy and idealistic thing – imagine that you have a magic box that will solve all of your problems for you. The leap from “magic box” to “collaborating interface” to “implemented functionality” is much simpler and faster than it seems when you’re isolating your problems. The alternative is to have a system that is one gigantic, procedural “magic box” of spaghetti and the “magic” part is that it works at all.