DaedTech

Stories about Software


C# Tips for Compacting Code

This is a little series of things that I’ve picked up over time and use to make my code more compact and, in my opinion, more readable. Your mileage may vary on liking any or all of these, but I figured I’d share them.

To me, compacting code is very important. When I look at a method or a series of methods, I like to be able to tell what they do at a glance and drill into them only if it’s important to me. I think I naturally perceive code the way I do an outline in a Word document or something: I look on the left for brief, salient points and read the smaller text that goes further right only if I want to fill in the details. This lets me variably skim over or delve into methods.

If methods are very vertically verbose, I lose this perspective. If one or two methods take up all of the vertical real estate in my IDE, I don’t know what’s going on in the class because I’m lost in some confusing method that forces me to think about too many things at once. I don’t ever use that little drop-down in Visual Studio with the alphabetized method list for navigation. If I have to use that to navigate, I consider the class a cohesion disaster.

So, given this line of thought, here are some little tips I have for making methods and code in general more compact without sacrificing (in my opinion) readability.

Null coalescing in foreach

Consider the following method:

public virtual void BindCommands(params ICommand[] commands)
{
    if (commands == null)
        return;

    foreach (var myCommand in commands)
        BindCommand(myCommand);
}

We’re going to take a collection of commands and iterate over them in this method, invoking another method to do the individual dirty work. So, the first thing we do is guard against null so we’re not tripping over an exception. We could throw an exception on a null argument, which might be preferable depending on context, but let’s forget about that possibility and assume that failing quietly is actually what we want here. Once we’ve finished with the early-return bit, we do the actual, meaningful work of the method.

Let’s compact that a bit:

public virtual void BindCommands(params ICommand[] commands)
{
    var myCommands = commands ?? new ICommand[] { };

    foreach (var myCommand in myCommands)
        BindCommand(myCommand);
}

Now, we’re using the null object pattern and the null-coalescing operator to take care of the null handling, instead of an early return with an ugly guard condition. We can compact this even more, if so desired:

public virtual void BindCommands(params ICommand[] commands)
{
    foreach (var myCommand in commands ?? new ICommand[] { })
        BindCommand(myCommand);
}

Now, we’ve eliminated the extra code altogether and gotten this method down to its real meat and potatoes — iterating over the collection of commands. The fact that a corner case in which this collection might be null exists is an annoying detail, and we’ve relegated it to such status by not devoting 50% of the method’s real estate to handling it. The syntax here may look a little funny at first if you aren’t used to it, but it’s not double-take inducing. We iterate over a collection and do something. The target of our iteration looks a little more involved, but in my opinion, this is a small price to pay for compacting the method and not devoting half of the method to a corner case.
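As a brief aside, if newing up an empty array in the loop header reads awkwardly to you, LINQ’s Enumerable.Empty&lt;ICommand&gt;() works just as well on the right side of the null coalescing operator, and it hands back a cached empty sequence rather than allocating a new array on every call (at the cost of a using System.Linq directive). A minimal variation on the same idea:

public virtual void BindCommands(params ICommand[] commands)
{
    // Enumerable.Empty<T>() returns a cached, empty IEnumerable<T> (requires using System.Linq),
    // so nothing new is allocated when commands happens to be null.
    foreach (var myCommand in commands ?? Enumerable.Empty<ICommand>())
        BindCommand(myCommand);
}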

Using params

Speaking of params, let’s use params! In the method above, let’s consider the code that I was replacing:

private void BindCommand(ICommand command)
{
    if (command != null && _gesture != null)
        _window.InputBindings.Add(new InputBinding(command, _gesture));
}

Client code of this then looks like:

private void SomeClient()
{
    var myBinder = new Binder(SomeWindow, SomeKeyGesture);
    myBinder.BindCommand(SomeCommand1);
    myBinder.BindCommand(SomeCommand2);
    myBinder.BindCommand(SomeCommand3);
    //etc
}

As an aside, ignore the fact that it’s obtuse to bind a bunch of different commands to the same window and key gesture. In the actual code, there’s a lot more going on than I’m displaying here, and I’m trying to include nothing that will distract from my points. If you look at the params version above, consider what the client code of that looks like:

private void SomeClient()
{
    var myBinder = new Binder(SomeWindow, SomeKeyGesture);
    myBinder.BindCommands(SomeCommand1, SomeCommand2, SomeCommand3);
}

Now, I personally have a strong preference for that. A bunch of lines of the same thing over and over again drives me batty, even if the things in question need to be parameterized somewhere and this is the place it has to happen. There just seems to be something incredibly vacuous about code like the first example, and I always favor more vertically compact code because I can process more of the details. By SomeCommand12, I’ve probably figured out what’s going on even on my slowest day — I don’t need another 50 lines besides. If we have to do things like this, let’s at least condense them so they take up as little mindshare in a method as possible.

Optional Parameters

If you haven’t gotten on board this train since C# 4.0, I’d say it’s time. If you have a bunch of code like this:

public void Method1()
{
    Method4(null, null, null);
}

public void Method2(string arg1)
{
    Method4(arg1, null, null);
}

public void Method3(string arg1, string arg2)
{
    Method4(arg1, arg2, null);
}

public void Method4(string arg1, string arg2, string arg3)
{
    Arg1 = arg1;
    Arg2 = arg2;
    Arg3 = arg3;
}

… it’s time to turn it into this:

public void TheOnlyMethod(string arg1 = null, string arg2 = null, string arg3 = null)
{
    Arg1 = arg1;
    Arg2 = arg2;
    Arg3 = arg3;
}
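For completeness, here is roughly what call sites look like once the overloads collapse into the single method. The object name below is just a placeholder, but the optional and named argument syntax is standard C# 4.0:

var myThing = new SomeClass();                 // hypothetical class exposing TheOnlyMethod
myThing.TheOnlyMethod();                       // all three arguments default to null
myThing.TheOnlyMethod("first");                // arg1 supplied; arg2 and arg3 default
myThing.TheOnlyMethod("first", arg3: "third"); // named argument lets you skip arg2 entirely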


Omit brackets when you have a single line following a branch or loop condition

I used to be a stickler for this:

if(child.SpareRod())
{
    child.Spoil();
}

I reasoned that omitting the brackets was just begging for downstream maintenance problems and that I’d be a good citizen by not taking shortcuts. I persisted in that way of doing things until quite recently, when I was watching an Uncle Bob video in which he said in passing, “I think that there should only be one line of code after an if or else or for, and I’m not going to make it easier on anyone that comes along to mess that up.” (I can’t find the video, so I’m paraphrasing.)

I blew this off at first, but for some reason it stuck in my head and would occur to me from time to time. Finally, one day, I simply did a 180. I realized I was continuing the old habit purely out of stubbornness, having long since decided, almost without noticing, that Bob was right. This practice made my code more compact, and it encouraged me to factor everything following a control flow statement into its own method, leading to much more readable code. I think the bigger benefit comes from the practice of “one line per control flow statement” than from the two lines you save by omitting the brackets, but nevertheless, both add up to create methods that are much more compact:

if(child.SpareRod())
    child.Spoil();

So, I’m out of tips for the day. If people like this, let me know, and perhaps I’ll put together another little post with a few more compactness tips, though it might take me longer to think of them. These were the low-hanging fruit that I find myself doing often.



Abstractions are Important 3 – “What?” Before “How?”

So, I’m back from my two weeks overseas, refreshed, enriched, and generally wiser, I suppose. We traveled to Spain and Portugal, visiting a ton of historic sites, eating good food and having fun. For my first post back, I’d like to make a third post in my series on abstractions.

Methods as Recipes

I was looking at some code yesterday. It was some long method, probably 60 or 70 lines long, and I sighed as I scrolled disinterestedly through it. At the moment, I couldn’t muster the energy to try to figure out what the author thought it did or probably wanted it to do, so I started ruminating about what leads to methods like this. And, I think I understand it. It’s the idea of a method as a recipe.

By way of homage to Spain, let’s write a “CookPaella()” method. When writing methods, do you ever start by doing the following?

public void CookPaella()
{
    //1.  Get a medium bowl
    //2.  Mix together 2 tablespoons olive oil, 1 tsp paprika, 1 tsp oregano, salt and pepper to taste
    //3.  Stir in 2 pounds chicken breasts cut into 2 inch pieces, to coat
    //4.  Refrigerate chicken.
    //5.  Heat 2 tablespoons olive oil in paella pan over medium heat
    //6.  Stir in 3 cloves garlic, 1 tsp red pepper flakes, and 2 cups rice
    //7.  Cook to coat rice with oil -- about 3 minutes
    //8.  Stir in a pinch of saffron, 1 bay leaf, 1/2 bunch parsley, 1 quart chicken stock and optional zest of two lemons
    //9.  Bring to a boil, then cover, reduce heat to medium low and simmer 20 minutes
    //10. Meanwhile, heat 2 tbsps olive oil in a separate skillet over medium heat and stir in marinated chicken with onions and cook for 5 minutes.
    //11. Stir in bell pepper and sausage and cook for 5 minutes.
    //12. Stir in shrimp, turning until both sides are pink
    //13. Spread rice mixture onto a serving tray, topping with meat and seafood mixture
}

(recipe compliments of All Recipes).

This is a sane approach. Much like making an outline for an essay in English class, you list out the basic procedure that you want to follow, and you fill in the details:

public void CookPaella()
{
    //1.  Get a medium bowl
    var myBowl = new Bowl("Medium");
    //2.  Mix together 2 tablespoons olive oil, 1 tsp paprika, 1 tsp oregano, salt and pepper to taste
    myBowl.AddTablespoons(2, "olive oil");
    myBowl.AddTeaspoons(1, "paprika");
    myBowl.AddTeaspoons(1, "oregano");
    myBowl.AddPinch("salt");
    myBowl.AddPinch("pepper");

    //3.  Stir in 2 pounds chicken breasts cut into 2 inch pieces, to coat
    //4.  Refrigerate chicken.
    //5.  Heat 2 tablespoons olive oil in paella pan over medium heat
    //6.  Stir in 3 cloves garlic, 1 tsp red pepper flakes, and 2 cups rice
    //7.  Cook to coat rice with oil -- about 3 minutes
    //8.  Stir in a pinch of saffron, 1 bay leaf, 1/2 bunch parsley, 1 quart chicken stock and optional zest of two lemons
    //9.  Bring to a boil, then cover, reduce heat to medium low and simmer 20 minutes
    //10. Meanwhile, heat 2 tbsps olive oil in a separate skillet over medium heat and stir in marinated chicken with onions and cook for 5 minutes.
    //11. Stir in bell pepper and sausage and cook for 5 minutes.
    //12. Stir in shrimp, turning until both sides are pink
    //13. Spread rice mixture onto a serving tray, topping with meat and seafood mixture
}

This is great because rather than starting without any kind of gameplan, we’ve stubbed everything out that needs to happen, and now we’re in the process of filling in the template. Whether you’re cooking or assembling a piece of furniture or anything else, there is a tendency to read through (or skip) to the end so that the actual following of the instructions reveals no mysteries. We take the same approach here.

Once this is complete, you’re going to have a large method, so some refactoring is probably in order. At the very least, our numbered bullets provide some logical methods to create:

public void CookPaella()
{
    //1.  Get a medium bowl
    var myBowl = new Bowl("Medium");
    //2.  Mix together 2 tablespoons olive oil, 1 tsp paprika, 1 tsp oregano, salt and pepper to taste
    MixTogether(myBowl);

    //3.  Stir in 2 pounds chicken breasts cut into 2 inch pieces, to coat
    //4.  Refrigerate chicken.
    //5.  Heat 2 tablespoons olive oil in paella pan over medium heat
    //6.  Stir in 3 cloves garlic, 1 tsp red pepper flakes, and 2 cups rice
    //7.  Cook to coat rice with oil -- about 3 minutes
    //8.  Stir in a pinch of saffron, 1 bay leaf, 1/2 bunch parsley, 1 quart chicken stock and optional zest of two lemons
    //9.  Bring to a boil, then cover, reduce heat to medium low and simmer 20 minutes
    //10. Meanwhile, heat 2 tbsps olive oil in a separate skillet over medium heat and stir in marinated chicken with onions and cook for 5 minutes.
    //11. Stir in bell pepper and sausage and cook for 5 minutes.
    //12. Stir in shrimp, turning until both sides are pink
    //13. Spread rice mixture onto a serving tray, topping with meat and seafood mixture
}

private static void MixTogether(Bowl myBowl)
{
    myBowl.AddTablespoons(2, "olive oil");
    myBowl.AddTeaspoons(1, "paprika");
    myBowl.AddTeaspoons(1, "oregano");
    myBowl.AddPinch("salt");
    myBowl.AddPinch("pepper");
}

There, look at that. We’re going to have this reduced to a nice 13-line method, and we could probably group the calls even further from there, resulting in a CookPaella() method that has three instructions: Prep(), CookRice(), and CookMeat(). Those methods would consist of three or four lines themselves, and things would spread on out from there like a tree. This is a series of well-factored methods that are probably clean and reasonable (discounting the fact that we’re instantiating what we need with Bowl, rather than having variables passed in — that’s for example purposes and not a weakness of the approach).
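That further grouping might look something like the sketch below, where Prep(), CookRice(), and CookMeat() are hypothetical helpers wrapping the numbered steps rather than methods from any actual code:

public void CookPaella()
{
    Prep();      // steps 1-4:  mix the marinade and refrigerate the chicken
    CookRice();  // steps 5-9:  oil, garlic, rice, saffron, stock, simmer
    CookMeat();  // steps 10-13: chicken, sausage, shrimp, then plate everything
}

private void Prep() { /* steps 1-4 from above */ }
private void CookRice() { /* steps 5-9 from above */ }
private void CookMeat() { /* steps 10-13 from above */ }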

So where do these long methods come from? Well, I would argue that they come from people thinking in terms of a recipe but never creating or factoring to the outline. That is, they sit down to write the method and simply start banging out code line by line until they’re done. They think in terms of a recipe — a procedure — and methods are simply containers for procedures. So, when they start out, they don’t know everything a method is going to do; rather, the method twists and winds and meanders its way toward some eventual end in ad hoc fashion.

In a less contrived case than this one, a method will probably start out as just a jump point for a series of instructions. The instructions are coded sequentially until there are no more instructions and then the method is at an end. The “how” is defined, and then the author looks at the “how” and decides what to name the method. He describes “how” and then, based on “how”, decides “what”. Ah, I see that I’ve assembled a series of instructions that seems to cook a paella, so “CookPaella” is probably a good thing to call this.

Methods as Abstractions

So, is there another way to do this? Absolutely. You can flip the script and decide “what” without worrying about “how”. With this approach, we completely discard the procedural/sequential concept and focus instead on creating meaningful abstractions. Procedural/sequential programming is good for, say, batch scripts, but object oriented programmers need to think in abstractions. I don’t want a specific, blow-by-blow recipe for cooking paella to become the ‘architecture’ of my code. I want to write code that a cook can use to get things done. That’s an important distinction.

Let’s think of our paella cooking a little differently. Let’s just think of it as cooking. In this fashion, we can define implements like pots and pans, ingredients like paprika and meat, and actions, like sear or boil. From there, we can start thinking about an object model. What is a “bowl” and what should it do? A bowl does things other than mix ingredients for paella — it has properties and, depending on the object model, may have some actions. Let’s decide what those properties and actions are without worrying about how they work.

For example, let’s define a bowl, along with concepts called “ingredient” and “quantity”:

public class Ingredient
{
    public Ingredient()
    {
            
    }
}

public class Quantity
{
    public Quantity()
    {
            
    }
}

public interface Bowl
{
    int Diameter { get; set; }

    int Height { get; set; }

    void Add(Ingredient ingredient, Quantity quantity);

    void MixIngredients();
}

Notice that we have a Bowl interface, rather than any kind of implementation. How do these methods work? Who cares. We don’t need to know that to make our paella. From here, we might define a paella pan interface as well, and perhaps various inheritors of ingredient and quantity, depending on their implementations. The point is, we’ll reason about each individual object and how it should behave. After we do enough of this, we can start to create larger constructs, such as some MeatCooker class that takes an arbitrary number of meats and a pan and cooks them. A few classes like this, and before you know it, your CookPaella() method will write itself.
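To make that a little more concrete, a MeatCooker along those lines might be declared something like this. This is only a sketch, with Meat and Pan standing in for whatever those abstractions turn out to be:

public interface Meat { }

public interface Pan
{
    void Cook(Meat meat);
}

public class MeatCooker
{
    private readonly Pan _pan;
    private readonly Meat[] _meats;

    // Takes the pan to cook in and any number of meats to cook in it.
    public MeatCooker(Pan pan, params Meat[] meats)
    {
        _pan = pan;
        _meats = meats;
    }

    // Cook each meat in turn; how the pan actually does that is its own business.
    public void CookAll()
    {
        foreach (var meat in _meats)
            _pan.Cook(meat);
    }
}

Again, the point is the shape of the abstraction, not the particulars: the class says what it needs and what it does, and the “how” stays out of sight.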

Notice that the “start at the beginning and sequentially do everything” scripting style approach is gone, but so too is the structured, outline approach. With this abstraction-based approach, we’re defining objects with properties and behaviors that make sense in our domain model. This is what allows easy assembly. But, the important thing to understand is we’re defining abstractions rather than procedures. These are going to be easier to reason about, and they’re also going to be more flexible. We can use our meat cooker and pan and bowl to cook lasagnas as well as paellas.

So, the overriding message here is to think abstractly rather than sequentially when considering how to model a problem in an object oriented language. Don’t think about “how” until the very end. Instead, think about “what”. What objects should you have? What properties should they have? What actions? What interactions do you foresee for them? As you answer these questions, the specifics will become much easier. The longer you defer those specifics, the more flexible the design. You’re using an object oriented language, so leverage that language. Don’t code like you’re batch scripting. Don’t model your application by writing recipes, even when what you’re modeling is, well, cooking a recipe.


I Love Debugger

Learning to Love Bugs

My apologies for the titular pun in reference to “I Love Big Brother” of iconic, Orwellian fame, but I couldn’t resist. The other day, I was chatting with some people about the idea of factoring large methods into smaller, more focused ones and one of the people chimed in with an objection that was genuinely new to me.

Specifically, the objection was that giant methods tended to be preferable because they kept the stack trace flat and made it easier to have everything “all in one place” when you were (inevitably) going through the code in the debugger. My first, fleeting thought was to wonder if people really found it that difficult to Ctrl-Tab between classes, but I quickly realized that this was hardly the important problem here (and really, to each his or her own). The bigger problem, as I explained a moment later but have thought through in more detail for this blog post, is that you’re writing code that’s more likely to generate defects so that, when you’re tasked with fixing those defects, you feel more comfortable.

This is like a general housing contractor saying, “I prefer to use sand as a building material over wood or brick for houses I build because it’s much easier to work with in the morning after the tide destroys the house each night.”

Winston realized that two plus two equals five and that the only way to prevent bugs is to cause them. Winston happily declared, “I love Debugger!”

More Bugs? Prove It!

So, if you’re a connoisseur of strict logic in debating, you’ll notice that I’ve begged the question here with my objection. That is, I ‘proved’ the reasoning fallacious by assuming that larger methods mean more bugs and then used that ‘proof’ as evidence that larger methods should be avoided. Well, fear not. A group of researchers from Stanford did an empirical analysis of OS bugs, and found:

Figure 5 shows that as functions grow bigger, error rates increase for most checkers. For the Null checker, the largest quartile of functions had an average error rate almost twice as high as the smallest quartile, and for the Block checker the error rate was about six times higher for larger functions. Function size is often used as a measure of code complexity, so these results confirm our intuition that more complex code is more error-prone.

Some of our most memorable experiences examining error reports were in large, highly complex functions with contorted control flow. The higher error rate for large functions makes a case for decomposition into smaller, more understandable functions.

This finding is not unique, though it nicely captures the issue. During my time in graduate school in a class on advanced topics in software engineering, we did a unit on the relationship between various coding practices and likelihood of bugs. A consistent theme is that as function size grows, number of defects per line of code grows (in other words, the number of defects per function grows faster than the number of lines per function).

So, What Now?

In the end, my response is quite simply this: get used to a more factored and distributed paradigm. Don’t worry about being lost in files and stack traces in the debugger. Why not? Well, because if you follow Uncle Bob Martin’s advice about factoring methods to be 4 or 5 lines, you wind up with methods that descriptively tell you what they’re going to do and do it perfectly. In other words, you don’t need to step into them because they’re too simple and concise for things to go wrong.

In this fashion, your debugging becomes different. You don’t have a pen and paper, a spreadsheet, a stack trace window, and row after row of “immediates,” all to keep track of what on Earth is going on. You set a breakpoint somewhere, and any method calls are innocent until proven guilty. You step over everything until something fishy happens (or until you become a client of some lumbering beast of a method that someone else wrote, which is virtually assured of having defects). This approach is almost universally rejected at first but infectious with time. I know that, as a “no bigger than the screen” guy originally, my initial reaction to the idea of all methods being 4 or 5 lines was “that’s stupid.” But try it sometime and you won’t go back.

Bye, Bye Debugger!

If you combine small factored methods and unit tests (which tend to have a natural synergy), you will find that your debugger skills begin to atrophy. Rather than reasoning about the code at runtime, you reason about it at compile time. And, that’s a powerful and important concept.

Reasoning about code at run time is programming by coincidence, as made famous by one of my favorite programming books. I mean, think about it — if you need the debugger to understand what the state of the code is and what’s going on, what you’re really saying when you build and run is, “I have no idea what this code is going to do by inspecting it, so I need to run the entire application to understand it.” You don’t understand your own code while you’re writing it. That’s a problem!

When you write small, factored methods and generally tested and decoupled code, you don’t have this problem. Take this to its logical conclusion and imagine a method that takes two int parameters and returns an int representing their sum. Do you need to set breakpoints and watches, tag immediate variables and look at a stack trace to know what this method will do? Of course not! You can reason about this method at compile time and take for granted that it will do the right thing at run time. When you write code like this, you’re telling the application how to behave. When you find yourself immersed in the debugger for three quarters of your day, you’re not dictating how the application will behave. Instead, you’re begging it to work as a kind of prayer since it’s pretty much out of your hands what’s going to happen. Don’t buy it? How many times have you been at your desk with a deadline looming saying “please, just work — why won’t you just work!?!”
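To make that concrete, the summing method just described would be nothing more than something like this:

// Simple enough to reason about at a glance; no breakpoints, watches, or immediates required.
public int Add(int first, int second)
{
    return first + second;
}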

This isn’t to say that I never use the debugger. But, with a combination of TDD, a continuous testing tool, and small, factored methods, it’s fairly rare. Generally, I just use it when my stuff is integrated with code not done this way. For my own stuff, if I ever do use it, it’s from the entry point of a unit test and not the application.

The cleaner the code that I write, the more my debugger skills atrophy. I watch in amazement at peers who are incredible with the debugger — and I say that with no irony. Some of them can get it to do things I didn’t realize were possible and that I freely admit are very cool. I don’t know how to do these things because I’m out of practice. But, I consider that good. If you’re getting a lot of practice de-bug-ing your code, it means you’re getting a lot of practice writing code with bugs in it.

So, let’s keep those methods small and get out of the practice of generating bugs.

(By the way, I’m going to be traveling overseas for the next couple of weeks, so this may be my last post for a while).


Preserve Developer Mindshare – Don’t Nitpick

Mindshare


I’m not particularly interested in marketing principles in the commercial sense of the word (though I find the psychology of argumentation and persuasion to be fascinating), so please excuse any failed parallelism in advance. Today, I want to talk about the concept of mind share, but to apply it to the life of a work-a-day developer.

For those not familiar with the concept, mind share is the awareness that a consumer has about a particular product. For instance, if I say “smart phone”, the first things that pop into your head are probably “iPhone”, “Android”, “Blackberry”, perhaps in exactly that order. If that’s the case, iPhone has a larger mindshare from your perspective as a consumer than Blackberry or Android.

Another concept that comes into play is referred to in the linked wikipedia article as “evoked set”. This refers to the set of items that you’ll think of at all without some kind of researching or prompting. In our example above, you didn’t think of Windows Mobile, and now that you read the name, you probably think, “oh yeah, them.” If that’s the case, your evoked set is the first three, and Windows Mobile is out in the cold.

But let’s come back to this later.

A Modest Proposal

The other day, I happened to overhear the substance of a code review. The code under review was a relatively minor set of changes, and so the suggested fixes and changes were also relatively minor and unremarkable, with one exception that was interesting to me because of its newness. The reviewer requested that the developer use the Visual Studio utilities “Sort Usings” and “Organize Usings”. For those not familiar with .NET, this is the Java equivalent of sorting your package imports or the C++ equivalent of sorting/organizing #includes. The only difference is that in C#/.NET, this is functionally useless from the compiled-code perspective. That is, C# took a lesson from its counterparts and had its compiler take care of this housekeeping. From a developer’s point of view, it only potentially has ramifications in terms of additional IntelliSense overhead.

Still, this struck me on the surface as a good practice, albeit one I had never really considered. I suppose that unused using statements are a form of dead code, having IntelliSense perform better is a mild plus, and sorting them is probably… nice, I guess… for those who ever inspect the using statements. I am not one of those people — I never write them because of Ctrl-Period, and I never look at them. I used to remove them because of the CodeRush issues list for a file, but I tend to turn that off since it tends to unceremoniously remove the using directive that the LINQ extension methods depend on and leave me with non-compiling code (fingers crossed for a fix in a future version).
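For anyone who hasn’t poked at those commands, they just tidy the block of using directives at the top of a file. Roughly speaking (the namespaces here are purely illustrative, and the exact ordering depends on your Visual Studio settings):

// Before: unsorted, with a directive that nothing in the file actually uses.
using System.Text;          // unused in this file
using MyCompany.Widgets;    // hypothetical project namespace
using System;
using System.Linq;

// After removing and sorting: unused directives gone, the rest alphabetized,
// with System namespaces first under the default settings.
using System;
using System.Linq;
using MyCompany.Widgets;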

Back to the story, the reviewer then went on to state that this would be required to ‘pass’ any future code reviews that he did. In spite of the apparent tiny benefit conferred by this practice, something about this proclamation seemed a little off and problematic to me. But, it slipped out of my mind in favor of more pressing matters until I was going through the process of promoting some code in a different scenario the other day, and suddenly, the unfocused nagging issue leaped into full view for me.

Anatomy of a Code Promotion

Generally speaking, a developer’s task is a simple one: implement features, fix any defects. So, if you’re given a task to implement, you implement it, take a moment to pat yourself on the back and move on. And, that’s what I did during my epiphany. Except, er no, wait.

I was using Rational Clear Case, so what I actually had to do was finish the change, and then check in my code. From there, to promote it, I had to open up Clear Case explorer, find my view, right click, and say “Deliver from Stream to Default”. From there, I had to launch Clear Case Project Explorer, find the integration stream, and click “Make a Baseline”. The policy is to name the baseline the default appended with an underscore and my login name. After that, I had to recommend the baseline. Ugh (and double Ugh for Clear Case as source control). Suddenly my life as a developer is not so simple. That’s no longer a one step process, but some number greater than one depending on our standards for granularity.

But wait, crap. I didn’t run all the unit tests to make sure the build wasn’t broken (actually, I tend to be fanatical about that, but I’m making a rhetorical point). I also didn’t run StyleCop to make sure I was conforming to the set of coding standards we have, nor did I run my other static analysis tools to check for simple mistakes. Alright, so time to do all that and re-deliver.

But wait. Clear Case forces a rebase operation prior to code delivery (the equivalent of SVN update). And, it’s generally good practice to run all of your tests and analysis tools prior to and after a rebase to make sure that you know whether you are responsible for any broken tests, standards violations, etc or whether you inherited them. Man, this is getting intense.

So alright, the promotion process is: check your code for correctness, run all tests, run all static analysis. Then, rebase and do all of that again. Then, follow that whole rigmarole about delivery and making baselines. My goodness — I haven’t even considered that I might have forgotten to add a file, so I should probably grab a clean copy of everything from source control and rebuild and, if anything breaks, re-deliver. And I haven’t even mentioned the possibility of handling merge conflicts.

Oh, and I now need to sort and organize my using statements. That seemed like a decent idea a few paragraphs ago, but now…

(I realize that there are optimizations that could be made to this particular process — different source control, continuous integration, etc. Point is, just about every process has some warts and, even if it doesn’t, managing concurrent changes and standards in a group environment requires more thought than we realize as we get accustomed to the process.)

Mindshare Revisited

In the face of all of this stuff, the mindshare metaphor begs consideration. If fixing our defects and implementing our features isn’t the iPhone, we’re in big trouble. From there, running unit tests and static analysis tools probably ought to be Android and Blackberry, but they may get pushed out a bit in favor of the particulars of wrangling the source control system and resolving merge conflicts, depending on the source control system and merge tool.

As we add more things, we have two options. We can either reduce the mind share of existing things in our evoked set, or we can spend time and energy expanding our evoked set. So, if we want to hold our efficiency of feature implementation constant, we’re going to have to leave some things out of our mindshare (and then perhaps be reminded of them at code reviews or with exasperated emails from team members with different evoked sets than ours, which we trade for exasperated emails of our own at things missing from their evoked sets). Alternatively, if we want to expand our mindshare, it’s going to come at the cost of a steep learning curve for all newer members and decreased efficiency across the board as we go through our rote checklist prior to each delivery.

Getting It Right

I don’t care for either of these options. So, I have two suggestions as the number of sticky notes and strings around our fingers needed to promote code grows.

  1. Don’t sweat the small stuff.
  2. Automate as much as possible.

In the case of “organize and sort usings”, I’d offer item (1). Something that provides no benefit to the end-product and questionable benefit to the development environment is something that ought not to occupy our mindshare as developers. But, in case I am just flat out wrong in my assessment of the benefit/detriment analysis, I’d offer option (2). Given that this is already implemented in Visual Studio, a small plugin running on the build machine could ensure that the using statements in all checked in code are always optimized, without adding to the maze of things developers have to remember.

And, to expand on this, I’d suggest that we in general move as many things into the (2) camp as possible, if we value them. Things like coding standards, static analysis, and best practices do matter, so why not enforce them with automatic, gated check-ins or code transforms on the build machine? That ensures they’re always right, without forcing up-front memorization and, more importantly, without distraction from the most important problem: “implement features and fix defects.” The closer to 100% of our mindshare that iPhone occupies, the better for all project stakeholders.


WordPress, Twenty-Ten and Image Resize

I discovered today that image resizing wasn’t working for the blog.  Amazingly, I’d never had an occasion where I cared about resizing an image until just now.  But, when I did, I discovered a frustrating thing.  In the “what you see is what you get” (WYSIWYG) editor for WordPress, the images were appearing correctly and looking good according to my sizing.  But in preview mode, they were rendering at their original size.

After a bit of poking around, I discovered that my style.css was defaulting image height and width to auto. I removed this (under #content img), and everything was as I would expect. This seems obvious if you’re just talking about the CSS of some page, but I didn’t think of it off the bat, since the theme’s handling of image sizing seems to run completely counter to what the WordPress editor does.