DaedTech

Stories about Software


Adding a Switch to Your Home Automation

Last time in this series, I covered turning a lamp on and off with a keychain. Today, I’m going to up the ante and describe how to control an overhead light with that same keychain. This will be accomplished by replacing the switch for the light that you want to control with a new switch that you’re going to order.

The benefits include not only the ability to remotely control the light itself, but also adding dimming capabilities to previously non-dimmable lights. In addition, the switch’s “soft” on and off behavior dramatically increases the lifespan of incandescent bulbs.

Prerequisites

A few things to check off your list before getting started:

  1. The light to be controlled must be an incandescent light with a wattage of at least 40. Do not use this switch on fluorescents, halogens, etc.!
  2. You must have the transceiver from the last post or this will not work.
  3. Tools: needle nose pliers, screwdriver(s), wire nuts, wire strippers/cutters, and a voltage tester.

Also, and perhaps most importantly, you’ll need some degree of comfort changing out a wall socket. I assume absolutely no responsibility for what you do here. Working with home electricity can be very dangerous, and if you are the least bit uncomfortable doing this, I recommend hiring an electrician to wire up the outlet.

Making the purchase

The first thing that you’re going to need to do is order your part. You’re going to want to order the X10 Wall Switch Module (WS467). The price on this varies, but you can probably get it for anywhere from $12 to $20. It is pictured here:

I recommend this one because it will fit into the same space as most existing standard toggle switches. If you are looking to replace a rocker switch, you can order WS12A, which tends to be a bit more pricey, but such is the cost of elegance.

Installation

The unit itself will come with instructions, but here is my description from experience:

  1. First, remove the wall plate from the box just to peer in and make sure there isn’t some kind of crazy wiring scenario going on that might make you want to abort the mission and/or call in an electrician. This might include lots of different wires using this switch as a stop along the way to other boxes or outlets in the house, for instance. Be careful while you do this, as the electricity is still live.
  2. Once you’re satisfied that the mission looks feasible, head to your circuit breaker and kill the power. Make sure power is off by using your voltage tester with one lead on the hot wire and the other on ground somewhere (if you didn’t figure out which was hot and which was ground, try both leads on the switch).
  3. Once you’re doubly sure power is no longer flowing, pop the screws on the switch itself that are holding it to the junction box and remove the switch.
  4. Your old switch should have two leads, each with one or more wires connected to it. At this point, you can basically just swap the old wireup for the new module’s wireup — the colors don’t matter (though I’ve adopted a convention with these switches of wiring black to hot). You’re going to need your needle nose pliers and wire nuts because, unlike most existing switches, the X10 module has wires instead of terminal receptacles.
  5. Once you’ve twisted up the wires and made sure no bare wires are showing anywhere, settle the switch into the junction box and screw it back in. You’re now going to restore power at the breaker box.
  6. When power is back on, verify that pushing the button on the switch turns lights on and pushing it again turns them off. This switch will always be manually operable. Also, don’t be surprised that the lights come on more gradually than you’re used to (it may seem weird at first, but I love this, personally).
  7. Once everything is working properly, you can replace the wall plate. But first, picking up from the first post in this series, match the dials on the switch to one of your remote’s settings (house code and unit code). You can always change this later, but you might as well do it now while the plate is still off.

Operation

Now, you’ve got it installed, and it should work manually or from your keychain remote. Generally speaking, wall switches are always operational both manually and remotely. These are not rocker/toggle switches but pushbutton switches, so they have no concept of state. Each time you press the button either on the switch or from the remote, the command boils down to “do the opposite of what you’re currently doing.” That is, there’s no concept of manual or remote push overriding the other — it’s just flipping the light from whatever state it was last in.

In addition to this functionality, you also get dimming functionality. If you hold the button in instead of simply pushing it, the light’s brightness will change. Holding it in when the light is off causes it to ramp up from dim to bright, and the opposite is true when the light is on – it will ramp down. On top of this, the light will ‘remember’ its last brightness setting. So, if I turn the light on by gradually bringing it up from dark and then turn it off, the next time I turn it on, it will ‘remember’ its previous brightness.

One final note is that there is a manual slide below the main switch. This is, in effect, a kill switch. If you push that, you will completely break the circuit and no amount of remote presses or X10 signals can activate it in this state. This is the equivalent of unplugging your transceiver module from the last post. If power is cut, the signals you send don’t matter.

Recap

So, now you’ve got a remote control capable of controlling two lights in the house. One is a lamp and the other an overhead light that can now be dimmed and brightened manually, as well as turned on and off remotely. (Next time I install a switch, I will take some pictures of what I’m doing and add them to the instructions above).


Composite

Quick Information/Overview

Pattern Type: Structural
Applicable Language/Framework: Agnostic OOP
Pattern Source: Gang of Four
Difficulty: Easy

Up Front Definitions

  1. Component: This is the abstract object that represents any and all members of the pattern
  2. Composite: Derives from component and provides the definition for the class that contains other components
  3. Leaf: A concrete node that encapsulates data/behavior and is not a container for other components.
  4. Tree: For our context here, tree describes the structure of all of the components.
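
Before getting into the example, here’s a minimal, generic sketch of how those roles typically fit together in C# (the class names here are placeholders, not part of the example that follows):

public abstract class Component
{
    // The operation that clients invoke without caring what kind of node they hold.
    public abstract void Operation();
}

public class Leaf : Component
{
    // A leaf just does its own work.
    public override void Operation()
    {
        Console.WriteLine("Leaf doing its part");
    }
}

public class Composite : Component
{
    private readonly List<Component> _children = new List<Component>();

    public void Add(Component child)
    {
        _children.Add(child);
    }

    // A composite delegates the operation to each of its children, which may
    // themselves be leaves or further composites.
    public override void Operation()
    {
        foreach (var child in _children)
        {
            child.Operation();
        }
    }
}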

The Problem

Let’s say that you get a call to write a little console application that allows users to make directories, create empty files, and delete the same. The idea is that you can create the whole structure in memory and then dump it to disk in one fell swoop. Marketing thinks this will be a hot seller because it will allow users to avoid the annoyance of disk I/O overhead when planning out a folder structure (you might want to dust off your resume, by the way).

Simple enough, you say. For the first project sprint, you’re not going to worry about sub-directories – you’re just going to create something that will handle files and some parent directory.

So, you create the following class (you probably create it with actual input validation, but I’m trying to stay focused on the design pattern):

public class DirectoryStructure
{
    private const string DefaultDirectory = ".";

    private readonly List<string> _filenames = new List<string>();

    private string _directory = DefaultDirectory;

    public string RootDirectory
    {
        get { return _directory; }
        set { _directory = string.IsNullOrEmpty(value) ? DefaultDirectory : value; }
    }

    public void AddFile(string filename)
    {
        _filenames.Add(filename);
        _filenames.Sort();
    }

    public void DeleteFile(string filename)
    {
        if (_filenames.Contains(filename))
        {
            _filenames.Remove(filename);
        }
    }

    public void PrintStructure()
    {
        Console.WriteLine("Starting in " + RootDirectory);
        _filenames.ForEach(filename => Console.WriteLine(filename));
    }

    public void CreateOnDisk()
    {
        if (!Directory.Exists(RootDirectory))
        {
            Directory.CreateDirectory(RootDirectory);
        }

        _filenames.ForEach(filename => File.Create(filename));
    }
}

Client code is as follows:

static void Main(string[] args)
{
    var myStructure = new DirectoryStructure();

    myStructure.AddFile("file1.txt");
    myStructure.AddFile("file2.txt");
    myStructure.AddFile("file3.txt");
    myStructure.DeleteFile("file2.txt");

    myStructure.PrintStructure();

    Console.Read();

    myStructure.CreateOnDisk();
}

That’s all well and good, so you ship and start work on sprint number 2, which includes the story that users want to be able to create one level of sub-directories. You do a bit of refactoring, deciding to make root a constructor parameter instead of a settable property, and then get to work on this new story.

You wind up with the following:

public class DirectoryStructure
{
    private const string DefaultDirectory = ".";

    private readonly List<string> _filenames = new List<string>();
    private readonly List<string> _directories = new List<string>();
    private readonly Dictionary<string, List<string>> _subDirectoryFilenames = new Dictionary<string, List<string>>();

    private readonly string _root;

    public DirectoryStructure(string root = null)
    {
        _root = string.IsNullOrEmpty(root) ? DefaultDirectory : root;
    }


    public void AddFile(string filename)
    {
        _filenames.Add(filename);
        _filenames.Sort();
    }

    public void AddFile(string subDirectory, string filename)
    {
        if (!_directories.Contains(subDirectory))
        {
            AddDirectory(subDirectory);
        }
        _subDirectoryFilenames[subDirectory].Add(filename);
    }

    public void AddDirectory(string directoryName)
    {
        _directories.Add(directoryName);
        _subDirectoryFilenames[directoryName] = new List<string>();
    }

    public void DeleteDirectory(string directoryName)
    {
        if (_directories.Contains(directoryName))
        {
            _directories.Remove(directoryName);
            _subDirectoryFilenames[directoryName] = null;
        }
    }

    public void DeleteFile(string filename)
    {
        if (_filenames.Contains(filename))
        {
            _filenames.Remove(filename);
        }
    }

    public void DeleteFile(string directoryName, string filename)
    {
        if (_directories.Contains(directoryName) && _subDirectoryFilenames[directoryName].Contains(filename))
        {
            _subDirectoryFilenames[directoryName].Remove(filename);
        }
    }

    public void PrintStructure()
    {
        Console.WriteLine("Starting in " + _root);
        foreach (var myDir in _directories)
        {
            Console.WriteLine(myDir);
            _subDirectoryFilenames[myDir].ForEach(filename => Console.WriteLine("\t" + filename));
        }
        _filenames.ForEach(filename => Console.WriteLine(filename));
    }

    public void CreateOnDisk()
    {
        if (!Directory.Exists(_root))
        {
            Directory.CreateDirectory(_root);
        }

        foreach (var myDir in _directories)
        {
            Directory.CreateDirectory(Path.Combine(_root, myDir));
            _subDirectoryFilenames[myDir].ForEach(filename => File.Create(Path.Combine(myDir, filename)));
        }
        _filenames.ForEach(filename => File.Create(filename));
    }
}

and client code:

static void Main(string[] args)
{
    var myStructure = new DirectoryStructure();

    myStructure.AddFile("file1.txt");
    myStructure.AddDirectory("firstdir");
    myStructure.AddFile("firstdir", "hoowa");
    myStructure.AddDirectory("seconddir");
    myStructure.AddFile("seconddir", "hoowa");
    myStructure.DeleteDirectory("seconddir");
    myStructure.AddFile("file2.txt");
    myStructure.AddFile("file3.txt");
    myStructure.DeleteFile("file2.txt");

    myStructure.PrintStructure();

    Console.Read();

    myStructure.CreateOnDisk();
}

Yikes. That’s starting to smell. You’ve had to add overloads for adding file and deleting file, add methods for add/delete directory, and append logic to print and create. Basically, you’ve had to either touch or overload every method in the class. Generally, that’s a surefire sign that you’re doin’ it wrong. But, no time for that now because here comes the third sprint. This time, the business wants two levels of nesting. So, you get started and you see just how ugly things are going to get. I won’t provide all of your code here so that the blog can keep a PG rating, but here’s the first awful thing that you had to do:

private readonly List<string> _filenames = new List<string>();
private readonly List<string> _directories = new List<string>();
private readonly Dictionary<string, List<string>> _subDirectoryFilenames = new Dictionary<string, List<string>>();
private readonly Dictionary<string, List<string>> _subDirectoryNames;
private readonly Dictionary<string, Dictionary<string, List<string>>> _subSubDirectoryFilenames = new Dictionary<string, Dictionary<string, List<string>>>();

You also had to modify or overload every method yet again, bringing the method total to 12 and the complexity of each method to a larger figure. You’re pretty sure you can’t keep this up for an arbitrary number of sprints, so you send out your now-dusted off resume and foist this stinker on the hapless person replacing you.

So, What to Do?

What went wrong here is relatively easy to trace. Let’s backtrack to the start of the second sprint, when we needed to support sub-directories. As we’ve seen in some previous posts in this series, the first foray at implementing the second sprint gives off the code smell known as primitive obsession. This is a code smell wherein bunches of primitive types (string, int, etc.) are used in ad-hoc fashion to stand in for what really ought to be a class.

In this case, the smell is coming from the series of lists and dictionaries centering around strings to represent file and directory names. As the sprints go on, it’s time to recognize that there is a need for at least one class here, so let’s create it and call it “SpeculativeDirectory” (so as not to confuse it with the C# Directory class).

public class SpeculativeDirectory
{
    private const string DefaultDirectory = ".";

    private readonly HashSet<SpeculativeDirectory> _subDirectories = new HashSet<SpeculativeDirectory>();

    private readonly HashSet<string> _files = new HashSet<string>();

    private readonly string _name = string.Empty;
    public string Name { get { return _name; } }

    public SpeculativeDirectory(string name)
    {
        _name = string.IsNullOrEmpty(name) ? DefaultDirectory : name;
    }

    public SpeculativeDirectory GetDirectory(string directoryName)
    {
        return _subDirectories.FirstOrDefault(dir => dir.Name == directoryName);
    }

    public string GetFile(string filename)
    {
        return _files.FirstOrDefault(file => file == filename);
    }

    public void Add(string file)
    {
        if(!string.IsNullOrEmpty(file))
            _files.Add(file);
    }

    public void Add(SpeculativeDirectory directory)
    {
        if (directory != null && !string.IsNullOrEmpty(directory.Name))
        {
            _subDirectories.Add(directory);
        }
    }

    public void Delete(string file)
    {
        _files.Remove(file);
    }

    public void Delete(SpeculativeDirectory directory)
    {
        _subDirectories.Remove(directory);
    }

    public void PrintStructure(int depth)
    {
        string myTabs = new string(Enumerable.Repeat('\t', depth).ToArray());
        Console.WriteLine(myTabs + Name);

        foreach (var myDir in _subDirectories)
        {
            myDir.PrintStructure(depth + 1);
        }
        foreach (var myFile in _files)
        {
            Console.WriteLine(myTabs + "\t" + myFile);
        }
    }

    public void CreateOnDisk(string path)
    {
        string myPath = Path.Combine(path, Name);

        if (!Directory.Exists(myPath))
        {
            Directory.CreateDirectory(myPath);
        }

        _files.ToList().ForEach(file => File.Create(Path.Combine(myPath, file)));
        _subDirectories.ToList().ForEach(dir => dir.CreateOnDisk(myPath));
    }

}

And, the DirectoryStructure class is now:

public class DirectoryStructure
{
    private readonly SpeculativeDirectory _root;

    public DirectoryStructure(string root = null)
    {
        _root = new SpeculativeDirectory(root);
    }


    public void AddFile(string filename)
    {
        _root.Add(filename);
    }

    public void AddFile(string directoryName, string filename)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            myDirectory.Add(filename);
        }
    }

    public void AddDirectory(string directoryName)
    {
        _root.Add(new SpeculativeDirectory(directoryName));
    }

    public void DeleteFile(string filename)
    {
        _root.Delete(filename);
    }

    public void DeleteFile(string directoryName, string filename)
    {
        var myDirectory = _root.GetDirectory(directoryName);
        if (myDirectory != null)
        {
            myDirectory.Delete(filename);
        }
    }

    public void DeleteDirectory(string directoryName)
    {
        _root.Delete(directoryName);
    }

    public void PrintStructure()
    {
        _root.PrintStructure(0);
    }

    public void CreateOnDisk()
    {
        _root.CreateOnDisk(string.Empty);
    }
}

The main program that invokes this class is unchanged. So, now, notice the structure of SpeculativeDirectory. It contains a collection of strings, representing files, and a collection of SpeculativeDirectory objects, representing sub-directories. For things like PrintStructure() and CreateOnDisk(), notice that we’re now taking advantage of recursion.

This is extremely important because what we’ve done here is future-proof ourselves for sprint 3 much better than before. It’s still going to be ugly and involve more and more overloads, but at least it won’t require defining increasingly nested (and insane) dictionaries in the DirectoryStructure class.

Speaking of DirectoryStructure, does this class serve a purpose anymore? Notice that the answer is “no, not really”. It basically defines a root directory and wraps its operations. So, let’s get rid of that before we do anything else.

To do that, we can just change the client code to the following and delete DirectoryStructure:

static void Main(string[] args)
{
    var myDirectory = new SpeculativeDirectory(".");
    myDirectory.Add("file1.txt");
    myDirectory.Add(new SpeculativeDirectory("firstdir"));
    myDirectory.GetDirectory("firstdir").Add("hoowa");
    var mySecondDir = new SpeculativeDirectory("seconddir");
    myDirectory.Add(mySecondDir);
    myDirectory.GetDirectory("seconddir").Add("hoowa");
    myDirectory.Delete(mySecondDir);
    myDirectory.Add("file2.txt");
    myDirectory.Add("file3.txt");
    myDirectory.Delete("file2.txt");

    myDirectory.PrintStructure(0);

    Console.Read();

    myDirectory.CreateOnDisk(".");

}

Now, we’re directly using the directory object and we’ve removed a class in addition to cleaning things up. The API still isn’t perfect, but we’re gaining some ground. So, let’s turn our attention now to cleaning up SpeculativeDirectory. Notice that we have a bunch of method pairs: GetDirectory/GetFile, Add(Directory)/Add(string), Delete(Directory)/Delete(string). This kind of duplication is a code smell — we’re begging for polymorphism here.

Notice that we are performing operations routinely on SpeculativeDirectory and performing the same operations on the string representing a file. It is worth noting that if we had a structure where file and directory inherited from a common base or implemented a common interface, we could perform operations on them just once. And, as it turns out, this is the crux of the Composite pattern.

Let’s see how that looks. First, we’ll define a SpeculativeFile object:

public class SpeculativeFile
{
    private readonly string _name;
    public string Name { get { return _name; } }

    public SpeculativeFile(string name)
    {
        _name = name ?? string.Empty;
    }

    public void Print(int depth)
    {
        string myTabs = new string(Enumerable.Repeat('\t', depth).ToArray());
        Console.WriteLine(myTabs + Name);
    }

    public void CreateOnDisk(string path)
    {
        File.Create(Path.Combine(path, _name));
    }
}

This is pretty simple and straightforward. The file class knows how to print itself and how to create itself on disk, and it knows that it has a name. Now our task is to have a common inheritance model for file and directory. We’ll go with an abstract base class since they are going to have common implementations and file won’t have an implementation, per se, for add and delete. Here is the common base:

public abstract class SpeculativeComponent
{
    private readonly string _name;
    public string Name { get { return _name; } }

    private readonly HashSet<SpeculativeComponent> _children = new HashSet<SpeculativeComponent>();
    protected HashSet<SpeculativeComponent> Children { get { return _children; } }

    public SpeculativeComponent(string name)
    {
        _name = name ?? string.Empty;
    }

    public virtual SpeculativeComponent GetChild(string name) { return null; }

    public virtual void Add(SpeculativeComponent component) { }

    public virtual void DeleteByName(string name) { }

    public void Print()
    {
        Print(0);
    }

    public void CreateOnDisk()
    {
        CreateOnDisk(Name);
    }

    protected virtual void Print(int depth)
    {
        string myTabs = new string(Enumerable.Repeat('\t', depth).ToArray());
        Console.WriteLine(myTabs + Name);

        foreach (SpeculativeComponent myChild in _children)
        {
            myChild.Print(depth + 1);
        }
    }

    protected virtual void CreateOnDisk(string path)
    {
        foreach (var myChild in _children)
        {
            myChild.CreateOnDisk(Path.Combine(path, Name));
        }
    }
        
}

A few things to note here. First of all, our recursive Print() and CreateOnDisk() methods are each divided into two methods, one public and one protected. This continues to allow for recursive calls without awkwardly forcing the user to pass in zero or empty string for depth/path. Notice also that common concerns for the two different types of nodes (file and directory) now live here, some stubbed as do-nothing virtuals and others implemented. The reason for this is conformance to the pattern — while files and directories share some overlap, some operations are clearly not meaningful for both (particularly adding/deleting and anything else regarding children). So, you do tend to wind up with the leaves (SpeculativeFile) ignoring some inherited functionality, but this is generally a small price to pay for avoiding duplication and gaining the ability to recurse to n levels.

With this base class, we have pulled a good bit of functionality out of the file class, which is now this:

public class SpeculativeFile : SpeculativeComponent
{
    public SpeculativeFile(string name) : base(name) {}

    protected override void CreateOnDisk(string path)
    {
        File.Create(Path.Combine(path, Name));
        base.CreateOnDisk(path);
    }
}

Pretty simple. With this new base class, here is the new SpeculativeDirectory class:

public class SpeculativeDirectory : SpeculativeComponent
{
    public SpeculativeDirectory(string name) : base(name) { }

    public override SpeculativeComponent GetChild(string name)
    {
        return Children.FirstOrDefault(child => child.Name == name);
    }

    public override void Add(SpeculativeComponent child)
    {
        if(child != null)
            Children.Add(child);
    }

    public override void DeleteByName(string name)
    {
        var myMatchingChild = Children.FirstOrDefault(child => child.Name == name);
        if (myMatchingChild != null)
        {
            Children.Remove(myMatchingChild);
        }
    }

    protected override void CreateOnDisk(string path)
    {
        string myPath = Path.Combine(path, Name);
        if (!Directory.Exists(myPath))
        {
            Directory.CreateDirectory(myPath);
        }

        base.CreateOnDisk(path);
    }
}

Wow. A lot more focused and easy to reason about, huh? And, finally, here is the new API:

static void Main(string[] args)
{
    var myDirectory = new SpeculativeDirectory(".");
    myDirectory.Add(new SpeculativeFile("file1.txt"));
    myDirectory.Add(new SpeculativeDirectory("firstdir"));
    myDirectory.GetChild("firstdir").Add(new SpeculativeFile("hoowa"));
    myDirectory.Add(new SpeculativeDirectory("seconddir"));
    myDirectory.GetChild("seconddir").Add(new SpeculativeFile("hoowa"));
    myDirectory.DeleteByName("seconddir");
    myDirectory.Add(new SpeculativeFile("file2.txt"));
    myDirectory.Add(new SpeculativeFile("file3.txt"));
    myDirectory.DeleteByName("file2.txt");

    myDirectory.Print();

    Console.Read();

    myDirectory.CreateOnDisk();

}

Even the API has improved since our start. We’re no longer creating this unnatural “structure” object. Now, we just create a root directory and add things to it with simple, readable API calls.

Now, bear in mind that this is not as robust as it could be, but that’s what you’ll do in sprint 3, since your sprint 2 implemented sub-directories for N depths of recursion and not just one. 🙂

A More Official Explanation

According to dofactory, the Composite Pattern’s purpose is to:

Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly.

What we’ve accomplished in rescuing the speculative directory creation app is to allow the main program to perform operations on nodes in a directory tree without caring whether they are files or directories (with the exception of actually creating them). This is most evident in the printing and writing to disk. Whether we created a single file or an entire directory hierarchy, we would treat it the same for creating on disk and for printing.

The elegant concept here is that we can build arbitrarily large structures with arbitrary conceptual tree structures and do things to them uniformly. This is important because it allows the encapsulation of tree node behaviors within the objects themselves. There is no master object like the original DirectoryStructure that has to walk the tree, deciding how to treat each element. Any given node in the tree knows how to treat itself and its sub-elements.
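
As a quick illustration using the classes from this example, client code can hold any node through the SpeculativeComponent base type and make exactly the same calls regardless of what’s actually behind it:

// The same reference can point at a lone file or an entire directory tree,
// and Print() does the right thing either way.
SpeculativeComponent node = new SpeculativeFile("readme.txt");
node.Print();

node = new SpeculativeDirectory("docs");
node.Add(new SpeculativeFile("readme.txt"));
node.Add(new SpeculativeDirectory("images"));
node.Print();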

Other Quick Examples

Other places where one might use the composite pattern include:

  1. GUI composition, where widgets can be actual widgets or containers of other widgets (Java Swing, WPF/XAML, etc.).
  2. Complex Chain of Responsibility structures, where some nodes handle events and others simply figure out whom to hand them off to.
  3. A menu structure where nodes can be either actions or sub-menus (sketched below).
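
As a rough, hypothetical sketch of that last one (every name here is invented for illustration), a menu tree might look something like this:

public abstract class MenuComponent
{
    private readonly string _label;
    public string Label { get { return _label; } }

    protected MenuComponent(string label)
    {
        _label = label ?? string.Empty;
    }

    public abstract void Select();
}

public class MenuAction : MenuComponent
{
    private readonly Action _action;

    public MenuAction(string label, Action action) : base(label)
    {
        _action = action;
    }

    // Leaf: selecting it performs the action it wraps.
    public override void Select()
    {
        _action();
    }
}

public class SubMenu : MenuComponent
{
    private readonly List<MenuComponent> _items = new List<MenuComponent>();

    public SubMenu(string label) : base(label) { }

    public void Add(MenuComponent item)
    {
        _items.Add(item);
    }

    // Composite: selecting it displays its children, each of which may be
    // an action or another sub-menu.
    public override void Select()
    {
        foreach (var item in _items)
        {
            Console.WriteLine(item.Label);
        }
    }
}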

A Good Fit – When to Use

The pattern is useful when (1) you want to represent an object hierarchy and (2) there is some context in which you want to be able to ignore differences between the objects and treat them uniformly as a client. This is true in our example here in the context of printing the structure and writing it to disk. The client doesn’t care whether something is a file or a directory – he just wants to be able to iterate over the whole tree performing some operation.

Generally speaking, this is good to use anytime you find yourself looking at a conceptual tree-like structure and iterating over the whole thing in the context of a control flow statement (if or case). In our example, this was achieved indirectly by different method overloads, but the result in the end would have been the same if we had been looking at a single method with a bit of logic saying “if node is a file, do x, otherwise, do y”.
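
To put a finer point on it, the warning sign tends to look something like this contrived sketch (not code from the example above), where some master object walks the tree and branches on node type:

// The tell-tale smell: the walker decides how to treat each node instead of
// letting each node treat itself.
public static void PrintNode(SpeculativeComponent node, int depth)
{
    string myTabs = new string('\t', depth);
    if (node is SpeculativeFile)
    {
        Console.WriteLine(myTabs + node.Name);
    }
    else if (node is SpeculativeDirectory)
    {
        Console.WriteLine(myTabs + node.Name);
        // ...followed by some way of getting at the directory's children so
        // that we can recurse over each of them here.
    }
}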

Square Peg, Round Hole – When Not to Use

There are some subtle considerations here. You don’t want to use this if all of your nodes can be the same type of object, such as the case of some simple tree structure. If you were, for example, creating a sorted tree for fast lookup, where each node had some children and a payload, this would not be an appropriate use of composite.

Another time you wouldn’t want to use this is if a tree structure is not appropriate for representing what you want to do. If our example had not had any concept of recursion and were only for representing a single directory, this would not have been appropriate.

So What? Why is this Better?

The code cleanup here speaks for itself. We were able to eliminate a bunch of method overloads (or conditional branching, if we had gone that way), making the code more maintainable. In addition, it allows elimination of a structure that rots as it grows — right out of the box, the composite pattern with its tree structure allows handling of arbitrarily deep and wide trees. And, finally, it allows clients to walk the tree structure without concerning themselves with what kind of node they’re processing or how to navigate to the next nodes.


Fixing the Crackle and Pop in Ubuntu Sound

So, as I blogged previously, I’ve been re-appropriating some old PCs and sticking Ubuntu 11.10 on them. I had one doing exactly what I wanted it to do in the context of my home automation network, with one small but annoying exception. Whenever I would play music through headphones or speakers, there would be a crackling noise. A Google search turned up a lot of stuff that I had to weed through, but ultimately, this is what did the trick:

I opened /etc/pulse/default.pa and found the line

load-module module-udev-detect

This was located in an “ifexists” clause. It should be replaced by this:

load-module module-udev-detect tsched=0

From there, reboot. I can’t speak to whether this will fix everyone’s crackle or just some people’s, and I can’t speak to why this setting isn’t enabled by default, but c’est la vie. This is what did the trick for me, and hopefully it fixes the problem for someone else as well.

By

Writing Maintainable Code Demands Creativity

Writing maintainable code is for “Code Monkeys”?

This morning, I read an interesting blog post from Erik McClure. The premise of the post is that writing maintainable code is sufficiently boring and frustrating as to discourage people from the programming profession. A few excerpts from the post are:

There is an endless stream of rheteric discussing how dangerous writing clever code is, and how good programmers don’t write clever code, or fast code, but maintainable code, that can’t have clever for loop statements or fancy tricks. This is wrong – good codemonkeys write maintainable code.

and

What I noticed was that it only became intolerable when I was writing horrendously boring, maintainable code (like unit tests). When I was programming things I wasn’t supposed to be programming, and solving problems in ways I’m not supposed to, my creativity had found its outlet. Programming can not only be fun, but real programming is, itself, an art, a solution to a problem that itself embodies a certain elegance. Society in general seems to be forgetting what makes us human in the wake of a digital revolution that automatizes menial tasks. Your creativity is the most valuable thing you have.

When you write a function that does things the right way, when you refactor a clever subroutine to something that conforms to standards, you are tearing the soul out of your code. Bit by bit, you forget who you are and what the code means. The standards may enforce maintainable and well-behaved code, but it does so at the cost of your individuality. Coding becomes middle-school math, where you have to solve the same, reworded problem a hundred times to get a good grade so you can go do something else that’s actually useful. It becomes a means to an end, not an adventure in and of itself.

Conformity for the Sake of Maintainability

There seem to be a few different themes here. The first one I see is one with which I have struggled myself in the past: chafing at being forced to change the way you do things to conform to a group convention. I touched on that here and here. The reason that I mention this is the references to “[conforming] to standards” and the apparent justification of those standards being that they make code “maintainable”. The realpolitik of this situation is such that it doesn’t really matter what justification is cited (appeal to maintainability, appeal to convention, appeal to anonymous authority, appeal to named authority, appeal to threat, etc). In the end, it boils down to “because I said so”. I mention this only insofar as I will dismiss this theme as not having much to do with maintainability itself. Whatever Erik was forced to do may or may not have actually had any bearing whatsoever on the maintainability of the code (i.e. “maintainability” could have been code for “I just don’t like what you did, but I should have an official sounding reason”).

Maintainable Code is Boring Code

So, on to the second theme, which is that writing maintainable code is boring. In particular Erik mentions unit tests, but I’d hazard a guess that he might also be talking about small methods, SRP classes, and other clean coding principles. And, I actually agree with him to an extent. Writing code like that is uneventful in some ways that people might not be used to.

That is, say that you don’t perform unit tests, and you write large, coupled, complex classes and methods. When you fire up your application for the first time after a few hours of coding, that’s pretty exciting. You have no idea what it’s going to do, though the smart money is on “crash badly”. But, if it doesn’t, and it runs, that’s a heady feeling and a rush — like winning $100 in the lottery. The work is also interesting because you’re going to be spending lots of time in the debugger, writing bunches of local variables down on paper to keep them straight. Keeping track of all of the strands of your code requires full concentration, and there’s a feeling of incredible accomplishment when you finally track down that needle in-a-haystack bug after an all-nighter.

On the flip side, someone who writes a lot of tests and conforms to the clean code/craftsmanship mantra has a less exciting life. If you truly practice TDD, the first time you fire up the application, you already know it’s going to work. The lottery-game excitement of longshot odds with high payoff is replaced by a dependable salary. And, as for the maddening all-nighter bugs, those too are gone. You can pretty much reproduce a problem immediately, and solve it just as quickly with an additional failing test that you make pass. The underdog, down by a lot all game, followed by miracle comeback is replaced by a game where you’re winning handily from wire to wire. All of the roller-coaster highs and lows with their panicked all nighters and miracle finishes are replaced by you coming in at 9, leaving at 5, and shipping software on time or ahead of schedule.

Making Code Maintainable Is Brainless

The third main theme that I saw was the idea that writing clever code and writing maintainable code are mutually exclusive, and that the latter is brainless. Presumably, this is colored to some degree by the first theme, but on its own, the implication is that maintainable code is maintainable because it is obtuse and insufficient to solve the problem. That is, instead of actually solving the problems that they’re tasked with, the maintainability-focused drones oversimplify the problem and settle for meeting most, but not all, of the requirements of the software.

I say this because of Erik’s vehement disagreement with the adage that roughly says “clever code is bad code”. I’ve seen this pithy expression explained in more detail by people like Uncle Bob (Robert Martin) and I know that it requires further explanation because it actually sounds discouraging and draconian stated simply. (Though, I think this is the intended, provocative effect to make the reader/listener demand an explanation). But, taken at face value I would agree with Erik. I don’t relish the thought of being paid a wage to keep quiet and do stupid things.

Maintainability Reconsidered

Let’s pull back for a moment and consider the nature of software and software development. In his post, Erik bleakly points out that software “automatizes[sic] menial tasks”. I agree with his take, but with a much more optimistic spin — I’m in the business of freeing people from drudgery. But, either way, there can be little debate that the vast majority of software development is about automating tasks — even game development, which could be said to automate pretend playground games, philosophically (kids don’t need to play “cops and robbers” on the playground using sticks and other makeshift toys when games about detectives and mobsters automate this for them).

And, as we automate tasks, what we’re doing is taking tasks that have some degree of intrinsic complexity and falling on the grenade of that complexity so that completing the task is simple, intuitive, and even pleasant (enter user experience and graphic design) for prospective users. So, as developers, we deal with complexity so that end users don’t have to. We take complicated processes and model them, and simplify them without oversimplifying them. This is at the heart of software development and it’s a task so complex that all manner of methodologies, philosophies, technologies, and frameworks have been invented in an attempt to get it right. We make the complex simple for our non-technical end users.

Back in the days when software solved simpler problems than it does now, things were pretty straightforward. There were developers and there were users. Users didn’t care what the code looked like internally, and developers generally operated alone or perhaps in small teams for any given task. In this day and age, end-users still don’t care what the code looks like, but development teams are large, sometimes enormous, and often distributed geographically, temporally, and according to specialty. You no longer have a couple of people that bang out all necessary code for a project. You have library writers, database people, application coders, graphic designers, maintenance programmers etc.

With this complexity, an interesting paradigm has emerged. End-users are further removed, and you have other, technical users as well. If you’re writing a library or an IDE plugin, your users are other programmers. If you’re writing any code, the maintenance programmer that will come along later is one of your users. If you’re an application developer, a graphic designer is one of your users. Sure, there are still end-users, but there are more stakeholders now to consider than there used to be.

In light of this development, writing code that is hard to maintain and declaring that this is just how you do things is a lot like writing a piece of code with an awful user interface and saying to your users “what do you want from me — it works, doesn’t it?” You’re correct, and you’re destined to go out of business. If I have a programmer on my team who consistently and proudly writes code that only he understands and only he can decipher, I’m hoping that he’s on another team as soon as possible. Because the fact of the matter is that anybody can write code that meets the requirements, but only a creative, intelligent person can do it in such a way that it’s quickly understood by others without compromising the correctness and performance of the solution.

Creativity, Cleverness and Maintainability

Let’s say that I’m working and I find code that sorts elements in a list using bubble sort, which is conceptually quite simple. I decide that I want to optimize, so I implement quick sort, which is more complex. One might argue that I’m being much more clever, because quick sort is a more elegant solution. But, quicksort is harder for a maintenance programmer to grasp. So, is the solution to leave bubble sort in place for maintainability? Clearly not, and if someone told Erik to do that, I understand and empathize with his bleak outlook. But then, the solution also isn’t just to slap quicksort in and call it a day either. The solution is to take the initial implementation and break it out into methods that wrap the various control structures and have descriptive names. The solution is to eliminate one method with a bunch of incrementers and decrementers in favor of several with more clearly defined scope and purpose. The solution is, in essence, to teach the maintenance programmer quicksort by making your quicksort code so obvious and so readable that even the daft could grasp it.
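
To make that concrete, here’s a rough sketch (illustrative only, not a tuned or hardened implementation) of quicksort broken out so that the method names carry the explanation:

public static class Sorter
{
    public static void QuickSort(List<int> items)
    {
        SortRange(items, 0, items.Count - 1);
    }

    private static void SortRange(List<int> items, int first, int last)
    {
        if (first >= last)
        {
            return;
        }

        int pivotPosition = PartitionAroundPivot(items, first, last);
        SortRange(items, first, pivotPosition - 1);
        SortRange(items, pivotPosition + 1, last);
    }

    // Moves everything smaller than the pivot (the last element in the range)
    // to its left and returns the pivot's final resting place.
    private static int PartitionAroundPivot(List<int> items, int first, int last)
    {
        int pivot = items[last];
        int nextSmallerSlot = first;

        for (int current = first; current < last; current++)
        {
            if (items[current] < pivot)
            {
                Swap(items, current, nextSmallerSlot);
                nextSmallerSlot++;
            }
        }

        Swap(items, nextSmallerSlot, last);
        return nextSmallerSlot;
    }

    private static void Swap(List<int> items, int first, int second)
    {
        int temp = items[first];
        items[first] = items[second];
        items[second] = temp;
    }
}

Whether or not you’d name things exactly this way, the point is that a maintenance programmer can follow the algorithm from the method names alone.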

That is not easy to do. It requires creativity. It requires knowing the subject matter not just well enough to get it right through trial and error, not just well enough to know it cold and not just well enough to explain it to the brightest pupil, but well enough that your explanation shines through the code and is unmistakable. In other words, it requires creativity, mastery, intelligence, and clarity of thought.

And, when you do things this way, unit tests and other ‘boring’ code become less boring. They let you radically alter the internal mechanism of an algorithm without changing its correctness. They let you conduct time trials on the various components as you go to ensure that you’re not sacrificing performance. And, they document further how to use your code and clarify its purpose. They’re no longer superfluous impositions but tools in your arsenal for solving problems and excelling at what you do. With a suite of unit tests and refactored code, you’re able to go from bubble sort to quick sort knowing that you’ll get immediate feedback if something goes wrong, allowing you to focus exclusively on a slick algorithm. They’ll even allow you to go off tilting at tantalizing windmills like an unprecedented linear time sorting — hey, if the tests run in N time and all go green, you’re up for some awards and speaking engagements. For all that cleverness, you ought to get something.

So Why is “Cleverness” Bad?

What do they mean that cleverness is bad, anyway? Why say something like that? The aforementioned Bob Martin, in a video presentation I once watched, said something like “you know you’re looking at good code when someone reading it is saying ‘uh-huh’, ‘uh-huh’, ‘yep’, ‘makes sense’, ‘uh-huh'”. Contrast this with code that you see where your first reaction is “What on Earth…?!?” That is often the reaction to non-functional or incorrect code, but it is just as frequently the reaction to ‘clever’ code.

The people who believe in the idea of avoiding ‘clever’ code are talking about Rube-Goldbergian code, generally employed to work around some hole in their knowledge. This might refer to someone who defines 500 functions containing 1 through 500 if statements because he isn’t aware of the existence of a “for” loop. It may be someone who defines and heavily uses something called IndexedList because he doesn’t know what an array is. It may be this, this, this or this. I’ve seen this in the wild, where I was looking at someone’s code and I said to myself, “for someone who didn’t know what a class is, this is a fairly clever oddball attempt to replicate the concept.”

The term ‘clever’ is very much tongue-in-cheek in the context of clean coding and programming wisdom. It invariably means a quixotic, inferior solution to a problem that has already been solved and whose solution is common knowledge, or it means a needlessly complex, probably flawed way of doing something new. Generally speaking, the only person who thinks that it is actually clever, sans quotes, is the person who did it and is proud of it. If someone is truly breaking new ground, that solution won’t be described as ‘clever’ in this sense, but probably as “innovative” or “ground-breaking” or “impressive”. Avoiding ‘clever’ implementations is about avoiding pointless hacks and bad reinventions of the wheel — not avoiding using one’s brain. If I were coining the phrase, I’d probably opt for “cute” over “clever”, but I’m a little late to the party. Don’t be cute — put the considerable brainpower required for elegant problem solving to problems that are worth solving and that haven’t already been solved.


Links As Buttons With CSS

I’ve recently started working on my home automation web server in earnest, and am trying to give it a nice look and feel. This is made more interesting by the fact that I’m developing a site intended specifically to be consumed by desktops, laptops, tablets, and phones alike. With these constraints, it becomes important to have big, juicy click targets for users, since there’s nothing more irritating than trying to “click” tiny hyperlinks on a phone.

To do this, I decided that what I wanted was a button. But, I’m using Spring MVC and I want to avoid form submissions for navigation and handle that through hyperlinking. So, after some experimentation, I came up with the following via a composite of things on other sites and my own trial and error tweaking:

/* This is for big content nav buttons */
a.button {
     padding: 5px 10px;
     background: -webkit-gradient(linear, left top, left bottom, from(#CC6633), to(#CC3300));
     background: -moz-linear-gradient(top,  #CC6633,  #CC3300);
     -moz-border-radius: 16px;
     -webkit-border-radius: 16px;
     text-shadow: 1px 1px #666;
     color: #fff;
     height: 50px;
     width: 120px;
     text-decoration: none;
}

table a.button {
    display: block;  
    margin-right: 20px;  
    width: 140px;  
    font-size: 20px;  
    line-height: 44px;  
    text-align: center;  
    text-decoration: none;  
    color: #bbb;  
}

a.button:hover {
     background: #CC3300;
}
a.button:active {
     background: -webkit-gradient(linear, left top, left bottom, from(#CC3300), to(#CC6633));
     background: -moz-linear-gradient(top,  #CC3300,  #CC6633);
}

/* These button styles are for navigation buttons - override the color scheme for normal buttons */
a.headerbutton {
    padding: 5px 10px;
    background: -webkit-gradient(linear, left top, left bottom, from(#1FA0E0), to(#072B8A));
    background: -moz-linear-gradient(top,  #1FA0E0,  #072B8A);
    -moz-border-radius: 16px;
    -webkit-border-radius: 16px;
    text-shadow: 1px 1px #666;
    color: #fff;
    height: 50px;
    width: 120px;
    text-decoration: none;
}

Nothing here is really rocket science, but it’s kind of handy to see it all in one place. The main things that I wanted to note were the gradients and border radii to make it look pretty, the -webkit/-moz prefixed declarations to cover browser versions that don’t yet support the standard CSS3 properties, and the shadowing and fixed height/width to complete the button effect.

In terms of client code, I’m invoking this from JSP pages with plain anchor tags that simply carry class="button" (or class="headerbutton" for the navigation variant); for example, a “Say Hello Sample” link.

And, that’s about it. Probably old hat for many, but I haven’t done any web development for a while, so this was handy for me, and it’s nice to have as reference. Hopefully, it helps someone else as well.

Edit: Forgot to include the screenshot when I posted this earlier. Here’s the end result (and those are normal a tags):