
Getting Started With Android Development

I posted some time back about developing for Android and getting set up with the Android SDK, the Eclipse plugin, and all that. For the last six months, I haven’t really had time to get back to that. But now I’m starting to delve into Android development in earnest, so this post (and potentially other upcoming ones) is going to be about my experience starting to write an Android application. I think I can offer some interesting perspective here: I am an experienced software developer with breadth of experience as well as depth in some technologies, but I am completely new to the Android SDK. Hopefully, my experiences, and the frustrations I overcome along the way, can help people in a similar position. This also means that you’d be learning along with me–it’s entirely possible that some of the things I post may be wrong, incomplete, or misguided in the beginning.

This post kind of assumes that your knowledge is like mine. I have a good, if a bit rusty from a year and a half of disuse, working knowledge of Eclipse and J2EE development therein. I’m also familiar with web development and WPF, so the concept of object-oriented plumbing code with a declarative markup layout for the view is quite familiar to me.

Notes about Setup

Just as a bit of background, I do have some things set up that I’m not going to bother going through in this particular post. I have Eclipse installed and configured, running on a Windows XP Pro machine. I also have, at my disposal, a Samsung Epic 4G running Android 2.3. (I forget the name of the food that accompanies this version, and, to be perfectly honest, something strikes me as sort of lame about naming your releases after desserts. Different strokes, I guess.) I also have installed ADB and the drivers necessary for connecting my computer to my phone. And finally, I have the Android virtual machine emulator, though I think that just comes with the Eclipse SDK plugin or something. I don’t recall having to do anything to get that going.

Creating a new Android project

One of the things that’s difficult when you’re new to some kind of development framework is separating what actually matters to your basic activities and what doesn’t. Any framework-oriented development, in contrast to, say, writing a C file and compiling it from the command line with GCC, dumps a lot of boilerplate on you. It’s hard to sort out what actually matters at first, especially if your “training” consists of internet tutorials and trial and error. So I’m going to note here what actually turned out to matter in creating a small, functional app, and what didn’t so far.

When you create a new project, you get a “src” folder that will contain your Java classes. This is where you’re going to put a class that inherits from “Activity” in order to actually have an application. I’ll get to activities momentarily. There’s also a “gen” folder that contains auto-generated Java files; this is not something you need to worry about. Also not important so far are the “assets” folder and the Android 2.1-update1 folder containing the Android jar. (Clearly the latter is quite important from a logistical perspective, as the Android library is necessary to develop Android apps, but it makes no difference to what you’re actually doing.)

The res folder is where things get a little interesting. This is where all of the view layer stuff goes on. So if you’re a J2EE web developer, this is the equivalent of the folder with your JSPs. If you’re a WPF/Silverlight developer, this is the equivalent of a folder containing your XAML. I haven’t altered the given structure, and I wouldn’t suggest doing so. The layout subfolder is probably the most important, as this is where the actual view files defining UI components in XML go. In other subfolders, you’ll find places where your icon is defined and where there are centralized definitions for all display strings and attributes. (I haven’t figured out why it’s necessary to have some global cache of strings somewhere. Perhaps this is to take advantage of some kind of localization/globalization paradigm in Android, meaning you don’t have to translate yourself for multi-lingual support. Or maybe I’m just naively optimistic.)

The other thing of interest is the AndroidManifest.xml. This contains some application-wide settings that look important, like your application’s name and whatnot. The only thing that I’ve bothered so far to look at in here is the ability to add the attribute android:debuggable="true" to the application element. Apparently, that’s needed to test out your deployable on your device. I haven’t actually verified that by getting rid of the attribute, but I seem to recall reading it on the Android Dev forum.

Those are all of the basic components that you’ll be given. Android development, on the whole, is defined in terms of “Activities.” An activity can loosely be thought of as a “screen.” That is, a very basic application will (probably, unless it’s purely a background service) consist of one activity, but a more complex one is going to consist of several, and perhaps other application components like services or content providers. Each activity that you define in your application will require a class that extends the “Activity” class and overrides, at least, the “onCreate(Bundle)” method. This is what you must supply to have a functioning application–at the very least, you must set your activity’s content.

To summarize, what you’re going to need to look at in order to create a hello world type of app on your phone is the Java file you’re given that inherits from Activity, the main.xml file in layout, and the manifest. This provides everything you need to build and deploy your app. Now, the interesting question becomes “deploy it to where?”

Deployment – Emulator and Phone

I quickly learned that the device emulator is very, very slow. It takes minutes to load, boot, and install your deployable in the virtual environment. Now, don’t get me wrong, the VM is cool, but that’s fairly annoying because we’re not talking about a one-time overhead with quick deployments from there. It’s minutes every time.

Until they optimize that sucker a little, I’d suggest using your phone (or Android tablet, if applicable, but I’m only going to talk about the phone) if it’s even remotely convenient and assuming that you have one. As I discovered, when you run your Eclipse project as an Android app, assuming you’ve set everything up right, the time between clicking “run” and seeing it on your phone is a couple of seconds. This is a huge productivity improvement and I didn’t look back once I started doing this.

Well, let me qualify that slightly. The first time I did it, it was great. The second time I deployed, I got a series of error messages and a pop-up asking me to pick which deployment environment I wanted: the emulator or my phone. I wanted my phone, but it was always shown as “offline.” To counter this problem, I discovered it was necessary to go on the device itself and, under “Settings,” set it never to sleep when connected. Apparently, the phone going to sleep sends the ADB driver into quite a tizzy. If you have hit this, just changing the setting on your phone won’t do the trick. You’ll need to go into the platform-tools directory of wherever you installed the Android SDK and run “adb.exe kill-server” followed by “adb.exe start-server”. For you web devs out there, think of this as clicking the little Tomcat dude with the stop sign and then the little Tomcat dude with the start arrow. 🙂

Now with this set up, you should be able to repeatedly deploy, and it’s really quite impressive how fast this is considering that you’re pushing a package to another device. It’s honestly not noticeably different than building and running a desktop app. The server kill and start trick is useful to remember because there is occasional weirdness with the deployment. I should also mention a couple of other things that didn’t trip me up, but that was because I read about them in advance. To debug on your phone, the phone’s development settings need to be configured for it. In your phone’s settings, under “Applications,” you should check “Allow Installation of Non-Market Applications” and, under “Debugging,” check “USB Debugging”. (On my phone, this is also where you find “Stay Awake,” which caused the problem I mentioned earlier, but YMMV.)

Changing the Icon

One of the first things that you’ll notice is that your “Hello World” or whatever you’re doing deploys as the default little green Android guy. Personally, when I’m getting acquainted with something new, I like to learn about it by changing the most obvious and visible things, so very quickly I decided to see how changing the icon worked. In your “res” folder, there are three folders: “drawable-hdpi”, “drawable-ldpi”, and “drawable-mdpi”. A little googling showed me that these correspond to high, low, and medium resolution phones. Since Android developers, unlike their iOS counterparts, need to worry about multi-device support, they need to have a vehicle for providing different graphics for different phones.

However, at this point, I wouldn’t (and didn’t) worry about this. I took an image that I wanted to try out as my icon and put it into these folders, overwriting the default. Then I built, and I got an error about the image not being a PNG. Apparently, just renaming a JPG “whatever.png” isn’t enough to trick the SDK, so I opened it in MS Paint and did a “Save As,” selecting file type PNG. This did the trick. As best I can tell, your icon will be capped in size, so it’s better to err on the side of making it slightly too big.

Changing the App Name

When I set all this up last winter, I followed a tutorial that had me build an app called “SayHi”. I was trying to prove the concept of taking an Eclipse project and getting it to run something, anything, on my phone. As such, when I picked it back up and started playing with it, the app was still called “SayHi”. However, I don’t want this app to say hi. It’s actually going to be used to turn lights on and off in my house in conjunction with my home automation. So, I’d like to call it something catchy–something imaginative, you know, like “Light Controller.”

This is actually refreshingly easy for someone who has been working with Visual Studio and ClearCase–a tandem that makes renaming anything about as convenient as a trip to the DMV. Under “res->values,” open the “strings.xml” file. You’ll have tabs at the bottom to view this as raw XML or as a “Resources” file. Either way, the effect is the same. You’ll change the “app_name” string to the value that you want, and that’s it. On the next deployment to your phone, you’ll see your app’s new name. Pretty cool, huh? Two easy changes without any code or having an app that actually does anything, and it at least looks like a real app until you open it.

At this point, I should probably mention something that may not be familiar to you if you’re just getting started. In Eclipse and with the Android SDK, you have various options for how you want to view the XML files. The manifest one seems to have a lot of options. The strings one has the XML versus resource choice. From what I recall, this is a feature of Eclipse in general–I believe plugins can supply their own view of various file extensions. If you want to see what all is theoretically available for any file, XML or not, right click on it and expand “Open With.” That’ll show you all the options. It’s important to remember that even though you may get defaulted to some kind of higher level, GUI-driven editor, you always have the raw text at your disposal. Having said that, however, my experience editing layouts taught me that, for beginners, it’s a lot easier to use the SDK’s layout editor. You’ll save yourself some headaches.

This post has gotten pretty long, so I’ll save my adventures with layouts and GUI components until next post.

MVVM and Dialogs

For those familiar with the MVVM (Model-View-ViewModel) pattern in .NET development, one conundrum that you’ve probably pondered, or at least read about, is what to do about showing a dialog. For a bit of background, MVVM is centered around the concept of binding from the View (XAML markup) to the “ViewModel”, which essentially acts as a staging platform for UI binding.

The ViewModel exposes the “Model” in such a way that the XAML can passively bind to it. It does this by exposing bindable properties (using INotifyPropertyChanged) and bindable commands (by using ICommand). Properties represent data, and commands represent actions.
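
If the mechanics of that are unfamiliar, here is a bare-bones sketch of a bindable property (the class and property names here are arbitrary, purely for illustration):

using System.ComponentModel;

// A minimal ViewModel with one bindable property. When the setter raises
// PropertyChanged, any XAML element bound to LastName re-reads the value.
public class CustomerViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _lastName;

    public string LastName
    {
        get { return _lastName; }
        set
        {
            _lastName = value;
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs("LastName"));
            }
        }
    }
}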

The Problem

So, let’s say that you want a clean MVVM implementation which generally aspires to have no code-behind. Some people are more purist about this than others, but the concept has merit. Code-behind represents an active way of binding. That is, you have code that knows about the declarative markup and manipulates it. The problem here is that you have a dependency bugaboo. In a layered application, the layers should know about the one (or ones) underneath them and care nothing about the ones above them. This allows a different presentation layer to be plopped on a service tier or a different view to be skinned on a presentation tier. In the case of code-behind, what you have is a presentation tier that knows about its view and a view that knows about its presentation tier. You cannot simply skin another view on top because the presentation tier (read: code-behind) expects named elements in the declarative markup.

So, in a quest to eliminate all things code behind, you adopt MVVM and do fine when it comes to data binding and basic commands. But inevitably you want to open a window, and the WPF framework is extremely clunky and win-forms-like when it comes to this. Your choices, out of the box, are to have a named element in the XAML and manipulate it to show a dialog or else to have an event handler in the code behind.

What Others Have Suggested

The following are suggestions I’ve seen to address this problem and the reasons that I didn’t particularly care for them, in regards to my own situation. I did a fair amount of research before rolling my own.

  1. Just use the code behind (second response to post (3), though I’ve seen the sentiment elsewhere). I don’t really like this because I think that, when it comes to design guidelines, slippery slopes are a problem. If you’re creating a design where you’d like to be able to arbitrarily swap out groups of XAML files above a presentation layer, making this exception is the gateway to your skinnable plans going out the window. Why make exceptions to your guidelines if it isn’t necessary?
  2. Mediator Pattern. Well, this particular implementation lost me at “singleton,” but I’m not really a fan of this pattern in general for opening windows. The idea behind all of these is to create a situation where the View and ViewModel communicate through a mediator so as to have no direct dependencies. That is, ViewModel doesn’t depend on View–it depends on Mediator, as does the View. Generally speaking, this sort of mediation is effective at allowing tests and promoting some degree of flexibility, but you still have the same dependency in concept, and then you have the mediator code to maintain and manage.
  3. Behaviors. This is a solution I haven’t looked at too much and might come around to liking. However, at first blush, I didn’t like the looks of that extra XAML and the overriding of the Behavior class. I’m generally leery of .NET events and try to avoid them as much as possible. (I may create a post on that in and of itself, but suffice it to say I think the syntax and the forced, weakly typed parameters leave a lot to be desired.)
  4. Some kind of toolkit. Blech. Nothing against the people that make these, and this one looks pretty good and somewhat in line with my eventual situation, but it seems like complete overkill to download, install, and maintain some third-party application just to open a window.
  5. IOC Container. I’ve seen some of these advertised, but the same principle applies here as the last one. It’s overkill for what I want to do.

I’ve seen plenty of other solutions and discussion as well, but none of them really appealed to me.

What I Did

I’ll just put the code and example usage in XAML here and talk about it:

public class OpenWindowCommand<T> : SimpleCommand where T : Window, new()
{
    #region Fields

    /// <summary>Stores the function that evaluates whether or not the command can be executed</summary>
    private readonly Func<bool> _canExecute;

    /// <summary>View model that will serve as data context of the window in question</summary>
    private readonly IViewModel _viewModel;

    /// <summary>Used to verify method preconditions and object invariants</summary>
    private readonly ArgumentValidator _validator = new ArgumentValidator();

    #endregion

    #region Constructor

    /// <summary>Initializes the window open command with the view model that will back the new window</summary>
    /// <param name="viewModel">View model to set as the window's data context</param>
    /// <param name="canExecute">Optional predicate determining whether the command can execute</param>
    public OpenWindowCommand(IViewModel viewModel, Func<bool> canExecute = null) : base(canExecute, null)
    {
        _validator.VerifyNonNull(viewModel);

        _canExecute = canExecute;
        _viewModel = viewModel;
    }

    #endregion

    #region ICommand stuff

    /// <summary>Ignores the command parameter, creates the window, sets its data context, and shows it modally</summary>
    public override void Execute(object parameter)
    {
        var myWindow = new T();
        myWindow.DataContext = _viewModel;
        myWindow.ShowDialog();
    }

    #endregion
}

That’s it. The things that are referenced here that you won’t have are worth mentioning but not vital to the implementation. SimpleCommand, from which OpenWindowCommand inherits, is a class that allows easier command declaration and use. It implements ICommand. It takes a delegate or a boolean for CanExecute() and a delegate for execution (which we override in OpenWindowCommand since we have a concrete implementation). SimpleCommand is not generic–the generic parameter is on OpenWindowCommand to allow strongly typed window opening (the presumption here being that you want to use this for windows that you’ve created and want to show modally).
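
If you don’t have a class like SimpleCommand handy, a minimal sketch along those lines might look like this (illustrative, not my exact implementation):

using System;
using System.Windows.Input;

// Bare-bones ICommand: a predicate for CanExecute and a delegate for Execute.
// Execute is virtual so that inheritors like OpenWindowCommand can override it.
public class SimpleCommand : ICommand
{
    private readonly Func<bool> _canExecute;
    private readonly Action<object> _execute;

    public SimpleCommand(Func<bool> canExecute, Action<object> execute)
    {
        _canExecute = canExecute;
        _execute = execute;
    }

    // Let WPF re-query CanExecute whenever it re-evaluates commands.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }

    public bool CanExecute(object parameter)
    {
        // A null predicate means the command is always executable.
        return _canExecute == null || _canExecute();
    }

    public virtual void Execute(object parameter)
    {
        if (_execute != null)
        {
            _execute(parameter);
        }
    }
}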

The binding in the XAML to commands is to an object that represents a collection of commands. I actually have a CommandCollection object that I’ve created and exposed as a property on the ViewModel for that XAML, but you could use a Dictionary<string, ICommand> to achieve the same thing. Basically, “Commands[]” is just an indexed hash of commands for brevity in the view model. You could instead bind to an OpenWindowCommand property on your ViewModel.

So, basically, when the view model from which you want to open a window is being set up, you create an instance of OpenWindowCommand<YourWindowType>(yourViewModelInstance). When you do this, you passively expose a window open for binding. You’re saying to the view “execute this command to open window of type X with view model Y for backing.” Your view users are then free to bind to this command or not.
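
For example, the wiring might look like this (the “Commands” collection and the window/view model names are placeholders for your own types):

// During ViewModel setup: expose a command that opens a SettingsWindow backed
// by a SettingsViewModel, without this ViewModel referencing the view directly.
Commands["OpenSettings"] = new OpenWindowCommand<SettingsWindow>(new SettingsViewModel());

The XAML side then binds with something like Command="{Binding Commands[OpenSettings]}".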

Why I Like This

First of all, this implementation creates no code-behind. No named windows/dialogs certainly, but also no event handlers. I also like that this doesn’t have the attendant complexity of most of the other solutions that I’ve seen. There’s no IMediator/Mediator, there’s no ServiceLocator, no CommandManager.Instance–none of it. Just one small class that implements one framework interface.

Naturally, I like this because this keeps the ViewModel/presentation layer View agnostic. This isn’t true out of the box here, but it is true in my implementation. I don’t declare commands anywhere in my ViewModels (they’re all wired in configurably by my home-rolled IOC implementation at startup). So the ViewModel layer only knows about the generic Window, not what windows I have in my view.

Room for Improvement

I think it would be better if the presentation tier, in theory, didn’t actually know about Window at all. I’m keeping my eyes peeled for a way to remove the generic parameter from the class and stick it on the Execute() method to be invoked from XAML. XAML appears to be very finicky when it comes to generics, but I have some hope that this may be doable in the latest release. I’ll re-post when I find that, because I’d love to have a situation in which the XAML itself could specify what kind of window to open as a command parameter. (I’m not in love with command parameters, but I’d make an exception for this flexibility.)

I’m also aware that this doesn’t address non-modal windows, and that there is currently no mechanism for obtaining the result from ShowDialog. The former I will address as I grow the application that I’m working on. I already have a solution for the latter in my code, and perhaps I’ll detail that more in a subsequent post.

Testable Code is Better Code

It seems pretty well accepted these days that unit testing is preferable to not unit testing. Logically, this implies that most people believe a tested code base is better than a non-tested code base. Further, by the nature of testing, a tested code base is likely to have fewer bugs than a non-tested code base. But I’d like to go a step further and make the case that, even given the same number of bugs and discounting the judgment as to whether it is better to test or not, unit-tested code is generally better code, in terms of design and maintainability, than non-unit-tested code.

More succinctly, I believe that unit testing one’s code results in not just fewer bugs but in better code. I’ll go through some of the reasons that I believe that, and none of those reasons are “you work out more bugs when you test.”

It Forces Reasoning About Code

Let’s say that I start writing a class and I get as far as the following:

public class Customer
{
    public int Id { get; set; }

    public string LastName { get; set; }

    public string FirstName { get; set; }
}

Pretty simple, but there are a variety of things right off the bat that can be tested. Can you think of any? If you don’t write a lot of tests, maybe not. But what you’ve got here is already a testing gold mine, and you have the opportunity to get off to a good start. What does Id initialize to? What do you want it to initialize to? How about first and last name? Already, you have at least three tests that you can write, and, if you favor TDD and don’t want nulls or zeroes, you can start with failing tests and make them pass.
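
To make that concrete, here is roughly what those first tests might look like in MSTest syntax (NUnit would be nearly identical):

[TestMethod]
public void Id_Initializes_To_Zero()
{
    var myCustomer = new Customer();

    Assert.AreEqual(0, myCustomer.Id);
}

[TestMethod]
public void LastName_Initializes_To_Empty_String()
{
    var myCustomer = new Customer();

    // With the class as written, this fails ("Expected: String.Empty, Was: null"),
    // and making it pass forces a decision about how the property should initialize.
    Assert.AreEqual(string.Empty, myCustomer.LastName);
}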

It Teaches Developers about the Language

A related point is that writing unit tests tends to foster an understanding of how the language, libraries, and frameworks in play work. Consider our previous example. A developer may go through his programming life in C# not knowing what strings initialize to by default. This isn’t particularly far-fetched. Let’s say that he develops for a company with a coding standard of always initializing strings explicitly. Why would he ever know what strings are by default?

If, on the other hand, he’s in the practice of immediately writing unit tests on classes and then getting them to pass, he’ll see and be exposed to the failing condition. The unit test result will say something like “Expected: String.Empty, Was: null”.

And that just covers our trivial example. The unit tests provide a very natural forum for answering idle questions like “I wonder how x works…” or “I wonder what would happen if I did y…” If you’re working on a large application where build time is significant and getting to a point in the application where you can verify an experiment is non-trivial, most likely you leave these questions unanswered. It’s too much of a hassle, and the alternative, creating a dummy solution to test it out, may be no less of a hassle. But sticking an extra assert in an existing unit test is easy and fast.
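
For instance, answering one of those idle questions is a few seconds of work in a throwaway test:

[TestMethod]
public void What_Does_Split_Do_With_An_Empty_String()
{
    // Curiosity satisfied immediately: splitting an empty string yields
    // a one-element array containing the empty string.
    var myResult = string.Empty.Split(',');

    Assert.AreEqual(1, myResult.Length);
    Assert.AreEqual(string.Empty, myResult[0]);
}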

Unit Tests Keep Methods and Classes Succinct and Focused

public void ChangeUiCar(object sender, RoutedEventArgs e)
{
    try
    {
        Mouse.OverrideCursor = Cursors.Wait;
        MenuItem source = e.OriginalSource as MenuItem;

        if (source == null) { return; }

        ListBox ancestor = source.Tag as ListBox;

        if (ancestor == null) return;

        CarType newType = (CarType)Enum.Parse(typeof(CarType), ancestor.Tag.ToString());

        var myOldCar = UIGlobals.Instance.GetCurrentCar();
        var myNewCar = UIGlobals.Instance.GetNewCar(newType);

        if (myNewCar.Manufacturer == "Toyota" || myNewCar.Manufacturer == "Hyundai" || myNewCar.Manufacturer == "Fiat")
        {
            myNewCar.IsForeign = true;
        }
        else if (myNewCar.Manufacturer == "Ford" ||
            (myNewCar.Manufacturer == "Jeep" && myNewCar.WasMadeInAmerica)
            || (myNewCar.Manufacturer == "Chrysler" && myNewCar.IsOld))
        {
            myNewCar.IsForeign = false;
        }

        try
        {
            UpdateUiDisplay(myNewCar, true, false, 12, "dummy text");
        }
        catch
        {
            RevertUiDisplay(myOldCar, true, false, 0, "dummy text");
        }

        if (myNewCar.HasSunRoof || myNewCar.HasMoonRoof || myNewCar.HasLeatherSeating || myNewCar.HasGps ||
            myNewCar.HasCustomRims || myNewCar.HasOnBoardComputer)
        {
            bool isLuxury = CarGlobals.Instance.DetermineLuxury(myNewCar);
            if (isLuxury)
            {
                if (myNewCar.IsForeign && myNewCar.IsManualTransmission)
                {
                    myNewCar.DisplaySportsCarImage = true;
                }
                if (myNewCar.IsForeign)
                {
                    myNewCar.DisplayAmericanFlag = false;
                    if (myNewCar.HasSpecialCharacters)
                    {
                        myNewCar.DisplaySpecialCharacters = UIGlobals.Instance.CanDisplayHandleSpecialCharacters();
                        if (myNewCar.DisplaySpecialCharacters)
                        {
                            UpdateSpecialCharacters(myNewCar);
                        }
                    }
                    else
                    {
                        UIGlobals.Instance.SpecialCharsFlag = "Off";
                    }
                }
            }
        }

    }
    finally
    {
        Mouse.OverrideCursor = null;
    }
}

This is an example of a method you would never see in an actively unit-tested code base. What does this method do, exactly? Who knows… probably not you, and most likely not the person or people that ‘wrote’ (cobbled together over time) it. (Full disclosure–I just made this up to illustrate a point.)

We’ve all seen methods like this. Cyclomatic complexity off the charts, calls to global state sprinkled in, mixed concerns, etc. You can look at it without knowing the most common path through the code, the expected path through the code, or even whether or not all paths are reachable. Unit testing is all about finding paths through a method and seeing what is true after (and sometimes during) execution. Good luck here figuring out what should be true. It all depends on what global state returns, and, even if you somehow mock the global state, you still have to reverse engineer what needs to be true to proceed through the method.

If this method had been unit-tested from its initial conception, I contend that it would never look anything like this. The reasoning is simple. Once a series of tests on the method become part of the test suite, adding conditionals and one-offs will break those tests. Therefore, the path of least resistance for the new requirements becomes creating a new method or class that can, itself, be tested. Without the tests, the path of least resistance is often handling unique cases inline–a shortsighted practice that leads to the kind of code above.
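
As a sketch of where that path leads (the names here mirror my made-up example, not real code), the manufacturer logic might get extracted into something a test can exercise directly, with no UI and no global state:

public static class CarRules
{
    // The "is this car foreign?" rules, pulled out of the monster method so that
    // each manufacturer case can be pinned down by its own small unit test.
    public static bool IsForeign(Car car)
    {
        if (car.Manufacturer == "Toyota" || car.Manufacturer == "Hyundai" || car.Manufacturer == "Fiat")
        {
            return true;
        }

        // The Ford/Jeep/Chrysler rules would be test-driven here, one case at a time.
        return false;
    }
}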

Unit Tests Encourage Inversion of Control

In a previous post, I talked about reasoning about code in two ways: (1) command and control and (2) building and assembling. Most people have an easier time with and will come to prefer command and control, left to their own devices. That is, in my main method, I want to create a couple of objects and I want those objects to create their dependencies and those dependencies to create their dependencies and so on. Like the CEO of the company, I want to give a few orders to a few important people and have all of the hierarchical stuff taken care of to conform to my vision. That leads to code like this:

class Engine
{
    Battery _battery = new Battery();
    Alternator _alternator = new Alternator();
    Transmission _transmission = new Transmission();
}

class Car
{
    Engine _engine = new Engine();
    Cabin _cabin = new Cabin();
    List<Tire> _tires = new List<Tire>() { new Tire(), new Tire(), new Tire(), new Tire() };

    Car()
    {

    }

    void Start()
    {
        _engine.Start();
    }
}

So, in command and control style, I just tell my classes that I want a car, and my wish is their command. I don’t worry about what engine I want or what transmission I want or anything. Those details are taken care of for me. But I also don’t have a choice. I have to take what I’m given.

Since my linked post addresses the disadvantages of this approach, I won’t rehash it here. Let’s assume, for argument’s sake, that dependency inversion is preferable. Unit testing pushes you toward dependency inversion.

The reason for that is well illustrated by thinking about testing Car’s “start” method. How would we test this? Well, we wouldn’t. There’s only one line in the method, and it references something completely hidden from us. But if we changed Car and had it receive an engine through its constructor, we could easily create a friendly/mock engine and then make assertions about it after Car’s start method was called. For example, maybe Engine has an “IsStarted” property. Then, if we inject Engine into Car, we have the following simple test:

var myEngine = new Engine();
var myCar = new Car(myEngine);
myCar.Start();

Assert.IsTrue(myEngine.IsStarted);
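
For completeness, the dependency-inverted Car is a small change (a sketch, assuming Engine exposes Start and IsStarted as described):

class Car
{
    private readonly Engine _engine;

    // The engine is now handed in rather than created here, which is
    // exactly what makes the test above possible.
    public Car(Engine engine)
    {
        _engine = engine;
    }

    public void Start()
    {
        _engine.Start();
    }
}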

After you spend some time unit testing regularly, you’ll find that you come to look at the new keyword with suspicion that you never did before. As I write code, if I find myself typing it, I think “either this is a data transfer object or else there better be a darned good reason for having this in my class.”

Dependency-inverted code is better code. I can’t say it any plainer. When your code is inverted, it becomes easier to maintain and requirements changes can be absorbed. If Car takes an Engine instead of making one, I can later create an inheritor from Engine when my requirements change and just give that to Car. That’s a code change of one modified line and a new class. If Car creates its own Engine, I have to modify Car any time something about Engine needs to change.

Unit Testing Encourages Use of Interfaces

By their nature, interfaces tend to be easier to mock than concrete classes–even ones with virtual methods. While I can’t speak to every mocking framework out there, it does seem to be a rule that the easiest way to mock things is using interfaces. So when you’re testing your code, you’ll tend to favor interfaces when all things are equal, since that will make test writing easier.

I believe that this favoring of interfaces is helpful for the quality of code as well. Interfaces promote looser coupling than any other way of maintaining relationships between objects. Depending on an interface instead of a concrete implementation allows decoupling of the “what” from the “how” question when programming. Going back to the engine/car example, if I have a Car class that depends on an Engine, I am tied to the Engine class. It can be sub-classed, but nevertheless, I’m tied to it. If its start method cannot be overridden and throws exceptions, I have to handle them in my Car’s start method.

On the other hand, depending on an engine interface decouples me from the engine implementation. Instead of saying, “alright, specific engine, start yourself and I’ll handle anything that goes wrong,” I’m saying, “alright, nameless engine, start yourself however it is you do that.” I don’t necessarily need to handle exceptions unless the interface contract allows them. That is, if the interface contract stipulates that IEngine’s start method should not throw exceptions, those exceptions become Engine’s responsibility and not mine.
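
As a sketch of that arrangement (names mine), the contract and a hand-rolled test double are both trivial:

public interface IEngine
{
    bool IsStarted { get; }

    // Contract: implementations start themselves however they do that,
    // and starting does not throw.
    void Start();
}

// In a test, no mocking framework is even necessary.
public class FakeEngine : IEngine
{
    public bool IsStarted { get; private set; }

    public void Start() { IsStarted = true; }
}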

Generally speaking, depending on interfaces is very helpful in that it allows you to make changes to existing code bases more easily. You’ll come to favor addressing requirements changes by creating new interface implementations rather than by going through and modifying existing implementations to handle different cases.

Regularly Unit Testing Makes You Proactive Instead of Reactive About the Unexpected

If you spend a few months unit testing religiously, you’ll find that a curious thing starts to happen. You’ll start to look at code differently. You’ll start to look at x.y() and know that, if there is no null check for x prior to that call, an exception will be thrown. You’ll start to look at if(x < 6) and know that you’re interested in seeing what happens when x is 5 and x is 6. You’ll start to look at a method with parameters and reason about how you would handle a null parameter if it were passed in, based on the situation. These are all examples of what I call “proactive,” for lack of a better term. The reactive programmer wouldn’t consider any of these things until they showed up as the cause of a defect.
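
As a tiny illustration of that last habit (the class and method here are hypothetical), the proactive mindset shows up in tests that pin down both sides of a boundary:

[TestMethod]
public void Discount_Boundary_Is_Exclusive_At_Six()
{
    // For code guarded by if (x < 6), check 5 and 6 up front rather than
    // learning about the edge case from a defect report later.
    Assert.IsTrue(DiscountRules.AppliesTo(5));
    Assert.IsFalse(DiscountRules.AppliesTo(6));
}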

This doesn’t happen magically. The thing about unit tests that is so powerful here is that the mistakes you make while writing the tests often lead you to these corner cases. Perhaps when writing the test, you pass in “null” as a parameter because you haven’t yet figured out what, exactly, you want to pass in. You forget about that test, move on to other things, and then later run all of your tests. When that one fails, you come back to it and realize that when null is passed into your method, you dereference it and generate an unhandled exception.

As this goes on over the course of time, you start to recognize code that looks like it would be fragile in the face of accidental invocations or deliberate unit test exercise. The unit tests become more about documenting your requirements and guarding against regression because you find that you start to be able to tell, by sight, when code is brittle.

This is true of unit tests because the feedback loop is so tight and frequent. If you’re writing some class without unit tests, you may never actually use your own class. You write the class according to what someone writing another class is going to pass you. You both check in your code, never looking at what happens when either of you deviates from the expected communication. Then, three months later, someone comes along and uses your class in another context and delivers his code. Another three months after that, a defect report lands on your plate, you fire up your debugger, and figure out that you’re not handling null.

And, while some learning will occur in this context, it will be muted. You’re six months removed from writing that code. So while you learn in principle that null parameters should be handled, you aren’t getting feedback. It’s essentially the difference between a dieter having someone slap his hand when he reaches for a cookie and having someone weigh him six months later and tell him that he shouldn’t have eaten that cookie. One is likely to change habits while the other is likely to result in a sigh and a “yeah, but what are ya gonna do?”

Conclusion

I can probably think of other examples as well, but this post is already fairly long. I sincerely believe that the simple act of writing tests and getting immediate feedback on one’s code makes a person a better programmer more quickly than skipping the tests does. And, if you have a department where your developers are all writing tests, they’re becoming better designers/programmers and adopting good practices while doing productive work and raising the confidence level in the software that they’re producing.

I really cannot fathom any actual disadvantage to this practice. To me, the “obviously” factor of this is now on par with whether or not wearing a seat belt is a good idea.

A Small, Functional Makefile

I don’t write C++ all that often these days, but I suppose that I spent so many years with it that it never really feels foreign to me when I come back to it. What does oftentimes feel foreign is generating a Makefile when developing in Linux. I’ve made enough of them over the years that I know what I’m doing but not so many that I can go off the cuff after six months or a year break from them.

So I’m sticking a sample Makefile here. This is somewhat for me to refer back to whenever I need to, but I’ll also explain some of the basics. In this example, I’m creating a little C++ application for calculating the odds of poker hands, given what is on the table at the moment. At the time of writing, the example, in its infancy, has only one class: Card. So the files at play here are card.h, card.cpp, and main.cpp. main.cpp references card.cpp, which, in turn, references its class definition header file, card.h.

all: oddscalc

card.o: card.cpp
	g++ -Wall -c -o card.o card.cpp

main.o: main.cpp
	g++ -Wall -c -o main.o main.cpp

oddscalc: card.o main.o
	g++ card.o main.o -o oddscalc

clean: 
	rm -f *.o oddscalc

So there’s the simple Makefile. If you take a look at this, the main purpose of the Makefile is, obviously, to compile the source, but also to automate linking so that, as projects grow, you don’t have increasingly unwieldy g++ command line statements. So we define a few Makefile rules. First, card.o is generated by compiling card.cpp. Second, main.o is generated by compiling main.cpp. The executable is generated by linking the two object files, and “all” builds the executable.

That’s all well and good, but I can eliminate some duplication and make this more configurable. I’ll use Makefile variables so that I don’t have to repeat things like “card,” “oddscalc,” and “g++” everywhere.

In addition, I can see the inevitable redundancy coming from our previous Makefile. As soon as I add hand.cpp/hand.h and deck.cpp/deck.h, I’m going to have to create rules for them as well. I don’t want to do that, so I’m introducing a scheme that, in essence, says, “compile every .cpp file I give you into a .o file and link it into the final executable.” This will be expressed with a “.cpp.o” suffix rule.

#Defines
CC=g++
CFLAGS=-c -Wall
EXE=oddscalc
SOURCES=main.cpp card.cpp
OBJECTS=$(SOURCES:.cpp=.o)

#Build Rules

all: $(SOURCES) $(EXE)

.cpp.o:
	$(CC) $(CFLAGS) $< -o $@

$(EXE): $(OBJECTS)
	$(CC) $(OBJECTS) -o $(EXE)

clean:
	rm -f *.o $(EXE)

With this Makefile, if I want to add a new class, all I need to do is add the class's .cpp file to the "SOURCES" definition line and it will get compiled and linked for the application. (Well, obviously, I need to write the class as well, but we're just talking about the Makefile here.)

So that's it. There are a lot of things you can do with Makefiles. Some people create a variety of build configurations. "make tar" is a popular option as well. But I think that this Makefile is relatively simple and elegant, and it's easy to add to.

DXCore Plugin Part 3

In a previous post, I mentioned my ongoing progress with a DXCore plugin. I’d been using it successfully for some time but had to dust it off again last week. Another convention that I’m required to abide by for consistency’s sake, but don’t much care for, is explicit typing of my variables.

That is, I prefer declarations like:

var myTypeWithLongName = new TypeWithALongName();

rather than:

TypeWithALongName myTypeWithLongName = new TypeWithALongName();

I personally find the second to be needlessly syntactically noisy, and I don’t really see any benefit since the implicit (var) typing preserves strong typing and even causes the resulting type to show up in Intellisense. But, when in Rome…

Compared to my previous setup, this was relatively straightforward. Initially, I thought it would be even simpler than it turned out to be, since CodeRush itself supports the explicit/implicit conversion as an individual refactoring. I figured I’d just have to iterate through the file and call something like “Some.CodeRush.Namespace.MakeExplicit(myVariable)”.

Well, it didn’t turn out to be that easy (at least not that I found), but it did turn out to be relatively simple. Eliding the part about actually finding the variables, I was able to accomplish what I wanted with this method:

/// <summary>Convert the local variable declaration to FSG's standard of explicit typing</summary>
/// <param name="variable">Variable object to convert</param>
/// <returns>A string with the new type declaration</returns>
public string ConvertDeclaration(Variable variable)
{
    var myImplicit = variable as ImplicitVariable;
    string myString = variable.MemberType;
    if (myImplicit != null)
    {
        var myObjectCreation = myImplicit.Expression as ObjectCreationExpression;
        if (myObjectCreation != null && myObjectCreation.ObjectType != null)
        {
            myString = myObjectCreation.ObjectType.ToString();
        }

        var myArrayCreate = myImplicit.Expression as ArrayCreateExpression;
        if (myArrayCreate != null)
        {
            myString = myArrayCreate.BaseType.ToString();
        }
    }
    return myString;
}

This isn’t actually my finished ‘production’ code, but it’s easier to display in this format. I didn’t like the size of that method, so I created an abstract ExplicitExpressionConverter and gave it inheritors ObjectCreationExpressionConverter and ArrayCreateExpressionConverter, using a factory method creation pattern to figure out which to create, based on the variable. But while that abstraction makes the code cleaner, it makes the logic harder to document in a blog post.
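
In rough outline, though, the hierarchy looks like this (a sketch, not the production code; the factory logic shown here is approximate):

// Factory method pattern: pick a converter based on the variable's expression type.
public abstract class ExplicitExpressionConverter
{
    public abstract string ConvertDeclaration(ImplicitVariable variable);

    public static ExplicitExpressionConverter CreateFor(ImplicitVariable variable)
    {
        if (variable.Expression is ObjectCreationExpression)
        {
            return new ObjectCreationExpressionConverter();
        }
        if (variable.Expression is ArrayCreateExpression)
        {
            return new ArrayCreateExpressionConverter();
        }
        return null; // caller falls back to variable.MemberType
    }
}

public class ObjectCreationExpressionConverter : ExplicitExpressionConverter
{
    public override string ConvertDeclaration(ImplicitVariable variable)
    {
        // Standard allocation: the explicit type is the type being constructed.
        return ((ObjectCreationExpression)variable.Expression).ObjectType.ToString();
    }
}

public class ArrayCreateExpressionConverter : ExplicitExpressionConverter
{
    public override string ConvertDeclaration(ImplicitVariable variable)
    {
        // Array creation: the explicit type comes from the array's base type.
        return ((ArrayCreateExpression)variable.Expression).BaseType.ToString();
    }
}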

So, anyway, the idea here is that you need to convert the variable you find into the appropriate type of expression. An object creation expression is standard allocation, and the array create expression is a special case. There are other special cases as well, such as null coalescing (var myFoo = someClass.GetAFoo() ?? new Foo()) and method call expression (var myFoo = someClass.GetAFoo()), but I have not yet figured out how to obtain the return type from those expressions. I’ll probably add a part 4 if/when I do.