DaedTech

Stories about Software


SVN Global Ignore

This morning, I was adding to Subversion a playpen I’d created from an enormous project. In my playpen, I was only editing a handful of .cs files, but I wanted to add all .cs files to my little repository. What I didn’t want to add were, obviously, outputs of the project, but also other stuff that is source controlled in that project, like various media files and other space-suckers.

My first thought was to impose ignores on the individual directories, manually adding things to the ignore list, but that’s incredibly cumbersome in a project of this size. At this point, I remembered the SVN global ignore settings where I can have the shell simply omit stuff in my add window and show it as ignored in explorer windows (MS Windows, here, obviously).

To access it, right-click anywhere and go to TortoiseSVN->Settings. In the settings dialog that appears, the “Global ignore pattern” box is what you want to edit.

You can put extensions here with wildcards and you can, as I did, put the names of directories in here (such as bin, debug, release, etc.). Once you put these in, they will be ignored by default, and they will not appear in the “add” window of Tortoise. You can still add them — this does not supersede actual source control. You just have to do it manually.
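For what it’s worth, the pattern I ended up with looked something like the following: a single, space-separated list of patterns (treat this as a starting point rather than gospel; the *.suo and *.user entries are Visual Studio cruft, and your list will vary with your toolchain):

```
bin obj debug release *.suo *.user
```

Anything matching these patterns shows up as ignored in explorer windows instead of unversioned.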


Forget Design Documents

Waterfalls Take Time

I sat in on a meeting the other day, and heard some discussion about late breaking requirements and whether or not these requirements should be addressed in a particular release of software. One of the discussion participants voiced an opinion in the negative, citing as rationale that there was not sufficient time to generate a document with all of these requirements, generate a design document to discuss how these would be implemented, and then write the code according to this document.

This filled me with a strange wistfulness. I’m actually not kidding – I felt sad in a detached way that I can’t precisely explain.

The closest that I can come is to translate this normal-seeming statement of process into real work terms. The reason a change couldn’t be absorbed was that there was no time to take the requirements, transpose them into a new document, write another document detailing exactly what to do, execute the instructions in the second document, and continually update the second document when it turned out not to be prescient.

Or, more concisely, a lot of time was required to restate the problem, take a guess at how one would solve it, and then continually and exhaustively refine that guess.

I think the source of my mild melancholy was the sense that, not only was this wasteful, but it also kind of sucks the marrow out of life (and not in the good, poetic way). If we applied this approach to cooking breakfast, it would be like taking a bunch of fresh ingredients out of the fridge, and then getting in the car with them and going to the grocery store to use them as a shopping list for buying duplicate ingredients. Then, with all the shopping prepared, it’s time to go home, get out the pots and pans, roll up your sleeves and… leave the kitchen to sit at your desk for an hour or two slaving over a written recipe.

Once the recipe is deemed satisfactory in principle by your house guests, cooking would commence. This would involve flour, herbs, spices, eggs, and of course — lots more writing. If you accidentally used a pinch instead of a dash, it’d be time to update the recipe as you cooked.

Kinda sucks the life right out of cooking, and it’s hard to convince yourself that things wouldn’t have gone better had you just cooked breakfast and jotted down the recipe if you liked it.

Why Do We Do This?

If you’ve never read Samuel Taylor Coleridge’s “The Rime of the Ancient Mariner”, you’ve missed out on a rather depressing story and the origin of the idea of an “albatross around your neck” as something you get saddled with that bogs you down.

Long story short, during the course of this poem, the main character kills an albatross that he superstitiously believes to be holding him back, only to have the thing forced around his neck as both a branding and a punishment for angering spiritual entities with some clout. During the course of time wearing the albatross, all manner of misfortune befalls the mariner.

In the world of software development, (and ignoring the portion of the story where the mariner shoots the albatross), some of our documentation seems to become our albatross. These documents that once seemed our friends are very much our curse — particularly the design document.

No longer do they help us clarify our vision of the software as it is supposed to look, but rather they serve as obligatory bottlenecks, causing us frequent rework and cruelly distracting us by whispering in our ears echoing chants of “CMMI” or “ISO”. Getting rid of them by simply ceasing to do them seems simple to the outsider, but is thoroughly impossible for us software mariners. We soldier on in our doomed existence, listlessly writing and doing our best to keep these documents up to date.

Let’s Just Not Do It

Things need not be this bleak. Here’s a radical idea regarding the design document — just don’t make one. Now, I know what you’re thinking — how are you going to have any idea what to do if you don’t lay it all out in a blueprint? Well, simple. You’ll use a combination of up-front whiteboard sketching and test driven development. These two items will result in both better documentation, and far less wasted time.

President Eisenhower once said, “Plans are nothing, planning is everything.” This philosophy applies incredibly well to software design.

Capturing your design intentions at every step of the way is truly meaningless. Source control captures the actual state of the code, so who cares what the intentions were at a given moment? These ephemeral notions of intent are best sketched on whiteboards in developers’ offices, where they help people collaborate and understand the system.

Does the end-user care about this in the slightest? Of course not! Does anyone at all care about these once they become outdated? Nope. So, why capture and update them as you go?

Capture reality when you’re done, and only as much as is required by a given stakeholder. The more you document superfluously, the more documents you have that can get out of date. And, when your docs get out of date, you have a situation worse than no documentation — lying documentation.

The other main component of this approach is test driven development. Test driven development forces the creation and continuous updating of arguably the best form of documentation there is for fellow developers: examples. As Uncle Bob points out in the three rules of TDD, the target audience is developers, and given the choice between paragraphs/diagrams and example code, developers will almost without exception choose example code. And, example code is exactly what the by-product of TDD is: a set of examples guaranteed to be accurate and current.

And, what’s more, TDD doesn’t just provide unit tests and an accurate sample of how to use the software — it drives good design. A TDD code base tends to be modular, decoupled and flexible simply by virtue of the design that emerges through the process. This means that TDD as you go is likely to provide you with every bit as good a design as you could hope to have by rubbing your crystal ball and pretending to know exactly what 200 classes are going to be necessary up front.
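As a tiny, hypothetical illustration of tests-as-documentation (invented names; plain Java, with a hand-rolled check standing in for a real JUnit assertion):

```java
// Hypothetical class under test -- the sort of thing TDD grows one test at a time.
class PriceCalculator {
    private double total;

    public void addItem(double price) {
        total += price;
    }

    public double totalWithTax(double taxRate) {
        return total * (1 + taxRate);
    }
}

public class PriceCalculatorTest {
    // In a real project this would be a JUnit @Test method; a plain main keeps the sketch self-contained.
    public static void main(String[] args) {
        PriceCalculator calculator = new PriceCalculator();
        calculator.addItem(10.0);
        calculator.addItem(5.0);

        // This reads like documentation: a future maintainer sees exactly how the API is used.
        double total = calculator.totalWithTax(0.10);
        if (Math.abs(total - 16.5) > 0.0001) {
            throw new AssertionError("Expected 16.5 but got " + total);
        }
        System.out.println("Total with tax: " + total);
    }
}
```

Nobody has to wonder whether this “documentation” is current — if it goes stale, the build breaks.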

Is it Really that Simple?

Well, maybe, maybe not. The solution can certainly be just that simple – ceasing to do something you’re in the habit of doing is excellent at forcing change. However, the whiteboard culture and the art of TDD certainly require some conscious practice.

So, you may want to take both for a test drive before simply scrapping your diligent design documentation. But, personally, I’d say “jump in – the water’s fine!” I don’t think I’ve ever regretted not bothering with an ‘official’ design document. I’m happy to draw up documents to communicate the function of a system at a given moment (instruction manuals, package diagrams, quick overviews, whatever), but creating them to be predictive of functionality and updating them from inception to the end of time… no thank you.

The documents that we create are designed to promote understanding and help communication — not to serve as time-sucking millstones around our necks that we cite as reasons for not providing functionality to users.



10 Ways to Improve Your Code Reviews

For a change of pace, I thought I’d channel a bit of Cosmo and offer a numbered article today. I’ve been asked by others to do a lot of code reviews lately, and while doing this, I’ve made some notes as to what works, what doesn’t, and how I can improve. Here are those notes, enumerated and distilled into a blog post.

  1. Divide and distribute. Have one person look for duplicate code chunks, one look for anti-patterns, one check for presence of best practices, etc. It is far easier to look through code for one specific consideration than it is to read through each method or line of code, looking for anything that could conceivably be wrong. This also allows some people to focus on coarser-granularity concerns (modules and libraries) with others focused on finer ones (classes and methods). Reading method by method sometimes obscures the bigger picture, and concentrating on the bigger picture glosses over details.
  2. Don’t check for capitalization, naming conventions and other minutiae. I say this not because I don’t care about coding conventions (which I kind of don’t), but because this is a waste of time. Static analysis tools can do this. Your build can be configured not to accept checkins/deliveries that violate the rules. This is a perfect candidate for automation, so automate it. You wouldn’t waste time combing a document for spelling mistakes when you could turn on spell-check, and the same principle applies here.
  3. Offer positive feedback. If the code review process is one where a developer submits code and then defends it while others try to rip it to pieces, things become decidedly adversarial, potentially causing resentment. But, even more insidiously, unmitigated negativity will tend to foster learned helplessness and/or get people to avoid code reviews as much as possible.
  4. Pair. If you don’t do it, start doing it from time to time. If you do it, do it more. Pairing is the ultimate in code review. If developers spend more time pairing and catching each other’s mistakes early, code reviews after the fact become quicker and less involved.
  5. Ask why, don’t tell what. Let’s say that someone gets a reference parameter in a method and doesn’t check it for null before dereferencing it. Clearly, this is bad practice. But, instead of saying “put a null check there”, ask, “how come you decided not to check for null — is it safe to assume that your callers never pass you null?” Obviously, the answer to that is no. And, the thing is, almost any programmer will realize this at that point and probably say “oh, no, that was a mistake.” The key difference here is that the reviewee is figuring things out on his or her own, which is clearly preferable to being given rote instruction.
  6. Limit the time spent in a single code review. Given that this activity requires collaboration and sometimes passive attention, attention spans will tend to wane. This, in turn, produces diminishing marginal returns in terms of effectiveness of the review. This isn’t rocket science, but it’s important to keep in mind. Short, focused code reviews will produce effective results. Long code reviews will tend to result in glossing over material and spacing out, at which point you might as well adjourn and do something actually worthwhile.
  7. Have someone review the code simply for first-glance readability/understanding. There is valuable information that can be mined from the reaction of an experienced programmer to new code, sight unseen. Specifically, if the reaction to some piece of the code is “what the…”, that’s a good sign that there are readability/clarity issues. The “initial impression” litmus test is lost once everyone has studied the code, so having someone capture that at some point is helpful.
  8. Don’t guess and don’t assume — instead, prove. Rather than saying things like “I think this could be null here” or “This seems like a bad idea”, prove those things. Unit tests are great. If you see a flaw in someone’s code, expose it with a failing unit test and task them with making it pass. If you think there’s a performance problem or design issue, support your contention with a sample program, blog post, or whitepaper. Opinions are cheap, but support is priceless. This also has the added benefit of removing any feelings of being subject to someone else’s whims or misconceptions.
  9. Be prepared. If this is a larger, meeting-oriented code review, the people conducting the review should have read at least some of the code beforehand, and should be familiar with the general design (assuming that someone has already performed and recorded the results from suggestion 7). This allows the meeting to run more efficiently and the feedback to be more meaningful than a situation where everyone is reading code for the first time. When this happens, things will get missed since people start to feel uncomfortable as everyone waits for them to understand.
  10. Be polite and respectful. You would think that this goes without saying, but sadly, that seems not to be the case. In my career, I have encountered many upbeat, intelligent and helpful people, but I’ve also encountered plenty of people who seem to react to everything with scorn, derision, or anger. If you know more than other people, help them. If they’re making basic mistakes, help them understand. If they’re making the same mistakes multiple times, help them find a way to remember. Sighing audibly, rolling your eyes, belittling people, etc., are not helpful. It’s bad for them, and it’s bad for you. So please, don’t do that.
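To put number 8 in concrete terms, here’s a hypothetical sketch in Java (invented names throughout): instead of claiming the code can blow up on null, the reviewer demonstrates it.

```java
// Hypothetical code under review: dereferences its parameter without a null check.
class NameFormatter {
    public String initials(String fullName) {
        // The flaw a reviewer might spot: fullName.split() throws NullPointerException on null input.
        StringBuilder result = new StringBuilder();
        for (String part : fullName.split(" ")) {
            result.append(part.charAt(0));
        }
        return result.toString();
    }
}

public class NameFormatterReviewTest {
    public static void main(String[] args) {
        NameFormatter formatter = new NameFormatter();

        // The happy path works fine...
        System.out.println(formatter.initials("Ada Lovelace")); // prints AL

        // ...but this exercise proves the reviewer's claim instead of just asserting it.
        try {
            formatter.initials(null);
            System.out.println("No exception -- the concern was unfounded");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException -- claim proven; now make this pass");
        }
    }
}
```

In a real review, that try/catch would be a failing unit test handed to the author, which removes any sense of being subject to someone else’s opinion.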

Feel free to chime in with additional tips, agreements, disagreements, etc.


Setting Up AJAX and JQuery

So, in response to feedback from my previous post about my home automation server site, I’ve decided to incorporate AJAX and JQuery into my solution. This is partially because it’s The Right Thing ™ and partially because I’m a sucker for the proposition of learning a new technology. So, here are the steps I took toward going from my previous solution to a working one using AJAX, including downloads and other meta-tasks.

The first thing that I did was poke around until I found this tutorial, which is close enough to what I wanted to do to serve as a blueprint. I found it very clear and helpful, but I realized that I had some legwork to do. I set up my Java code as per the tutorial, but on the client side, in the JSP, I realized things wouldn’t work, since I couldn’t very well source a jquery library that didn’t exist in the workspace. I poked around on my computer a bit and saw that I did have various jquery.js libraries, but they were part of Visual Studio, Android, and other concerns, so I figured I’d download one specifically for this task rather than try to reappropriate them.

So, I went to jquery.com. I poked around a bit until I found the most recent “minified” version, since that’s what the example was using, and discovered it here. I was a little surprised to find that the ‘download’ would consist of copying it and pasting it into a local text file that I named myself, but I guess, what did I expect – this is a scripted language executed by browsers, not a C++ compiler or the .NET framework or something.

In Eclipse, I made a directory under my WebContent folder called “js”, and I put my new jquery-1.7.1.min.js file in it. Now, I had something to link to in my JSP page. Here is the actual link that I put in:

<script type="text/javascript" src="js/jquery-1.7.1.min.js"></script>
Just to make sure my incremental progress was good, I built and ran on localhost, and… my project promptly errored on build and at runtime. For some reason, Eclipse seems not to like the minified version, so I switched to using the non-minified one. I still got a runtime error in the Eclipse browser (though not in Chrome), and the javascript file had warnings in it instead of errors. This was rectified by following the high-scoring (though strangely not accepted) answer on this Stack Overflow post.

However, it was at this point that I started to question how much of this I actually needed. I don’t particularly understand AJAX and JQuery, but I’m under the impression that JQuery is essentially a library that simplifies AJAX and perhaps some other things. The tutorial that I was looking at was describing how to send POST variables and get a response, and how this was easier with JQuery. But I actually don’t need the variables, nor do I need a response at this time. So, given the JQuery runtime errors that were continuing to annoy me, I deleted JQuery from the project and resolved to work this out at a later date. From here, after a bit of poking around, I realized that using AJAX from within Javascript was, evidently, pretty simple. I just needed to instantiate an XMLHttpRequest object and call some methods on it. I changed my kitchen.jsp page to boil down to a small script that instantiates an XMLHttpRequest, plus two links:



Overhead On / Overhead Off

Pretty simple, though when you have no idea what you’re doing, it takes a while to figure out. 🙂

I instantiate a request, populate it and send it. Given my RESTful scheme, all the info the server needs is contained in the path of the request itself, so it isn’t necessary for me to parse the page or ask for a response. I added the javascript:void(0) calls so that the buttons would still behave like actual, live links. I think that later, when a response actually is necessary, I’ll probably bring JQuery back and revisit the tutorial that I found. Here is my updated controller class:

import java.io.IOException;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.servlet.ModelAndView;

@Controller
@RequestMapping("/kitchen")
public class KitchenController {

    // Serves the kitchen page. (Leaving the value empty keeps this at /kitchen;
    // repeating "/kitchen" at the method level would map it to /kitchen/kitchen.)
    @RequestMapping(method = RequestMethod.GET)
    public ModelAndView kitchen() {
        String message = "Welcome to Daedalus";
        return new ModelAndView("kitchen", "message", message);
    }

    // Handles POSTs like /kitchen/on or /kitchen/off by shelling out to heyu.
    @RequestMapping(value = "/{command}", method = RequestMethod.POST)
    public @ResponseBody String kitchen(@PathVariable String command) {
        try {
            Runtime.getRuntime().exec("/usr/local/heyu-2.6.0/heyu " + command + " A2");
        } catch (IOException e) {
            e.printStackTrace();
        }
        return "";
    }
}

I’m fairly pleased with the asynchronous setup and the fact that I’m not playing weird games with redirects anymore. I have a unit test project set up, so I think I’m now going to get back to my preference for TDD, and refactor the controller to handle more arbitrary requests, making use of an interfaced service for the lights and a data-driven model for the different rooms and appliances. I’ve got my eye on MongoDB as a candidate for data storage (I’m not really doing anything relational), but if anyone has better ideas, please let me know.


The Outhouse Anti-Pattern

Source Control Smells?

I was reading through some code the other day, and I came across a class that made me shudder a little. Interestingly, it wasn’t a code smell or a name smell. I wasn’t looking at the class itself, and it had an innocent enough name. It was a source control smell.

I was looking through source control for something unrelated, and for some reason this class caught my eye. I took a look at its version history and saw that it had been introduced and quickly changed dozens of times by several people. Even assuming that you like to check code in frequently, it’ll take you what, one or two checkins to write a class, and then maybe a few more to tweak its interface or optimize its internals, if need be? Why were there all these cooks in this kitchen, and what were they doing in there? The changes to the class continued like that over the course of time, with periods of few changes, and then bouts of frenzied, distributed activity. How strange.

I opened the class itself to take a look, and the source control checkin pattern immediately made sense to me. The class was a giant static class that did nothing but instantiate ICommand implementations and expose them as public fields. I haven’t tagged this class with C# for a reason — this anti-pattern transcends languages — so, I’ll briefly explain for non C# savvy readers. ICommand is an interface that defines Execute and CanExecute methods – sort of like a command pattern command, but without forcing implementation of UnExecute() or some such thing.

So, what was going on here was that this class was an already-large and robustly growing dumping ground for user actions in the application. Each field in the class was an instance of an ICommand implementation that allowed specifying delegates for the Execute and CanExecute methods, so they looked something like this:


public class ApplicationCommands
{
  public static readonly ICommand FooCommand = new Command(Execute, CanExecute); // Command accepts delegate ctor params

  public static void Execute()
  {
    // Do some foo thing, perhaps with shared state (read: global variables) in the class
  }

  public static bool CanExecute()
  {
    // Return a boolean based on some shared state (read: global variables) in the class
  }

  // Rinse, repeat -- dozens and dozens, and probably eventually hundreds and thousands of times.
}

So, the ‘pattern’ here is that if you want to define some new user action, you open this class and find whatever shared global state will inform your command, and then add a new command instance field here.

A Suitable Metaphor

I try to stay away from the vulgar in my posting, so I beg forgiveness in advance if I offend, but this reminds me most of an outhouse at a campground. It’s a very public place that a lot of people are forced to stop by and… do their business. The only real reason to use it is that some people would probably be upset and offended if they caught you not using it (e.g. whoever wrote it), as there is no immediately obvious benefit provided by the outhouse beyond generally approved decorum. Like the outhouse, over time it tends to smell worse and worse and degenerate noticeably. If you’re at all sensitive to smells, you probably just go in, hold your nose, and get out as quickly as possible. And, like the high-demand outhouse, there are definitely “merge conflicts” as people are not only forced to hold their noses, but to wait in line for the ‘opportunity’ to do so.

There are some periodic attempts to keep the outhouse clean, but this is very unpleasant for whoever is tasked with doing so. And, in spite of the good intentions of this person, the smell doesn’t exactly go away. Basic etiquette emerges in the context of futile attempts to mitigate the unpleasantness, such as signs encouraging people to close the lid or comments in the code asking people to alphabetize, but people are generally in such a hurry to escape that these go largely unheeded.

At the end of the day, the only thing that’s going to stop the area from being a degenerative cesspool is the removal of the outhouse altogether.

Jeez, What’s So Bad About This?

The metaphor exaggerates my actual tone a bit, but I certainly think of this as an anti-pattern. Here’s why.

  1. What is this class’s single responsibility? The Single Responsibility Principle (SRP) wisely advises that a class should have exactly one reason to change. This class has as many reasons as it has commands. And, you don’t get to cop out by claiming that its single responsibility is to define and store information about everything a user might ever want to do (see God Class).
  2. This class is designed to be modified each and every time the GUI changes. That is, this thing will certainly be touched with every release, and it’ll probably be touched with every patch. The pattern here is “new functionality — remember to modify ApplicationCommands.cs”. Not only is that a gratuitous violation of (even complete reversal of) the Open/Closed Principle, but if you add a few more outhouses to the code base, people won’t even be able to keep track of all the changes that are needed to alter some small functionality of the software.
  3. Dependency inversion is clearly discouraged. While someone could always access the static property and inject it into another class, the entire purpose of making them public fields in a static class is to encourage inline use of them as global ‘constants’, from anywhere. Client code is not marshaling all of these at startup and passing them around, but rather simply using them in ad-hoc fashion wherever they are needed.
  4. Besides violating S, O, and D of SOLID, what if I want a different command depending on who is logged in? For example, let’s say that there is some Exit command that exits the application. Perhaps I want the Exit command to exit immediately if a read-only user is logged in, but prompt with “are you sure” for a read-write user. Because of the static nature of the fields here, I have to define two separate commands and switch over them depending on which user is logged in. Imagine how nasty this gets if there are many kinds of users with the need for many different commands. The rate of bloat of this class just skyrocketed from linear in functionality to proportional to (F*R*N), where F is the number of functionality items, R is the number of roles, and N is the nuance of behavior across roles.
  5. And, the rate of bloat of the client classes just went up a lot too, as they’ll need to figure out who is logged in and thus which of the commands to request.
  6. And, of course, there is the logistical problem that I noticed in the first place. This class is designed to force multiple developers to change it, often simultaneously. That’s a lot like designing a traffic light that’s sometimes green for everyone.

Alternative Design

So, how to work toward a more SOLID, flexible design? For a first step, I would make this an instance class and hide the delegate methods. This would force clients to be decoupled from the definition, except in one main place.

The second step would be to factor the commands with the more complex delegates into their own implementations of ICommand, and thus their own classes. Simpler commands could be instantiated inline, with the delegate methods collapsed to lambda expressions. Once everything had been factored into a dedicated class or flagged as declarable inline, the logic for generating these things could be moved into some sort of instance factory class that would create the various kinds of command.

Now, we’re getting somewhere. The command classes have one reason to change (the command changes) and the factory has one reason to change (the logic for mapping requests to command instances changes), so we’re looking better on SRP. The Open/Closed Principle is looking more attainable too, since the only class that won’t be closed for modification is the factory class itself as new functionality is added. Dependency inversion is looking up as well, since we’re now invoking a factory that provides ICommand instances, so clients are not going out and searching for specific concrete instances on their own.
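To make the factory direction concrete, here’s a rough sketch in Java rather than C# (hypothetical names throughout; the shape translates directly to ICommand):

```java
// Stand-in for C#'s ICommand: each command class has one reason to change -- the command itself.
interface Command {
    boolean canExecute();
    void execute();
}

// Hypothetical shared dependency, passed in rather than reached for as global state.
class Document {
    private boolean dirty = true;
    boolean isDirty() { return dirty; }
    void save() { dirty = false; }
}

// A command complex enough to deserve its own class.
class SaveCommand implements Command {
    private final Document document;  // injected, not global

    SaveCommand(Document document) { this.document = document; }

    public boolean canExecute() { return document.isDirty(); }
    public void execute() { document.save(); }
}

// The factory is the only class that changes as commands are added.
class CommandFactory {
    private final Document document;

    CommandFactory(Document document) { this.document = document; }

    Command create(String name) {
        switch (name) {
            case "save":
                return new SaveCommand(document);
            case "noop":
                // Simple commands can stay inline as anonymous implementations.
                return new Command() {
                    public boolean canExecute() { return true; }
                    public void execute() { /* deliberately nothing */ }
                };
            default:
                throw new IllegalArgumentException("Unknown command: " + name);
        }
    }
}

public class CommandFactoryDemo {
    public static void main(String[] args) {
        Document document = new Document();
        CommandFactory factory = new CommandFactory(document);

        Command save = factory.create("save");
        if (save.canExecute()) {
            save.execute();
        }
        System.out.println("Dirty after save: " + document.isDirty());
    }
}
```

Note that the Document dependency is handed to the factory once and injected into commands from there, instead of each command rummaging through shared static state.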

The one wrinkle is the shared global state from the previous commands static class. This would have to be passed into the factory and injected into the commands, as needed. If that starts to look daunting, don’t be discouraged. Sure, passing 20 or 30 classes into your factory looks really ugly, but the ugliness was there all along — you were just hiding it in the outhouse. Now that the outhouse is gone, you’ve got a bit of work to do shoveling the… you get the idea. Point is, at least you’re aware of these dependencies now and can create a strategy for managing them and getting them into your commands.

If necessary, this can be taken even further. To satisfy complaint (4), if that is really an issue, an abstract factory can be used. Alternatively, one could use reflection to create a store of all instances of ICommand in the application, keyed by their names. This is a convention-over-configuration solution that would allow clients to access commands by knowing only their names at runtime, and nothing about them beyond their interface definition. In this scheme, Open/Closed is followed religiously, as no factory exists to modify with each new command. One can add functionality by changing a line of markup in the GUI and adding a new class. (I would advise against this strategy for the role/nuance-based command scheme.) Another alternative would be to read the commands in from metadata, depending on the simplicity or complexity of the command semantics and how easily they could be stored as data. This is, perhaps, the most flexible solution of all, though it may be too flexible and prove problematic if the commands are somewhat involved.
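A stripped-down sketch of that name-keyed registry idea, again in Java with hypothetical names (real convention-over-configuration would scan the classpath via reflection or a scanning library; manual registration keeps the sketch self-contained):

```java
import java.util.HashMap;
import java.util.Map;

interface Command {
    void execute();
}

class OpenCommand implements Command {
    public void execute() { System.out.println("opening"); }
}

class CloseCommand implements Command {
    public void execute() { System.out.println("closing"); }
}

// Clients look commands up by name alone -- they know the name, not the class.
class CommandRegistry {
    private final Map<String, Command> commands = new HashMap<>();

    // In the reflection-based version, this registration step is replaced by
    // scanning for Command implementors, so nothing changes when one is added.
    void register(Command command) {
        String key = command.getClass().getSimpleName()
                            .replace("Command", "")
                            .toLowerCase();
        commands.put(key, command);
    }

    Command byName(String name) {
        return commands.get(name);
    }
}

public class RegistryDemo {
    public static void main(String[] args) {
        CommandRegistry registry = new CommandRegistry();
        registry.register(new OpenCommand());
        registry.register(new CloseCommand());

        registry.byName("open").execute();   // prints "opening"
    }
}
```

The convention here (class name minus the “Command” suffix, lowercased) is exactly the sort of thing that lets a line of GUI markup plus one new class add functionality.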

These are just ideas for directions a solution might take. There is no magic bullet. I’d say the important thing, in general, is to recognize when you find yourself building an outhouse, and to be aware of the problems it’s going to cause you. From there, my suggestions are all well and good in the abstract, but it will be up to you to determine how to proceed in transitioning to a better design.