DaedTech

Stories about Software

Betrayed by Your Test Runner

The Halcyon Days of Yore

I was writing some Apex code this morning and had a strange sense of deja vu. Without going into painful details, Salesforce is a cloud-based CRM solution, and Apex is its proprietary programming language that developers can use to customize their applications. The language is sort of reminiscent of a stripped-down hybrid of Java and C#. The IDE you use to develop this code is Eclipse, armed with an Apex plugin.

The deja vu that I experienced transported me back to my college days working in a 200-level computer systems course where the projects assigned to us were the kind of deal involving profs/TAs writing 95% of the code while we filled in the other 5%. I am always grateful to my alma mater for this, since concepts like integration and working on large systems are often among the things most lacking in university CS education. In this particular class, I was writing C code in pico and using a makefile to handle compiling and linking the code on a remote server. This generally took a while because of network latency, server load, it being 13 years ago, a lot of files to link, etc. The end result was that I would generally write a lot of code, run make, and then get up and stretch my legs or get a drink or something, returning later to see what had happened.

This is what developing in Apex reminds me of. But there’s an interesting characteristic of Apex, which is that you have to write unit tests, they have to pass, and they have to cover something like 70% of your code before you’re allowed to run it on their hardware in production. How awesome is that? Don’t you sometimes wish C# or Java enforced that on your coworkers who steal in like ninjas and break half the things in your code base with their checkins? I was pumped when I got this assignment and set about doing TDD, which I’ve done the whole time. I don’t actually know what the minimum coverage is because I’ve been at 100% the entire time.

A Mixed Blessing?

One of the first things that I thought while spacing out and waiting for a compile was how much it reminded me of my undergrad days. The second thing I thought of, ruefully, was how much better served I would have been back then to know about unit tests or TDD. I bet that could have saved me some maddening debugging sessions. But then again, would I have been better off doing TDD then? And, more interestingly, am I better off doing it now?

Anyone who follows this blog will probably think I’ve flipped my lid and done a sudden 180 on the subject, but that’s not really the case. Consider the following properties of Apex development:

  1. Sometimes when you save, the IDE hangs because files have to go back to the server.
  2. Depending on server load, compile may take a fraction of a second or up to a minute.
  3. It is possible for the source you’re looking at to get out of sync with the feedback from the compiling/testing.
  4. Tests in a class often take minutes to run.
  5. Your whole test suite often takes many, many minutes to run.
  6. Presumably due to server load balancing, network latency and other such factors, feedback time appears entirely non-deterministic.
  7. It’s normal for me to need to close Eclipse via task manager and try again later.

Effective TDD has a goal of producing clean code that clearly meets requirements at the unit level, but it demands certain things of the developer and the development environment.  It is not effective when the feedback loop is extremely slow (or worse, inaccurate), since TDD, by its nature, requires near-constant execution of unit tests, and those unit tests must be dependable.

Absent that basic requirement, the TDD practitioner is faced with a conundrum.  Do you stick to the practice where you have red (wait 2 minutes), green (what was I doing again, oh yeah, wait 3 minutes), refactor (oops, I was reading reddit and forgot what I was doing)?  Or do you give yourself larger chunks of time without feedback so that you aren’t interrupted and thrown out of the flow as often?

My advice would be to add “none of the above” to the survey and figure out how to make the feedback loop tighter.  Perhaps, in this case, one might investigate a way to compile/test offline, alter the design, or optimize somehow.  Perhaps one might even consider a different technology.  I’d rather switch techs than switch away from TDD, myself.  But in the end, if none of these things proves tenable, you might be stuck taking an approach more like one from 20+ years ago: spend a big chunk of time writing code, run it, write down everything that went wrong, and try again.  I’ll call this RDD (restriction driven development).  I’d say it’s to be avoided at all costs.

I give force.com an A for effort and concept in demanding quality from developers, but I’d definitely have to dock them for the implementation since they create a feedback loop that actively discourages the same.  I’ve got my fingers crossed that as they expand and improve the platform, this will be fixed.

Access To Access Without Access

Sorry about the title, but I just couldn’t resist…

Anyway, this morning I needed to open an MS Access 2010 file (.accdb extension), but I don’t have any version of Access installed or handy, except for Access 2000, which is obviously going to be a little behind the times. Not one to let that stop me from looking at the Access file in question, I consulted Google, hoping for a free and/or open source solution. The internet did not disappoint.

I came across this link: http://www.alexnolan.net/software/mdb_viewer_plus.htm

From this site, you can download MDBPlus.exe, which is a free utility that allows you to view MDB (and, as it turned out, ACCDB) files without having Access installed. You can see screenshots at the link, so I won’t bother to post them here, but it has a nice WPF-feel GUI and is pretty slick in general. Best of all, there’s no MSI or installer; it’s just an executable that you download, and you’re set. Very lightweight. There may be a .NET framework requirement, but if there is, I wasn’t prompted for it; I think I have at least 3.5 installed here. My compliments to Alex Nolan, the author of the utility.

When I first opened the Access DB file in question, I got an error message about needing a provider. This is a relatively recently formatted/clean-installed Windows machine, so I realized that I didn’t have the MS Access database engine installed. A quick trip to Microsoft’s site to grab the Access Database Engine redistributable, which supplies that provider, did the trick.

Those two easy steps, and I had my Access database open and was perusing the tables.

A Small, Functional Makefile

I don’t write C++ all that often these days, but I suppose that I spent so many years with it that it never really feels foreign to me when I come back to it. What does oftentimes feel foreign is generating a Makefile when developing in Linux. I’ve made enough of them over the years that I know what I’m doing but not so many that I can go off the cuff after six months or a year break from them.

So I’m sticking a sample Makefile here. This is somewhat for me to refer back to whenever I need to, but I’ll also explain some of the basics. In this example, I’m creating a little C++ application for calculating the odds of poker hands, given what is on the table at the moment. At the time of writing, the example, in its infancy, has only one class: Card. So the files at play here are card.h, card.cpp, and main.cpp. main.cpp references the Card class in card.cpp, which, in turn, includes its class definition header file, card.h.

all: oddscalc

card.o: card.cpp
	g++ -Wall -c -o card.o card.cpp

main.o: main.cpp
	g++ -Wall -c -o main.o main.cpp

oddscalc: card.o main.o
	g++ card.o main.o -o oddscalc

clean:
	rm -f *.o oddscalc

So there’s the simple Makefile. If you take a look at this, the main purpose of the Makefile is, obviously, to compile the source, but also to automate linking so that, as projects grow, you don’t have increasingly unwieldy g++ command line statements. So we define a few Makefile rules. First, card.o is generated by compiling card.cpp. Second, main.o is generated by compiling main.cpp. The executable is generated by linking the two object files, and “all” simply builds that executable.

That’s all well and good, but I can eliminate some duplication and make this more configurable. I’ll use Makefile variables so that I don’t have to repeat things like “card,” “oddscalc,” and “g++” everywhere.

In addition, I can see the inevitable redundancy coming from our previous Makefile. As soon as I add hand.cpp/hand.h and deck.cpp/deck.h, I’m going to have to create rules for them as well. Well, I don’t want to do that, so I’m introducing a scheme that, in essence, says, “compile every .cpp file I give you into a .o file and link it into the final executable.” This will be expressed with a “.cpp.o” suffix rule.

#Defines
CC=g++
CFLAGS=-c -Wall
EXE=oddscalc
SOURCES=main.cpp card.cpp
OBJECTS=$(SOURCES:.cpp=.o)

#Build Rules

all: $(SOURCES) $(EXE)

#Suffix rule: compile each .cpp file ($<) into a .o file of the same name ($@)
.cpp.o:
	$(CC) $(CFLAGS) $< -o $@

$(EXE): $(OBJECTS)
	$(CC) $(OBJECTS) -o $(EXE)

clean:
	rm -f *.o $(EXE)

With this Makefile, if I want to add a new class, all I need to do is add the class's .cpp file to the "SOURCES" definition line and it will get compiled and linked for the application. (Well, obviously, I need to write the class as well, but we're just talking about the Makefile here.)
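
To illustrate, when the hand and deck classes mentioned earlier come into existence, the only edit the Makefile needs is to that one line:

SOURCES=main.cpp card.cpp hand.cpp deck.cpp

The OBJECTS substitution and the ".cpp.o" rule take care of compiling and linking the newcomers automatically.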

So that's it. There are a lot of things you can do with Makefiles. Some people create a variety of build configurations. "make tar" is a popular option as well. But I think that this Makefile is relatively simple and elegant, and it's easy to add to.
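
For what it's worth, a "tar" target is just another rule with a recipe. As a rough sketch (the archive name here is an arbitrary choice on my part), something like this could be appended to the Makefile above to bundle up the sources, headers, and the Makefile itself:

tar:
	tar -czf $(EXE).tar.gz *.cpp *.h Makefile

Since "tar," like "all" and "clean," doesn't name a file that the rule actually produces, purists would also list these targets in a .PHONY declaration, but for a Makefile this size, I wouldn't lose sleep over it.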