High Stakes Programming by Coincidence
Editorial note: I originally wrote this post for the Infragistics blog, where you can check out the original at their site. A number of authors worth reading write for them.
Have you ever found yourself running your code to test out some behavior when you noticed something unrelated and thought, “that’s odd”? Maybe you wanted to verify that clicking “run” kicked off the process it was supposed to, but you noticed that the “cancel” button was randomly green for a second when the window opened. “That’s odd,” you thought.
After verifying that the process was kicked off properly, maybe you re-launch the application to see about that odd, green cancel button. But when you open the window this time, nothing. It’s the normal color, with no sign of the green you noticed before. Again, you think, “that’s odd.” And maybe, at that point, you shrug, chalk it up to some weird OS rendering burp, and move on.
You never learn why the button went green, and you never learn why you couldn’t reproduce it. This is a relatively benign version of the phenomenon known as “programming by coincidence.”
The term “programming by coincidence” was coined in the book “The Pragmatic Programmer,” whose authors define it as “[relying] on luck and accidental successes” rather than programming deliberately. In the case of the mysteriously green button, the accidental success is the fact that the problem just kind of vanished on its own.
It’s probably no great surprise that the Pragmatic Programmer’s stance on programming by coincidence is “don’t do it.” It’s my stance as well, and I imagine a lot of you reading agree. And yet, it’s something we’ve probably all been guilty of at one time or another.
Programming By Coincidence FTW?
After all, “don’t program by coincidence” is a great operating principle, but we’ve all been at the office late and in “beggars can’t be choosers” mode. Everyone else has long since gone home, and you’re impossibly frustrated by a persistent bug with an expected fix timeline of “yesterday.” All out of bright ideas, you add a random call to log an empty string and, lo, a miracle! The bug is fixed! You run the app a few more times to make sure, and sure enough, still working. You delete the logger call, and the bug comes back. Put the logger call back in, and the bug goes away.
So you commit it and go home.
It’s 9:30, you haven’t eaten, and you just. don’t. care. anymore. Logging an empty string as a fix for the bug makes no sense, but, whatever, you’re going home. You call it a night, throwing the “don’t program by coincidence” aphorism out the window.
Of course, you have the best of intentions to come back in and solve the mystery the next day. But by the time you get in the next morning, there are various other fires that need putting out, and you never revisit the matter. The logger ‘fix’ ships and, no doubt, comes back to haunt some poor sap years after you’ve left the company.
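Why would logging an empty string ever appear to “fix” anything? One common culprit is a timing-sensitive bug, such as a data race, where the extra I/O just nudges the thread interleaving enough to hide the problem. The sketch below is purely illustrative (it isn’t from the original story): an unsynchronized counter that tends to lose updates, alongside the deliberate fix, a lock, that actually removes the race rather than masking it.

```python
import threading

def increment_unsafely(counter, n):
    # Read-modify-write without a lock: two threads can read the same
    # value and both write back value + 1, losing an increment. An
    # "innocent" log call inside this loop can shift the interleaving
    # enough that the losses seem to disappear -- a coincidence, not a fix.
    for _ in range(n):
        value = counter["total"]
        counter["total"] = value + 1

def increment_safely(counter, lock, n):
    # The deliberate fix: make the read-modify-write atomic with a lock.
    for _ in range(n):
        with lock:
            counter["total"] += 1

def run(worker, *args, threads=4):
    pool = [threading.Thread(target=worker, args=args) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()

counter = {"total": 0}
run(increment_unsafely, counter, 100_000)
print(counter["total"])  # often less than 400000, and it varies run to run

counter = {"total": 0}
run(increment_safely, counter, threading.Lock(), 100_000)
print(counter["total"])  # always 400000: the race is actually gone
```

The point of the contrast: the lock version is a fix you can explain and trust, while any timing-based “fix” in the first version only changes how often the bug bites.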
The Real Issue
It’s easy to say that the moment of wrong decision occurred when you committed the changes without understanding why they worked. That’s probably conventional wisdom, but I, personally, see a little more nuance to the issue. It’s pretty understandable to go with a solution that you’ve seen work, even if you’re not fully sure why, so I think the moment of trying something that makes no sense shares some of the blame.
Make no mistake: I’m not saying that you shouldn’t try weird things or throw the occasional Hail Mary. After all, as Mark Twain once quipped, “accident is the greatest of all inventors.” But you should take this action with awareness of the moral hazard involved. If your zany scheme actually works, you’re in something of a bind now, aren’t you?
In fact, I would argue that the following is a forced ranking, from best to worst, of the possible outcomes of trying something that shouldn’t work, like logging an empty string:
- It doesn’t work at all.
- It works unreliably.
- It fixes the problem.
I know. It seems strange to say that fixing the bug is the worst outcome, but, remember the precondition here is that you’re trying something that you don’t expect to work. The ranking looks a lot different when you implement a solution that should work. But when you try something that shouldn’t work, and it does, you’re kind of worse off than you were originally.
To really drive this point home, let’s raise the stakes a little. You’re no longer sitting at the office programming, but lying in your bed sleeping. Suddenly, you wake up to the piercing sound of… something. You stumble out of bed and see that the source of the alarm is your carbon monoxide detector, presumably warning you of a carbon monoxide leak in your house. Yikes!
In disbelief that this is actually happening, you decide to implement the classic engineering solution of “slap it and see what happens.” Now, this, right here, is where things get dicey. Should this work? Do you trust it if it does? Let’s revisit the previous forced ranking in this context.
- Slapping the detector doesn’t do anything. Good. You should call the fire department anyway.
- Slapping it shuts it off for a moment, but then it starts up again. Now you’re less sure about calling the fire department, but you do it anyway. Good.
- Slapping it shuts it off for good. So, great. What now? Do you go back to sleep? Really?!
With your life on the line, you’re probably going to make very sure you understand the actual cause of the issue. And, so, in this high stakes context, it’s easy to see that the worst case scenario for getting to the bottom of an issue is when some stupid half-measure makes the problem go away. When it matters, you won’t be satisfied with that, so you might as well not bother with the half measure in the first place.
Certainly not every software bug is as critical as your carbon monoxide detector going off, but hopefully the high-stakes troubleshooting example drives the point home. Programming by coincidence is never a path to awesome outcomes. At best, and also kind of at worst, you’re earning a temporary reprieve in exchange for long-term uncertainty about your code’s viability. So think of your carbon monoxide detector the next time you’re throwing a Hail Mary at a bug, and ask yourself whether it’s worth it.