Are Unit Tests Worth It?

The Unit Test Value Proposition

I gave a presentation yesterday on integrating unit tests into a build. (If anyone is interested in seeing it, feel free to leave a comment, and I’ll post relevant slides to SlideShare or perhaps make the PowerPoint available for download.) This covered the nuts and bolts of how I had added test running to the build machine as well as how to verify that a delivery wouldn’t cause unit test failures and thus break the build. For background, I presented some statistics about unit testing and the motivations for a test-guarded integration scheme.

One of the questions that came up during Q&A was from a bright woman in the audience who asked what percentage of development time was spent writing unit tests for an experienced test writer and for a novice writer. My response to this was that it would be somewhat slower going at first, but that an experienced TDD developer was just as fast doing both as a non-testing developer in the short term and faster in the long term (less debugging and defect fixing). From my own personal experience, this is the case.

She then asked a follow up question about what kind of reduction in defects it brought, and I saw exactly what she was driving at. This is why I mentioned that she is an intelligent woman. She was looking for a snap-calculation as to whether or not this was a good proposition and worth adopting. She wanted to know exactly how many defects would be avoided by x “extra” days of effort. If 5 days of testing saved 6 days of fixing defects, this would be worth her time. Otherwise, it wouldn’t.

An Understandable but Misguided Assessment

In the flow of my presentation (which wasn’t primarily about the merits of unit testing, but rather how not to break the build), I missed an opportunity to make a valuable point. I wasn’t pressing and trying to convince people to test if they didn’t want to. I was just trying to show people how to run the tests that did exist so as not to break the build.

Let’s consider what’s really being asked here. She’s driving at an underlying narrative roughly as follows (picking arbitrary percentages):

My normal process is to develop software that is 80% correct and 20% incorrect and declare it to be done. The 80% of satisfied requirements are my development, and the 20% of missed requirements/regressions/problems is part of a QA phase. Let’s say that I spend a month getting to this 80/20 split and then 2 weeks getting the rest up to snuff, for a total of 6 weeks of effort. If I can add unit testing and deliver a 100/0 split, but only after 7 weeks, then the unit testing isn’t worthwhile; but if I can get the 100/0 split in under 6 weeks, then this is something that I should do.

Perfectly logical, right?

Well, almost. The part not factored in here is that declaring software to be done when it’s 80% right is not accurate. It isn’t done. It’s 80% done and 20% defective. But, it’s being represented as 100% done to external stakeholders, and then tossed over the fence to QA with the rider that “this is ‘done’, but it’s not done-done. And now, it’s your job to help me finish my work.”

So, there’s a hidden cost here. It isn’t the straightforward value proposition that can be so easily calculated. It isn’t just our time as developers — we’re now roping external stakeholders into helping us finish by telling them that we’ve completed our work, and that they should use the product as if it were reliable when it isn’t. This isn’t like submitting a book to an editor and having them perform quality assurance on it. In that scenario, the editor’s job is to find typos and your job is to nail down the content. In the development/QA world, your job is to ensure that your classes (units) do what you think they should, and it’s QA’s job to find integration problems, instances of misunderstood requirements, and other user-test type things. It’s not QA’s job to discover an unhandled exception where you didn’t check a method parameter for null — that’s your job. And, if you have problems like that in 20% of your code, you’re wasting at least two people’s time for the price of one.
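To make that concrete, here’s a minimal sketch in Java with JUnit of the kind of unit-level check I’m talking about (NameFormatter and its methods are hypothetical, invented purely for illustration):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test: the kind of unit whose basic
// correctness is the developer's job to verify, not QA's.
class NameFormatter {
    public String format(String first, String last) {
        if (first == null || last == null) {
            throw new IllegalArgumentException("names must not be null");
        }
        return last + ", " + first;
    }
}

public class NameFormatterTest {
    @Test
    public void formats_as_last_comma_first() {
        assertEquals("Smith, Jane", new NameFormatter().format("Jane", "Smith"));
    }

    // Pins down the null-parameter behavior explicitly, so an unhandled
    // NullPointerException never makes it over the wall to QA.
    @Test(expected = IllegalArgumentException.class)
    public void rejects_null_first_name() {
        new NameFormatter().format(null, "Smith");
    }
}
```

The specific assertion doesn’t matter; what matters is that this class of defect gets caught at build time, before anyone in QA burns an afternoon discovering the crash.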

Efficiency: Making More Mistakes in Less Time

Putting a number to this in terms of “if x is greater than y, then unit testing is a good idea” is murkier than it seems because of the waste of others’ time. It gets murkier still when concepts like technical debt and stakeholder trust of developers are factored in. Tested code tends to be a source of less technical debt, given that it’s usually more modular, maintainable, flexible, etc. It also tends to inspire more confidence in collaborators: you may run a little behind schedule here and there, but when things are delivered, they work.

On the flipside of that, you get into the proverbial software death march, particularly in less agile shops. Some drop-dead date is imposed for feature complete, and you frantically write duct-tape software up until that date, and then chuck whatever code grenade you’re holding over the QA wall and hope the shrapnel doesn’t blow back too hard on you. The actual quality of the software is a complete mystery and it may not be remotely close to shippable. It almost certainly won’t be something you’re proud to be associated with.

One of my favorite lines in one of my favorite shows, The Simpsons, comes from the Homer character. In an episode, he randomly decides to change his name to Max Power and assume a more go-getter kind of identity. At one point, he tells his children, “there are three ways of doing things: the right way, the wrong way, and the Max Power way.” Bart responds by saying, “Isn’t that just the wrong way?” to which Homer (Max Power) replies, “yes, but faster!”

That’s a much better description of the “value” proposition here. It’s akin to being a student and saying “It’s much more efficient to get grades of C and D because I can put in 10 hours per week of effort to do that, versus 40 hours per week to get As.” In a narrow sense that’s true, but in the broader sense of efficiency at being a good student, it’s a very unfortunate perspective.  The same kind of nuanced perspective holds in software development.  Sacrificing an objective, early-feedback quality mechanism such as unit tests in the interests of being more “efficient” just means that you’re making mistakes more efficiently.  And, getting more things wrong in the same amount of time is a process bug — not a feature.

So, for my money, the idea of making a calculation as to whether or not verifying your work is worthwhile misses the point.  Getting the software right is going to take you some amount of time X.  You have two options here.  The first option is to spend some fraction of X working and then claim to be finished when you’re not, at which point you’ll spend the rest of X “fixing” the fact that you didn’t finish.  The second option is to spend the full time X getting it right.

If you set a standard for yourself that you’re only going to deliver correct software, the timelines work themselves out.  If you have a development iteration that will take you 6 weeks to get right, and the business tells you that you only get 4, you can either deliver them “all” of what they want in 4 weeks with the caveat that it’s 33% defective, or you can say “well, I can’t do that for you, but if you pick this subset of features, I’ll deliver them flawlessly.”  Any management that would rather have the “complete” software with defect landmines littering 33% of the codebase than 2/3rds of the features done right needs to do some serious soul-searching.  It’s easy to sell excellent software with the most important 2/3rds of the features and the remaining third two weeks out.  It’s hard to sell crap at any point in time.

So, the real value proposition here boils down only to “do I want to be adept at writing unreliable software or do I want to be adept at writing software that inspires trust?”

Forget Design Documents

Waterfalls Take Time

I sat in on a meeting the other day and heard some discussion about late-breaking requirements and whether or not these requirements should be addressed in a particular release of software. One of the discussion participants voiced an opinion in the negative, citing as rationale that there was not sufficient time to generate a document with all of these requirements, generate a design document to discuss how these would be implemented, and then write the code according to this document.

This filled me with a strange wistfulness. I’m actually not kidding – I felt sad in a detached way that I can’t precisely explain.

The closest that I can come is to translate this normal-seeming statement of process into real work terms. The problem and reason that a change couldn’t be absorbed was because there was no time to take the requirements, transpose them to a new document, write another document detailing exactly what to do, execute the instructions in the second document, and continually update the second document when it turned out not to be prescient.

Or, more concisely, a lot of time was required to restate the problem, take a guess at how one would solve it, and then continually and exhaustively refine that guess.

I think the source of my mild melancholy was the sense that, not only was this wasteful, but it also kind of sucks the marrow out of life (and not in the good, poetic way). If we applied this approach to cooking breakfast, it would be like taking a bunch of fresh ingredients out of the fridge, and then getting in the car with them and going to the grocery store to use them as a shopping list for buying duplicate ingredients. Then, with all the shopping prepared, it’s time to go home, get out the pots and pans, roll up your sleeves and… leave the kitchen to sit at your desk for an hour or two slaving over a written recipe.

Once the recipe is deemed satisfactory in principle by your house guests, cooking would commence. This would involve flour, herbs, spices, eggs, and of course — lots more writing. If you accidentally used a pinch instead of a dash, it’d be time to update the recipe as you cooked.

Kinda sucks the life right out of cooking, and it’s hard to convince yourself that things wouldn’t have gone better had you just cooked breakfast and jotted down the recipe if you liked it.

Why Do We Do This?

If you’ve never read Samuel Taylor Coleridge’s “The Rime of the Ancient Mariner”, you’ve missed out on a rather depressing story and the origin of the idea of an “albatross around your neck” as something that you get saddled with that bogs you down.

Long story short, during the course of this poem, the main character kills an albatross that he superstitiously believes to be holding him back, only to have the thing forced around his neck as both a branding and a punishment for angering spiritual entities with some clout. During the course of time wearing the albatross, all manner of misfortune befalls the mariner.

In the world of software development, (and ignoring the portion of the story where the mariner shoots the albatross), some of our documentation seems to become our albatross. These documents that once seemed our friends are very much our curse — particularly the design document.

No longer do they help us clarify our vision of the software as it is supposed to look, but rather they serve as obligatory bottlenecks, causing us frequent rework and cruelly distracting us by whispering in our ears echoing chants of “CMMI” or “ISO”. Getting rid of them by simply ceasing to do them seems simple to the outsider, but is thoroughly impossible for us software mariners. We soldier on in our doomed existence, listlessly writing and doing our best to keep these documents up to date.

Let’s Just Not Do It

Things need not be this bleak. Here’s a radical idea regarding the design document — just don’t make one. Now, I know what you’re thinking — how are you going to have any idea what to do if you don’t lay it all out in a blueprint? Well, simple. You’ll use a combination of up-front whiteboard sketching and test driven development. These two items will result in both better documentation, and far less wasted time.

President Eisenhower once said, “Plans are nothing, planning is everything.” This philosophy applies incredibly well to software design.

Capturing your design intentions at every step of the way is truly meaningless. Source control captures the actual state of the code, so who cares what the intentions were at a given moment? These ephemeral notions of intent are best written and left on whiteboards in the offices of developers to help each other collaborate and understand the system.

Does the end-user care about this in the slightest? Of course not! Does anyone at all care about these once they become outdated? Nope. So, why capture and update them as you go?

Capture reality when you’re done, and only as much as is required by a given stakeholder. The more you document superfluously, the more documents you have that can get out of date. And, when your docs get out of date, you have a situation worse than no documentation — lying documentation.

The other main component of this approach is test driven development. Test driven development forces the creation and continuous updating of arguably the best form of documentation there is for fellow developers: examples. As Uncle Bob points out in the three rules of TDD, the target audience is developers, and given the choice between paragraphs/diagrams and example code, developers will almost without exception choose example code. And example code is exactly the by-product of TDD: a set of examples guaranteed to be accurate and current.
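To illustrate the idea of tests as examples, here’s a minimal sketch in Java with JUnit (the Money class is made up for the occasion, not something from the three rules themselves):

```java
import static org.junit.Assert.*;
import org.junit.Test;

// Hypothetical value type used only to illustrate tests-as-documentation.
class Money {
    final int amount;
    final String currency;
    Money(int amount, String currency) { this.amount = amount; this.currency = currency; }
    static Money dollars(int amount) { return new Money(amount, "USD"); }
    static Money euros(int amount) { return new Money(amount, "EUR"); }
    // A real implementation would reject mismatched currencies.
    Money plus(Money other) { return new Money(amount + other.amount, currency); }
    @Override public boolean equals(Object o) {
        return o instanceof Money && ((Money) o).amount == amount
                && ((Money) o).currency.equals(currency);
    }
    @Override public int hashCode() { return 31 * amount + currency.hashCode(); }
}

public class MoneyTest {
    // A new team member can read these tests as worked examples of the API.
    @Test
    public void adding_two_amounts_returns_their_sum() {
        assertEquals(Money.dollars(10), Money.dollars(5).plus(Money.dollars(5)));
    }

    @Test
    public void amounts_in_different_currencies_are_not_equal() {
        assertNotEquals(Money.dollars(5), Money.euros(5));
    }
}
```

A new team member doesn’t have to trust a design document that may have rotted; these tests compile and run against the current code, so the documentation can’t silently drift.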

And, what’s more, TDD doesn’t just provide unit tests and an accurate sample of how to use the software — it drives good design. A TDD code base tends to be modular, decoupled and flexible simply by the emergent design through the process. This means that TDD as you go is likely to provide you with every bit as good a design as you could hope to have by rubbing your crystal ball and pretending to know exactly what 200 classes are going to be necessary up front.

Is it Really that Simple?

Well, maybe, maybe not. The solution can certainly be just that simple – ceasing to do something you’re in the habit of doing is excellent at forcing change. However, the whiteboard culture and the art of TDD certainly require some conscious practice.

So, you may want to take both for a test drive before simply scrapping your diligent design documentation. But, personally, I’d say “jump in – the water’s fine!” I don’t think I’ve ever regretted not bothering with an ‘official’ design document. I’m happy to draw up documents to communicate the function of the system at a given moment (instruction manuals, package diagrams, quick overviews, whatever), but creating them to be predictive of functionality and updating them from inception to the end of time… no thank you.

The documents that we create are designed to promote understanding and help communication — not to serve as time-sucking millstones around our necks that we cite as reasons for not providing functionality to users.


10 Ways to Improve Your Code Reviews

For a change of pace, I thought I’d channel a bit of Cosmo and offer a numbered article today. I’ve been asked by others to do a lot of code reviews lately, and while doing this, I’ve made some notes as to what works, what doesn’t, and how I can improve. Here are those notes, enumerated and distilled into a blog post.

  1. Divide and distribute. Have one person look for duplicate code chunks, one look for anti-patterns, one check for presence of best practices, etc. It is far easier to look through code for one specific consideration than it is to read through each method or line of code looking for anything that could conceivably be wrong. This also allows some people to focus on coarser-granularity concerns (modules and libraries) with others focused on finer ones (classes and methods). Reading method by method sometimes obscures the bigger picture, and concentrating on the bigger picture glosses over details.
  2. Don’t check for capitalization, naming conventions and other minutiae. I say this not because I don’t care about coding conventions (which I kind of don’t), but because this is a waste of time. Static analysis tools can do this. Your build can be configured not to accept checkins/deliveries that violate the rules. This is a perfect candidate for automation, so automate it. You wouldn’t waste time combing a document for spelling mistakes when you could turn on spell-check, and the same principle applies here.
  3. Offer positive feedback. If the code review process is one where a developer submits code and then defends it while others try to rip it to pieces, things become decidedly adversarial, potentially causing resentment. But, even more insidiously, unmitigated negativity will tend to foster learned helplessness and/or get people to avoid code reviews as much as possible.
  4. Pair. If you don’t do it, start doing it from time to time. If you do it, do it more. Pairing is the ultimate in code review. If developers spend more time pairing and catching each other’s mistakes early, code reviews after the fact become quicker and less involved.
  5. Ask why, don’t tell what. Let’s say that someone gets a reference parameter in a method and doesn’t check it for null before dereferencing it. Clearly, this is bad practice. But, instead of saying “put a null check there”, ask, “how come you decided not to check for null — is it safe to assume that your callers never pass you null?” Obviously, the answer to that is no. And, the thing is, almost any programmer will realize this at that point and probably say “oh, no, that was a mistake.” The key difference here is that the reviewee is figuring things out on his or her own, which is clearly preferable to being given rote instruction.
  6. Limit the time spent in a single code review. Given that this activity requires collaboration and sometimes passive attention, attention spans will tend to wane. This, in turn, produces diminishing marginal returns in terms of effectiveness of the review. This isn’t rocket science, but it’s important to keep in mind. Short, focused code reviews will produce effective results. Long code reviews will tend to result in glossing over material and spacing out, at which point you might as well adjourn and do something actually worthwhile.
  7. Have someone review the code simply for first-glance readability/understanding. There is valuable information that can be mined from the reaction of an experienced programmer to new code, sight unseen. Specifically, if the reaction to some piece of the code is “what the…”, that’s a good sign that there are readability/clarity issues. The “initial impression” litmus test is lost once everyone has studied the code, so having someone capture that at some point is helpful.
  8. Don’t guess and don’t assume — instead, prove. Rather than saying things like “I think this could be null here” or “This seems like a bad idea”, prove those things. Unit tests are great. If you see a flaw in someone’s code, expose it with a failing unit test and task them with making it pass (see the sketch after this list). If you think there’s a performance problem or design issue, support your contention with a sample program, blog post, or whitepaper. Opinions are cheap, but support is priceless. This also has the added benefit of removing any feelings of being subject to someone else’s whims or misconceptions.
  9. Be prepared. If this is a larger, meeting-oriented code review, the people conducting the review should have read at least some of the code beforehand and should be familiar with the general design (assuming that someone has already performed and recorded the results from suggestion 7). This allows the meeting to run more efficiently and the feedback to be more meaningful than a situation where everyone is reading the code for the first time. In that situation, things get missed, since people start to feel uncomfortable while everyone waits for them to catch up.
  10. Be polite and respectful.    You would think that this goes without saying, but sadly, that seems not to be the case.  In my career, I have encountered many upbeat, intelligent and helpful people, but I’ve also encountered plenty of people who seem to react to everything with scorn, derision, or anger.  If you know more than other people, help them.  If they’re making basic mistakes, help them understand.  If they’re making the same mistakes multiple times, help them find a way to remember.  Sighing audibly, rolling your eyes, belittling people, etc, are not helpful.  It’s bad for them, and it’s bad for you.  So please, don’t do that.
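To make tip 8 concrete, here’s a minimal sketch in Java with JUnit (the class and names are hypothetical) of proving a suspected defect with a failing test rather than an opinion:

```java
import static org.junit.Assert.assertEquals;
import java.util.List;
import org.junit.Test;

// Hypothetical code under review: dereferences its parameter without a guard.
class OrderPricer {
    public double total(List<Double> itemPrices) {
        double total = 0;
        for (double price : itemPrices) { // NullPointerException if itemPrices is null
            total += price;
        }
        return total;
    }
}

public class OrderPricerTest {
    // The reviewer submits this currently-failing test instead of saying
    // "I think this could be null here," and asks the author to make it pass.
    @Test
    public void total_of_a_missing_item_list_is_zero() {
        assertEquals(0.0, new OrderPricer().total(null), 0.0001);
    }
}
```

The review comment then becomes “make this test pass,” which is specific, impersonal, and impossible to argue with.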

Feel free to chime in with additional tips, agreements, disagreements, etc.

Writing Maintainable Code Demands Creativity

Writing maintainable code is for “Code Monkeys”?

This morning, I read an interesting blog post from Erik McClure. The premise of the post is that writing maintainable code is sufficiently boring and frustrating as to discourage people from the programming profession. A few excerpts from the post are:

There is an endless stream of rheteric [sic] discussing how dangerous writing clever code is, and how good programmers don’t write clever code, or fast code, but maintainable code, that can’t have clever for loop statements or fancy tricks. This is wrong – good codemonkeys write maintainable code.

and

What I noticed was that it only became intolerable when I was writing horrendously boring, maintainable code (like unit tests). When I was programming things I wasn’t supposed to be programming, and solving problems in ways I’m not supposed to, my creativity had found its outlet. Programming can not only be fun, but real programming is, itself, an art, a solution to a problem that itself embodies a certain elegance. Society in general seems to be forgetting what makes us human in the wake of a digital revolution that automatizes menial tasks. Your creativity is the most valuable thing you have.

When you write a function that does things the right way, when you refactor a clever subroutine to something that conforms to standards, you are tearing the soul out of your code. Bit by bit, you forget who you are and what the code means. The standards may enforce maintainable and well-behaved code, but it does so at the cost of your individuality. Coding becomes middle-school math, where you have to solve the same, reworded problem a hundred times to get a good grade so you can go do something else that’s actually useful. It becomes a means to an end, not an adventure in and of itself.

Conformity for the Sake of Maintainability

There seem to be a few different themes here. The first one I see is one with which I have struggled myself in the past: chafing at being forced to change the way you do things to conform to a group convention. I touched on that here and here. The reason that I mention this is the references to “[conforming] to standards” and the apparent justification of those standards being that they make code “maintainable”. The realpolitik of this situation is such that it doesn’t really matter what justification is cited (appeal to maintainability, appeal to convention, appeal to anonymous authority, appeal to named authority, appeal to threat, etc). In the end, it boils down to “because I said so”. I mention this only insofar as I will dismiss this theme as not having much to do with maintainability itself. Whatever Erik was forced to do may or may not have actually had any bearing whatsoever on the maintainability of the code (i.e. “maintainability” could have been code for “I just don’t like what you did, but I should have an official sounding reason”).

Maintainable Code is Boring Code

So, on to the second theme, which is that writing maintainable code is boring. In particular, Erik mentions unit tests, but I’d hazard a guess that he might also be talking about small methods, SRP classes, and other clean coding principles. And, I actually agree with him to an extent. Writing code like that is uneventful in some ways that people might not be used to.

That is, say that you don’t perform unit tests, and you write large, coupled, complex classes and methods. When you fire up your application for the first time after a few hours of coding, that’s pretty exciting. You have no idea what it’s going to do, though the smart money is on “crash badly”. But, if it doesn’t, and it runs, that’s a heady feeling and a rush — like winning $100 in the lottery. The work is also interesting because you’re going to be spending lots of time in the debugger, writing bunches of local variables down on paper to keep them straight. Keeping track of all of the strands of your code requires full concentration, and there’s a feeling of incredible accomplishment when you finally track down that needle in-a-haystack bug after an all-nighter.

On the flip side, someone who writes a lot of tests and conforms to the clean code/craftsmanship mantra has a less exciting life. If you truly practice TDD, the first time you fire up the application, you already know it’s going to work. The lottery-game excitement of longshot odds with high payoff is replaced by a dependable salary. And, as for the maddening all-nighter bugs, those too are gone. You can pretty much reproduce a problem immediately, and solve it just as quickly with an additional failing test that you make pass. The underdog, down by a lot all game and mounting a miracle comeback, is replaced by a game where you’re winning handily from wire to wire. All of the roller-coaster highs and lows, with their panicked all-nighters and miracle finishes, are replaced by you coming in at 9, leaving at 5, and shipping software on time or ahead of schedule.

Making Code Maintainable Is Brainless

The third main theme that I saw was the idea that writing clever code and writing maintainable code are mutually exclusive, and that the latter is brainless. Presumably, this is colored to some degree by the first theme, but on its own, the implication is that maintainable code is maintainable because it is obtuse and insufficient to solve the problem. That is, instead of actually solving the problems that they’re tasked with, the maintainable-focused drones oversimplify the problem and settle for meeting most, but not all, of the requirements of the software.

I say this because of Erik’s vehement disagreement with the adage that roughly says “clever code is bad code”. I’ve seen this pithy expression explained in more detail by people like Uncle Bob (Robert Martin) and I know that it requires further explanation because it actually sounds discouraging and draconian stated simply. (Though, I think this is the intended, provocative effect to make the reader/listener demand an explanation). But, taken at face value I would agree with Erik. I don’t relish the thought of being paid a wage to keep quiet and do stupid things.

Maintainability Reconsidered

Let’s pull back for a moment and consider the nature of software and software development. In his post, Erik bleakly points out that software “automatizes[sic] menial tasks”. I agree with his take, but with a much more optimistic spin — I’m in the business of freeing people from drudgery. But, either way, there can be little debate that the vast majority of software development is about automating tasks — even game development, which could be said to automate pretend playground games, philosophically (kids don’t need to play “cops and robbers” on the playground using sticks and other makeshift toys when games about detectives and mobsters automate this for them).

And, as we automate tasks, what we’re doing is taking tasks that have some degree of intrinsic complexity and falling on the grenade of that complexity so that completing the task is simple, intuitive, and even pleasant (enter user experience and graphic design) for prospective users. So, as developers, we deal with complexity so that end users don’t have to. We take complicated processes and model them, and simplify them without oversimplifying them. This is at the heart of software development and it’s a task so complex that all manner of methodologies, philosophies, technologies, and frameworks have been invented in an attempt to get it right. We make the complex simple for our non-technical end users.

Back in the days when software solved simpler problems than it does now, things were pretty straightforward. There were developers and there were users. Users didn’t care what the code looked like internally, and developers generally operated alone or perhaps in small teams for any given task. In this day and age, end-users still don’t care what the code looks like, but development teams are large, sometimes enormous, and often distributed geographically, temporally, and according to specialty. You no longer have a couple of people that bang out all necessary code for a project. You have library writers, database people, application coders, graphic designers, maintenance programmers etc.

With this complexity, an interesting paradigm has emerged. End-users are further removed, and you have other, technical users as well. If you’re writing a library or an IDE plugin, your users are other programmers. If you’re writing any code, the maintenance programmer that will come along later is one of your users. If you’re an application developer, a graphic designer is one of your users. Sure, there are still end-users, but there are more stakeholders now to consider than there used to be.

In light of this development, writing code that is hard to maintain and declaring that this is just how you do things is a lot like writing a piece of code with an awful user interface and saying to your users “what do you want from me — it works, doesn’t it?” You’re correct, and you’re destined to go out of business. If I have a programmer on my team who consistently and proudly writes code that only he understands and only he can decipher, I’m hoping that he’s on another team as soon as possible. Because the fact of the matter is that anybody can write code that meets the requirements, but only a creative, intelligent person can do it in such a way that it’s quickly understood by others without compromising the correctness and performance of the solution.

Creativity, Cleverness and Maintainability

Let’s say that I’m working and I find code that sorts elements in a list using bubble sort, which is conceptually quite simple. I decide that I want to optimize, so I implement quick sort, which is more complex. One might argue that I’m being much more clever, because quick sort is a more elegant solution. But, quicksort is harder for a maintenance programmer to grasp. So, is the solution to leave bubble sort in place for maintainability? Clearly not, and if someone told Erik to do that, I understand and empathize with his bleak outlook. But then, the solution also isn’t just to slap quicksort in and call it a day either. The solution is to take the initial implementation and break it out into methods that wrap the various control structures and have descriptive names. The solution is to eliminate one method with a bunch of incrementers and decrementers in favor of several with more clearly defined scope and purpose. The solution is, in essence, to teach the maintenance programmer quicksort by making your quicksort code so obvious and so readable that even the daft could grasp it.
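To make that concrete, here is a rough sketch of the kind of refactoring I mean (Java, with method names I invented for illustration): a quicksort whose control structures are extracted into methods that teach the algorithm as you read it.

```java
import java.util.Arrays;

// A quicksort written for the maintenance programmer: each control
// structure is pulled into a method whose name states its purpose.
public class ReadableQuickSort {

    public static void sort(int[] values) {
        sortRegion(values, 0, values.length - 1);
    }

    private static void sortRegion(int[] values, int first, int last) {
        if (first >= last) {
            return; // a region of zero or one element is already sorted
        }
        int pivotIndex = partitionAroundPivot(values, first, last);
        sortRegion(values, first, pivotIndex - 1);
        sortRegion(values, pivotIndex + 1, last);
    }

    // Moves everything smaller than the pivot to its left, everything
    // larger to its right, and returns the pivot's final resting place.
    private static int partitionAroundPivot(int[] values, int first, int last) {
        int pivot = values[last];
        int boundary = first; // first index not yet known to hold a value < pivot
        for (int current = first; current < last; current++) {
            if (values[current] < pivot) {
                swap(values, current, boundary);
                boundary++;
            }
        }
        swap(values, boundary, last);
        return boundary;
    }

    private static void swap(int[] values, int i, int j) {
        int temp = values[i];
        values[i] = values[j];
        values[j] = temp;
    }

    public static void main(String[] args) {
        int[] sample = {5, 2, 8, 1, 9, 3};
        sort(sample);
        System.out.println(Arrays.toString(sample)); // [1, 2, 3, 5, 8, 9]
    }
}
```

Whether these particular names are the right ones is debatable, and that’s the point: the creative work is in choosing the decomposition and the names that make the cleverness legible.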

That is not easy to do. It requires creativity. It requires knowing the subject matter not just well enough to get it right through trial and error, not just well enough to know it cold and not just well enough to explain it to the brightest pupil, but well enough that your explanation shines through the code and is unmistakable. In other words, it requires creativity, mastery, intelligence, and clarity of thought.

And, when you do things this way, unit tests and other ‘boring’ code become less boring. They let you radically alter the internal mechanism of an algorithm without compromising its correctness. They let you conduct time trials on the various components as you go to ensure that you’re not sacrificing performance. And, they document further how to use your code and clarify its purpose. They’re no longer superfluous impositions but tools in your arsenal for solving problems and excelling at what you do. With a suite of unit tests and refactored code, you’re able to go from bubble sort to quicksort knowing that you’ll get immediate feedback if something goes wrong, allowing you to focus exclusively on a slick algorithm. They’ll even allow you to go off tilting at tantalizing windmills like an unprecedented linear-time sort — hey, if it runs in linear time and the tests all go green, you’re up for some awards and speaking engagements. For all that cleverness, you ought to get something.
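To sketch the safety net itself (again hypothetical, reusing the ReadableQuickSort class from above): tests like these pin down what sorting means while saying nothing about how it is done, which is exactly what lets you swap bubble sort for quicksort with impunity.

```java
import static org.junit.Assert.assertArrayEquals;
import org.junit.Test;

// These tests specify the behavior of sorting, not the algorithm, so the
// implementation underneath them can change freely.
public class SortBehaviorTest {
    @Test
    public void sorts_an_unordered_array_ascending() {
        int[] values = {5, 2, 8, 1, 9, 3};
        ReadableQuickSort.sort(values);
        assertArrayEquals(new int[] {1, 2, 3, 5, 8, 9}, values);
    }

    @Test
    public void leaves_an_empty_array_alone() {
        int[] values = {};
        ReadableQuickSort.sort(values);
        assertArrayEquals(new int[] {}, values);
    }
}
```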

So Why is “Cleverness” Bad?

What do they mean that cleverness is bad, anyway? Why say something like that? The aforementioned Bob Martin, in a video presentation I once watched, said something like “you know you’re looking at good code when someone reading it is saying ‘uh-huh’, ‘uh-huh’, ‘yep’, ‘makes sense’, ‘uh-huh'”. Contrast this with code that you see where your first reaction is “What on Earth…?!?” That is often the reaction to non-functional or incorrect code, but it is just as frequently the reaction to ‘clever’ code.

The people who believe in the idea of avoiding ‘clever’ code are talking about Rube-Goldbergian code, generally employed to work around some hole in their knowledge. This might refer to someone who defines 500 functions containing 1 through 500 if statements because he isn’t aware of the existence of a “for” loop. It may be someone who defines and heavily uses something called IndexedList because he doesn’t know what an array is. It may be this, this, this or this. I’ve seen this in the wild, where I was looking at someone’s code and I said to myself, “for someone who didn’t know what a class is, this is a fairly clever oddball attempt to replicate the concept.”

The term ‘clever’ is very much tongue-in-cheek in the context of clean coding and programming wisdom. It invariably means a quixotic, inferior solution to a problem that has already been solved and whose solution is common knowledge, or it means a needlessly complex, probably flawed way of doing something new. Generally speaking, the only person who thinks that it is actually clever, sans quotes, is the person who did it and is proud of it. If someone is truly breaking new ground, that solution won’t be described as ‘clever’ in this sense, but probably as “innovative” or “ground-breaking” or “impressive”. Avoiding ‘clever’ implementations is about avoiding pointless hacks and bad reinventions of the wheel — not avoiding using one’s brain. If I were coining the phrase, I’d probably opt for “cute” over “clever”, but I’m a little late to the party. Don’t be cute — put the considerable brainpower required for elegant problem solving to problems that are worth solving and that haven’t already been solved.

Stop Online Piracy Act (SOPA): History Repetition for the Doomed

What is SOPA?

For anyone not familiar with the Stop Online Piracy Act (SOPA), a brief primer can be found on Wikipedia. It is an act being considered by the United States Congress that would allow websites accused (not found guilty of, but accused) of engaging in piracy, promoting piracy, or simply creating an environment where piracy could theoretically take place to be blacklisted in DNS. For those not familiar with DNS, it is the ubiquitous service that takes a pretty URL like https://daedtech.com and resolves it to the numeric IP address where the site actually lives. Under SOPA, if someone accused DaedTech of supporting piracy, typing the URL into your browser would stop working, searching for it on Google would stop working, and pretty much everything about the site would stop working. DaedTech would be guilty until proven innocent.

The bill is driven by intentions that are reasonable (but obsolete, as I’ll argue later) — to protect intellectual property. It’s aimed at stopping someone from bootlegging movies or music and setting up a website to share the plunder with legions of would-be pirates. But the approach that they’re taking is a bit like cutting off your leg to address a splinter in your pinky toe. I say this because “guilty until proven innocent” tends to result in paradigms more like the Salem Witch Trials and McCarthyism than it does in reasoned, judicious use of preventative force. In the current system, people are on their honor not to be criminals, which is how most of society works. With a “guilty until proven innocent” system, the system is on its honor not to abuse its power, which is how scenarios like that in 1984 get started. I submit that the only thing less trustworthy than John Q Public to be on his honor is The Prince being trusted on his honor. But, I digress.

The end result is that the internet as a boundless repository for all manner of ideas, big and weird, all forms of collaboration, and all kinds of expression, from the disgusting to the divine, would be fundamentally altered, at least in the short term. It would cease to be a sometimes scary, under-policed realm and would, instead, become a “walled garden”. And the walls would be manned by corporations like the RIAA, MPAA, large software companies, and generally speaking, armies and armies of lawyers. Nevermind that none of these organizations save the software companies are particularly expert in the workings of the internet or technology – that isn’t the point. The point is to make sure that the internet is a place for free expression… so long as that expression doesn’t interfere with certain revenue streams. This is entirely rational (I sympathize with any entity protecting its own bottom line) on the part of the would-be police, but to call it a gutshot to the spirit of the internet would be an understatement.

How did it Come to This?

I’ll get to that in a moment, but first, let’s consider a term that most people have probably heard once or twice: Luddite. In most contexts, people use this to describe those afraid of and/or suspicious of new technology. So, it may describe a friend of yours who refuses to buy a cell phone and bemoans that people constantly use them. Or, it may describe a relative who has been writing letters all his life and isn’t about to stop now for some kind of “email fad”. But, the term Luddite arises from a very real historic scenario, with a subtly different and richer message.

In the late 1700’s and early 1800’s, the French Revolution and ensuing Napoleonic Wars set the stage for rough economic times by virtue of Europe-wide destruction and devastation. Add to this an Industrial Revolution in full swing, replacing individual artisans with machines, and you had a perfect storm for unhappy laborers. A group of increasingly displaced, unskilled textile workers weren’t going to take their obsolescence lying down – they organized into a paramilitary group and executed a sustained, well-organized temper tantrum in which they destroyed machines like looms and mills that were replacing and obviating their manual labor.

This went on for a few years and had the kind of ending that only a Bond villain could love. The British government smashed the movement, executed key participants, and progress marched on, as it inevitably does. This is indeed a story with no real hero. Instead of adapting or undertaking a campaign of ideas, a group of people attempted futilely to “stop the motor of the world”, and, in response, another group of people broke their spines. The theme of two potentially sympathetic interests acting in ways that render their plights completely unsympathetic via collateral damage is worth noting.

Ludditism Through the Years

The Luddites gave a name to an age old phenomenon (albeit one that is seen more often as the pace of technological advancement increases): man fighting in vain against his own obsolescence. Here’s an interesting look at the way that this can affect just one particular industry:

  1. Late 1800’s: Composers worry that the phonograph, which allows songs to be recorded in short bursts, will obsolete live musical performance and the long, elaborate symphony
  2. Early 1900’s: Record producers (makers of phonograph “cylinders”) worry that the advent of radio will destroy their livelihood
  3. Mid 1900’s: Radio industry worries that the advent of television will destroy their business
  4. 1980’s: Record labels worry that the tape, which allows home recording, will devastate their business
  5. 1990’s: Record labels worry that the CD burner will devastate their business
  6. 2000’s: Record labels worry that the internet and streaming will devastate their business

Now for the most part, these concerns and others like them were “Luddite” in the common, vernacular, anti-technology sense, rather than in the scorched-earth sense of the actual Luddite movement. The cycle is predictable and repeatable – a new thing comes out, and the current top dog is worried. Sometimes the top dog adjusts and adapts (radio), and sometimes he dies out (phonograph, cassette tape). But whatever happens to the top dog, progress marches inexorably onward.

Adapting By Anachronism

In the list above, you’ll notice a constant over the last 30 years – the RIAA is afraid of and upset about everything that happens. Previously, the cycle involved some new medium replacing or gaining a foothold beside an existing one. That pattern continues through the entire list, but the outlier is that one and the same entity is now afraid of everything that happens. Why is this…?

I would argue that it’s because record labels themselves are anachronistic throwbacks that exist only because of previous power and influence. They are exclusively in the business of keeping themselves in business via influence peddling and manipulation, providing no relevant product or service. Let’s consider the history of music. For basically all of recorded history until very recently, musicians were exclusively performance artists, like jugglers or an acting troupe. They sold services, not consumable goods. Wandering minstrels, court composers, and random people with good voices played for whoever would listen, royalty, and friends/family, respectively. Musicians occupied a place in society that ranged from plebeian to whatever equivalent for “professional” or “middle class” existed at that point in history.

This all changed when the invention of recorded music turned music into a commodity good rather than a service. At first, the change was subtle, but as music changed from something people had to make for themselves or their neighbors to a specialized good, a dramatic new paradigm emerged. Companies realized that they could leverage economies of scale to bring the same music into every living room in America or even the world, and they subsequently realized the star power that came with doing such a thing. Court composers like Mozart had nothing on Elvis Presley. The upper crust had to go to see the famous Mozart perform, if they could get a ticket, but anyone with a dollar could enjoy The King (title very apropos here). Record labels went from selling vinyl discs with music on them to selling dreams, and musicians went from travelling charlatans, court employees, or volunteer music lovers to Kings (Elvis Presley) and Gods (the Beatles being “more popular than Jesus”) overnight, in a historical context.

With this paradigm shift, the makers of music had unprecedented power and influence via money and widespread message, and the facilitators of this, record labels, had control over that message and influence. It was the golden age of music as a commodity, where its participants were kings and kingmakers, and the audience served as their subjects. All of this was made possible by economies of scale. Not just anyone had the ability to record music and distribute it nationally or internationally. That required expensive equipment, connections, and marketing prowess. The kings were beholden to the kingmakers to anoint them and grant them stardom. The subjects were beholden to the kingmakers to bring them their kings, what with the everyman musician being a casualty to this form of specialized labor (meaning the days where someone in every family could play a mean piano were now obsolete since music could be purchased for a buck an album).

But then a funny thing started to happen. The everyman musician started to make a comeback, not only for the love of the music, but also for the lottery-dream of being anointed as a king. Generations of children grew up listening to Elvis, The Beatles, Led Zeppelin, Michael Jackson, etc., and wanting the lifestyle of untold riches and adulation. Home music-making equipment proliferated and improved and rendered the studio a luxury rather than a necessity. The internet exploded onto the scene, and suddenly distribution channels were irrelevant. Sites like YouTube and Twitter emerged in “web 2.0”, and now even marketing was available to the masses. Every service provided by the kingmakers when they were making kings was now available to the unwashed, plebeian masses. Music is very much coming full circle, back to the point where heavily marketed, auto-tuned pop stars who play no instrument are unnecessary and the average person, with a computer and an instrument or his or her vocals, is perfectly capable of producing music. And, that situation is back with a vengeance, since that person is now also capable of sharing that music with the world. Untold riches, record executives, destroyed hotel rooms and temper tantrums aren’t necessary – just an internet connection and an idea.

The only thing that the kingmakers have left at this point is the accumulated money, power and influence from bygone glory days. They are obsolete. And, like anyone concerned about his own obsolescence, they have entered the camps of the Luddites. At first, it was the vernacular ludditism – whining vociferously about new technology dooming them, spreading FUD about the technology, etc. But now, to tie it all back to SOPA, they are taking the step of becoming actual, scorched-earth Luddites. SOPA isn’t actually about stopping piracy. It’s about destroying the thing that has removed their authority to make kings – the thing that has revealed them as unnecessary, out-of-touch emperors wearing no clothes. SOPA isn’t about fairness – it’s about breaking into the metaphorical mill and taking a sledgehammer to the metaphorical loom. It’s about destroying the thing that is replacing them: the internet.

The music industry is perhaps the most obvious example, but this applies in other places as well, in that the internet’s very distributed, collaborative nature has created a cabal of outfits that wish the internet would go away. Book publishers (again, distribution economies of scale), older gaming companies, the movie industry, etc. are all in relatively similar boats, where instant, massive collaboration is replacing their reason for existing.

Vindictive and Scary, but Doomed to Failure

There is one key difference with today’s Luddites versus the original ones. The original ones were opposed by their government and eventually smashed by it. Today’s Luddites are using lobbyist influence in an attempt to purchase the government and use it to execute the destruction of the looms. Instead of oppositional forces, the government and the Luddites are threatening to team up to unleash destruction. But in the end, it doesn’t matter. Government or not, Luddites or not, progress marches onward inexorably.

We as techies, and most people as the consuming public, will gnash our teeth and (rightly so) do everything we can to head off the destruction that the Luddites intend, but we may ultimately fail to stop a specific equipment-smashing. What they can’t smash, though, is the fact that the technology has been discovered and used and can be re-created, modified, adapted and perfected. At some point, their sledgehammers will dull, their resolve will weaken, and they’ll be relegated to the dustbin of history.