DaedTech

Stories about Software


TDD: Simplest is not Stupidest

Where the Message Gets Lost In Teaching TDD

I recently answered a programmers’ stackexchange post about test-driven development. (As an aside, it will be cool when me linking to a SE question drives more traffic their way than them linking to me drives my way 🙂 ). As I’m wont to do, I said a lot in the answer there, but I’d like to expand a facet of my answer into a blog post that hopefully clarifies an aspect of Test-Driven Development (TDD) for people–at least, for people who see the practice the way that I do.

One of the mainstays of showcasing test-driven development is to show some extremely bone-headed ways to get tests to pass. I do this myself when I’m showing people how to follow TDD, and the reason is to drive home the point “do the simplest thing.” For instance, I was recently putting together such a demo and started out with the following code:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void IsEven_Returns_False_For_1()
{
    var inspector = new NumberInspector();
 
    Assert.IsFalse(inspector.IsEven(1));
}

public class NumberInspector
{
    public bool IsEven(int target)
    {
        return false;
    }
}

This is how the code looked after going from the “red” to the “green” portion of the cycle. When I used CodeRush to define the IsEven method, it defaulted to throwing NotImplementedException, which constituted a failure. To make it pass, I just changed that to “return false.”

The reason that this is such a common way to explain TDD is that the practice is generally being introduced to people who are used to approaching problems monolithically, as described in this post I wrote a while back. For people used to solving problems this way, the question isn’t, “how do I get the right value for one,” but rather, “how do I solve it for all integers and how do I ensure that it runs in constant time and is the modulo operator as efficient as bit shifting and what do I do if the user wants to do it for decimal types should I truncate or round or throw an exception and whoah, I’m freaking out man!” There’s a tendency, often fired in the forge of undergrad CS programs, to believe that the entire algorithm has to be conceived, envisioned, and drawn up in its entirety before the first character of code is written.

So TDD is taught the way it is to provide contrast. I show people an example like this to say, “forget all that other stuff–all you need to do is get this one test passing for this one input and just assume that this will be the only input ever, go, now!” TDD is supposed to be fast, and it’s supposed to help you solve just one problem at a time. The fact that returning false won’t work for two isn’t your problem–it’s the problem of you forty-five seconds from now, so there’s no reason for you to bother with it. Live a little–procrastinate!

You refine your algorithm only as the inputs mandate it, and you pick your inputs so as to get the code doing what you want. For instance, after putting in the “return false” and getting the first test passing, it’s pretty apparent that this won’t work for the input “2”. So now you’ve got your next problem–you write the test for 2 and then you set about getting it to pass, say with “return target == 2”. That’s still not great. But it’s better, it was fast, and now your code solves two cases instead of just the one.
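Concretely, that next cycle might look something like this (a sketch, following the conventions of the earlier test):

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void IsEven_Returns_True_For_2()
{
    var inspector = new NumberInspector();

    Assert.IsTrue(inspector.IsEven(2));
}

// The simplest change to NumberInspector that gets both tests passing:
public bool IsEven(int target)
{
    return target == 2;
}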

Running off the Rails

But there is a tendency, I think, as demonstrated by Kristof’s question, for TDD teachers to give the wrong impression. If you were to write a test for 3, “return target == 2” would pass, and you might move on to 4. What do you do at 4? How about “return target == 2 || target == 4;”?

So far we’ve been moving in a good direction, but if you take off your “simplest thing” hat for a moment and think about basic programming and math, you can see that we’re starting down a pretty ominous path. After throwing a 6 and an 8 into the or clause, you might simply decide to use a loop to iterate through all even numbers up to int.MaxValue, or-ing the result of each comparison into a return flag to see if target is any of them.

public bool IsEven(int target)
{
    bool isEven = false;
    for (int index = 0; index < int.MaxValue - 1; index += 2)
        isEven |= target == index;
    return isEven;
}

Yikes! What went wrong? How did we wind up doing something so obtuse following the red-green-refactor principles of TDD? Two considerations, one reason: "simplest" isn't "stupidest."

Simplicity Reconsidered

The first consideration is that simple-complex is not measured on the same scale as stupid-clever. The two have a case-by-case, often interconnected relationship, but simple and stupid aren't the same just as complex and clever aren't the same. So the fact that something is the first thing you think of or the most brainless thing that you think of doesn't mean that it's the simplest thing you could think of. What's the simplest way to get an empty boolean method to return false? "return false;" has no branches and one hardcoded piece of logic. What's the simplest way that you could get a boolean method to return false for 1 and true for 2? "return target == 2" accomplishes the task with a single conditional of incredibly simple math. How about false for 1 and true for 2 and 4? "return target % 2 == 0" accomplishes the task with a single conditional of slightly more involved math. "return target == 2 || target == 4" accomplishes the task with a single conditional containing two clauses (could also be two conditionals). Modulo arithmetic is more elegant/sophisticated, but it is also simpler.
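To put the comparison in code, here are the two green-phase candidates side by side (a sketch; both pass tests for 1, 2, and 4):

// Stupidest: grows a clause for every new input you test.
public bool IsEven(int target)
{
    return target == 2 || target == 4;
}

// Simplest: one conditional with slightly more involved math, and it
// happens to hold for every integer, not just the inputs tested so far.
public bool IsEven(int target)
{
    return target % 2 == 0;
}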

Now, I fully understand the importance in TDD of proceeding methodically and solving problems in cadence. If you can't think of the modulo solution, it's perfectly valid to use the or condition and put in another data point such as testing for IsEven(6). Or perhaps you get all tests passing with the more obtuse solution and then spend the refactor phase refining the algorithm. Certainly nothing wrong with either approach, but at some point you have to make the jump from obtuse to simple, and the real "aha!" moment with TDD comes when you start to recognize the fundamental difference between the two, which is what I'll call the second consideration.

The second consideration is that "simplest" advances an algorithm where "stupidest" does not. To understand what I mean, consider this table:

[Table: ConditionalClauseChart, showing another conditional clause added to the method with each new test]

In every case that you add a test, you're adding complexity to the method. This is ultimately not sustainable. You'll never wind up sticking code in production if you need to modify the algorithm every time a new input is sent your way. Well, I shouldn't say never--the Brute Forces are busily cranking these out for you to enjoy on the Daily WTF. But you aren't Brute Force--TDD isn't his style. And because you're not, you need to use either the green or refactor phase to do the simplest possible thing to advance your algorithm.

A great way to do this is to take stock after each cycle, before you write your next failing test, and clarify to yourself how you’ve gotten closer to being done. After the green-refactor, you should be able to note a game-changing refinement. For instance:

[Table: MilestoneTddChart, noting the refinement gained after each red-green-refactor cycle]

Notice the difference here. In the first two entries, we make real progress. We go from no method to a method and then from a method with one hard-coded value to one that can make the distinction we want for a limited set of values. On the next line, our gains are purely superficial--we grow our limited set from distinguishing between 2 values to 3. That's not good enough, so we can use the refactor cycle to go from our limited set to the complete set.

It might not always be possible to go from limited to complete like that, but you should get somewhere. Maybe you somehow handle all values under 100 or all positive or all negative values. Whatever the case may be, it should cover more ground and be more general. Because really, TDD at its core is a mechanism to help you start with concrete test cases and tease out an algorithm that becomes increasingly generalized.

So please remember that the discipline isn't to do the stupidest or most obtuse thing that works. The discipline is to break a large problem into a series of comparably simple problems that you can solve in sequence without getting ahead of yourself. And this is achieved by simplicity and generalizing--not by brute force.


Productivity Add-Ins: Bruce Lee vs Batman

In the last few months, I’ve seen a number of tweets and posts decrying or at least cautioning against the use of productivity tools (e.g. CodeRush, ReSharper, and JustCode). The reasoning behind this is generally some variant of the notion that such tools are more akin to addictive drugs than to sustainable life improvements. Sure, the productivity tool is initially a big help, but soon you’re useless without it. If, on the other hand, you had just stuck to the simple, honest, clean living of regular development, you might not have reached the dizzying highs of the drug, but neither would you have experienced crippling dependency and eventual rock bottom. The response from those who disagree is also common: insistence that use doesn’t equal dependence and the extolling of the virtues of such tools. Predictably, this back and forth usually degenerates into Apple v. Android or Coke v. Pepsi.

Before this degeneration, though, the debate has some fascinating overtones. I’d like to explore those overtones a bit to see if there’s some kind of grudging consensus to be reached on the subject (though I recognize that this is probably futile, given the intense cognitive bias of endowment effect on display when it comes to add-ins and enhancements that developers use). At the end of the day, I think both outlooks are born out of legitimate experience and motivation and offer lessons that can at least lead to deciding how much to automate your process with eyes wide open.

Also, in the interests of full disclosure, I am an enthusiastic CodeRush fan, so my own personal preference falls heavily on the side of productivity tools. And I have plugged for it in the past too, though mainly for its static analysis purposes rather than any code generation. That being said, I don’t actually care whether people use these tools or not, nor do I take any personal affront to someone reading that linked post, deciding that I’m full of crap, and continuing to develop sans add-ins.

The Case Against the Tools

There’s an interesting phenomenon that I’ve encountered a number of times in a variety of incarnations. In shops where there is development of one core product for protracted periods of time, you meet workaday devs who have not actually clicked “File->New” (or whatever) and created a new project in months or years. Every morning they come in at 9, every evening they punch out at 5, and they know how to write code, compile it, and run the application, all with plenty of support from a heavyweight IDE like Eclipse or Visual Studio. The phenomenon that I’ve encountered in such situations is that occasionally something breaks or doesn’t work properly, and I’ve said, “oh, well, just compile it manually from the command line,” or, “just kick off the build outside of the IDE.” This elicits a blank stare that communicates quite effectively, “dude, wat–that’s not how it works.”

When I’ve encountered this, I find that I have to then have an awkward conversation with a non-entry-level developer where I explain the OS command line and basic program execution; the fact that the IDE and the compiler are not, in fact, the same thing; and other things that I would have thought were pretty fundamental. So what has gone wrong here? I’d say that the underlying problem is a classic one in this line of work–automation prior to understanding.

Let’s say that I work in a truly waterfall shop, and I get tired of having to manually destroy all code when any requirement changes so that I can start over. Watching for Word document revisions to change and then manually deleting the project is a hassle that I’ve lived with for one day too long, so I fire up the plugin project template for my IDE and write something that runs every time I start. This plugin simply checks the requirements document to see if it has been changed and, if so, it deletes the project I’m working on from source control and automatically creates a new, blank one.

Let’s then say this plugin is so successful that I slap it into everyone’s IDE, including new hires. And, as time goes by, some of those new hires drift to other departments and groups, not all of which are quite as pure as we are in their waterfall approach. It isn’t long before some angry architect somewhere storms over, demanding to know why the new guy took it upon himself to delete the entire source control structure and is flabbergasted to hear, “oh, that wasn’t me–that’s just how the IDE works.”

Another very real issue that something like a productivity tool, used unwisely, can create is to facilitate greatly enhanced efficiency at generating terrible code (see “Romance Author”). A common development anti-pattern (in my opinion) that makes me wince is when I see someone say, “I’m sure generating a lot of repetitive code–I should write some code that generates this code en masse.” (This may be reasonable to do in some cases, but often it’s better to revisit the design.) Productivity tools make this much easier and thus more tempting to do.

The lesson here is that automation can lead to lack of understanding and to real problems when the person benefiting doesn’t understand how the automation works or if and why it’s better. This lack of understanding leads to a narrower view of possible approaches. I think a point of agreement between proponents and opponents of tools might be that it’s better to have felt a pain point before adopting the cure for it rather than just gulping down pain medication ‘preventatively’ and lending credence to those saying the add-ins are negatively habit-forming. You shouldn’t download and use some productivity add-in because all of the cool kids are doing it and you don’t want to be left out of the conversations with hashtag #coderush.

The Case for the Tools

The argument from the last section takes at face value the genuine concerns of those making it and lends them the benefit of the doubt that their issue with productivity tools is truly concern for bad or voodoo automation. And I think that requires a genuine leap of faith. When it comes to add-ins, I’ve noticed a common thread between opponents of them and opponents of unit tests/TDD–often the most vocal and angry opponents are the ones that have never tried it. This being the case, the waters become a little bit muddied since we don’t know from case to case if the opponent has consistently eschewed them because he really believes his arguments against them or if he argues against them to justify retroactively not having learned to use them.

And that’s really not a trivial quibble. I can count plenty of detractors that have never used the tools, but what I can’t recall is a single instance of someone saying, “man, I used CodeRush for years and it really made me worse at my job before I kicked the habit.” I can recall (because I’ve said) that it makes it annoying for me to use less sophisticated environments and tooling, but I’d rather the tide rise and lift all of the boats than advocate that everybody use notepad or VI so that we don’t experience feature envy if we switch to something else.

The attitude that results from “my avoidance of these tools makes me stronger” is the main thing I was referring to earlier in the post when I mentioned “fascinating overtones.” It sets the stage for tools opponents to project a mix of rugged survivalist and Protestant Work Ethic. Metaphorically speaking, the VI users of the world sleep on a bed of brambles because things like beds and not being stabbed while you sleep are for weaklings. Pain is gain. You get the sense that these guys refuse to eat anything that they didn’t either grow themselves or shoot using a homemade bow and arrow fashioned out of something that they grew themselves.

But when it comes to programming (and, more broadly, life, but I’ll leave that for a post in a philosophy blog that I will never start) this affect is prone to reductio ad absurdum. If you win by leaving productivity tools out of your Visual Studio, doesn’t the guy who uses Notepad++ over Visual Studio trump you since he doesn’t use Intellisense? And doesn’t the person who uses plain old Notepad trump him, since he doesn’t come to rely on such decadent conveniences as syntax highlighting and auto-indentation? And isn’t that guy a noob next to the guy who codes without a monitor the way that really, really smart people in movies play chess without looking at the board? And don’t they all pale in comparison to someone who lives in a hut at the North Pole and sends his hand-written assembly code via carrier pigeon to someone who types it into a computer and executes it (he’s so hardcore that he clearly doesn’t need the feedback of running his code)? I mean, isn’t that necessary if you really want to be a minimalist, 10th degree black-belt, Zen Master programmer–to be so productive that you fold reality back on itself and actually produce nothing?

The lesson here is that pain may be gain when it comes to self-growth and status, but it really isn’t when the pain makes you slower and people are paying for your time. Avoiding shortcuts and efficiency so that you can confidently talk about your self reliance not only fails as a value-add, but it’s inherently doomed to failure since there’s always going to be some guy that can come along and trump you in that “disarms race.” Doing without has no intrinsic value unless you can demonstrate that you’re actively being hampered by a tool.

So What’s the Verdict?

I don’t know that I’ve covered any ground-breaking territory except to point out that both sides of this argument have solutions to real but different problems. The minimalists are solving the problem of specious application of rote procedures and lack of self-reliance while the add-in people are solving the problem of automating tedious or repetitive processes. Ironically, both groups have solutions for problems that are fundamental to the programmer condition (avoiding doing things without knowing why and avoiding doing things that could be done better by machines, respectively). It’s just a question of which problem is being solved when and why it’s being solved, and that’s going to be a matter of discretion.

Add-in people, be careful that you don’t become extremely proficient at coding up anti-patterns and doing things that you don’t understand. Minimalist people, recognize that tools that others use and you don’t aren’t necessarily crutches for them. And, above all, have enough respect for one another to realize that what works for some may not work for others. If someone isn’t interested in productivity tools or add-ins and feels more comfortable with a minimalist setup, who are any of us to judge? I’ve been using CodeRush for years, and I would request the same consideration–please don’t assume that I use it as a template for 5,000 line singletons and other means of mindlessly generating crap.

At the end of the day, whether you choose to fight bad guys using only your fists, your feet, and a pair of cut-off sweat shorts or whether you have some crazy suit with all manner of gizmos and gadgets, the only important consideration when all is said and done is the results. You can certainly leave an unconscious and dazed pile of ne’er-do-wells in your wake either way. Metaphorically speaking, that is–it’s probably actually soda cans and pretzel crumbs.


An Interface Taxonomy

I generally subscribe to the notion that a huge (and almost universally underrated) part of being a software developer is coming up with good names for things. The importance of naming concepts ranges from the macro-consideration of what to call languages or frameworks (how about “.NET” and “javascript” as preposterously confusing naming-fails) to the application level (I’m looking at you, GIMP for Linux) all the way down to the micro-level of methods and variables:

//u dnt typ lk this if u wnt ppl 2 undrstnd u, do u?
public string Trnstr(string vl2trn)
{
    var t_vrbl_apse = GtTrncStrn(vl2trn);
    return t_vrbl_apse;
}

So in the interest of clarifying terms and concepts that we discuss, I’d like to suggest a taxonomy of interfaces. As I’m defining them, these terms are not mutually exclusive so a game of “which kind is this” might yield more than one right answer. Also, it almost goes without saying that this is not comprehensive (the only reason I’m saying it is as a disclaimer 🙂 ). I’m really just trying to get the ball rolling here to establish a working, helpful lexicon. If you know of some kind of already-existing taxonomy like this or simply have suggestions for ones I missed, please weigh in with comments.

Characteristic Interfaces

Characteristic interfaces are interfaces used to express runtime capabilities of a type. Examples in C# include ISerializable, ICloneable, IEnumerable, etc. The author of the class is pulling in some behaviors that are ancillary features of the type rather than the main course. You might have a Customer object that happened to be cloneable and serializable, but those facts wouldn’t be prominently mentioned when you were explaining to someone the purpose of the object. You’re likely to see these as method parameters and return types since this is a favorable alternative to casting for expressing interest in some set of objects with features in common.
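As a quick sketch of the idea (the Customer type here is hypothetical):

public class Customer : ICloneable
{
    public string Name { get; set; }

    // Cloneability is an ancillary capability, not the point of the type.
    public object Clone()
    {
        return new Customer() { Name = Name };
    }
}

// A method expressing interest in "anything cloneable" without casting:
public object Duplicate(ICloneable source)
{
    return source.Clone();
}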

Client Interfaces

Client interfaces are used to guide users of your code as to what functionality they need to supply. This might include an interface called IValidateOrders with a Validate() method taking an order parameter, for instance. Your code understands orders and it understands that they need to be validated, but it leaves it up to client implementations to supply that validation logic. With inheritance falling out of favor and composition on the rise, this has become a much more common way to interact with arbitrary client code.
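A minimal sketch of that arrangement (Order and the implementing class are made up for illustration):

public class Order
{
    public int Quantity { get; set; }
}

// Your code declares the functionality it needs...
public interface IValidateOrders
{
    bool Validate(Order order);
}

// ...and arbitrary client code supplies the logic.
public class InStockValidator : IValidateOrders
{
    public bool Validate(Order order)
    {
        return order.Quantity > 0;
    }
}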

Documentation Interfaces

This is a type of interface that exists mainly for write-time documentation rather than any kind of build or run-time considerations. You see this in situations where the actual implementation of the interface results in no runtime polymorphism, unit-testing, or flexibility concerns. The interface is defined simply so that other developers will know what methods need to be written if/when they define an implementation or else to provide readability cues by having a quick list of what the class does to the right of a colon next to its name. An example would be having Car : IMoveAround where the interface simply has one method called “Move()”. Obviously, any documentation interface will lose its status as soon as someone introduces runtime polymorphism (e.g. some kind of class factory or configurability with an IoC container), but you might describe it this way up until that point or, perhaps even after, if it was clearly intended for documentation purposes.
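Using the example from the paragraph above, sketched minimally:

public interface IMoveAround
{
    void Move();
}

// No factory, no IoC container, no test double. The interface exists so
// that a reader sees the colon and knows at a glance what Car is about.
public class Car : IMoveAround
{
    public void Move()
    {
    }
}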

Identity Interfaces

Conceptually, these are the opposite of Characteristic interfaces. These are interfaces that an implementing type uses to define itself. Examples might include implementers of IRepository<T>, IFileParser, or INetworkListener. Implementing the interface pretty much wears out the type’s allowance for functionality under the Single Responsibility Principle. Interestingly enough, it’s perfectly reasonable to see a type implement an Identity Interface and a Characteristic Interface (service classes implementing IDisposable come immediately to mind).
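For instance, a type might define itself with an identity interface while picking up a characteristic one on the side (a hypothetical sketch, reusing the Customer type from the earlier sketch):

public interface IRepository<T>
{
    void Save(T item);
}

public class CustomerRepository : IRepository<Customer>, IDisposable
{
    // The identity: this type is a repository of customers, full stop.
    public void Save(Customer item)
    {
        // persistence logic would live here
    }

    // The characteristic: disposability is incidental to what the type is.
    public void Dispose()
    {
    }
}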

Seam Interfaces

A seam interface is one that divides your application in swappable ways. A good example would be a series of repositories that make use of IDataAccess implementations. The dependency of repositories on schemes for accessing data is inverted, allowing easy testing of the repositories and simple runtime configuration of different data access schemes. As a concrete example, an architecture using a seam interface between repository and data access could switch from using a SQL Server database to a flat file structure or a document database by altering only a few lines of wireup code or XML. Seam interfaces provide points at which applications are easy to test and change.
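Here’s roughly what that seam might look like (all names besides IDataAccess are illustrative):

public interface IDataAccess
{
    string Fetch(string key);
}

public class OrderRepository
{
    private readonly IDataAccess _dataAccess;

    // The repository depends on the abstraction, not on SQL Server,
    // flat files, or a document database.
    public OrderRepository(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public string FindOrder(string orderId)
    {
        return _dataAccess.Fetch(orderId);
    }
}

Swapping from SQL Server to a flat file or a document database then means changing which IDataAccess implementation gets wired up, not touching the repository.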

Speculative Interfaces

Speculative interfaces are sort of the dark side of seam interfaces–they’re unrealized seams. In our previous example, the IDataAccess interface would be speculative if there were only one type of persistence and that was going to be the case for the foreseeable future. The interface is still providing a seam for testing, but now the complexity that it introduces is of questionable value since there aren’t multiple implementations, and it could be argued that you’re simply introducing complexity in violation of YAGNI. It’s generally easiest to identify speculative interfaces by the naming scheme of the interfaces and their single implementation: Foo and IFoo, LookupService and ILookupService, etc. (This latter naming example is specific to C#–I don’t know exactly how people commonly name speculative interfaces in other languages or if there is a consistent naming scheme at all, absent the C#-specific Hungarian notation for interfaces.)
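The telltale pattern, sketched:

// One interface, one implementation, names differing only by the "I":
// a seam with nothing on the other side of it.
public interface ILookupService
{
    string Lookup(int id);
}

public class LookupService : ILookupService
{
    public string Lookup(int id)
    {
        return id.ToString();
    }
}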


10 Terms You Should Know

Apropos of nothing, I’d like to make a very brief post of a list of terms, with links to where you can read more about them. These terms describe concepts that, when understood, I believe can help make you sharper in the way you approach things as a programmer. I don’t mean that learning them will immediately boost your productivity or solve any specific problem that you have, but I do feel that they may give you new ideas for designs or methods for reasoning about your code. So give them a read or at least a glance to brush up or learn something new.

(This list is by no means an attempt to be comprehensive, and please feel free to suggest others in the comments)

  1. Leaky Abstractions
  2. RESTful Architecture
  3. Lambda Expressions
  4. Aspect Oriented Programming (AOP)
  5. Duck Typing
  6. Command Query Responsibility Segregation (CQRS)
  7. NoSQL
  8. Early and Late Binding
  9. Immutability
  10. Cache Coherence

Hopefully there’s at least one relatively new concept in here that you can let percolate in your brain for a while. And who knows, perhaps it will help you consider an alternate solution to some problem you face in the future.

Cheers!


Comments: Here’s my Code and I’m Sorry

Heavily Commented Code: The Awful Empathy

Imagine a feeling–that empathy you get when you’re watching someone fail in front of an audience. Perhaps it’s a comedian relying on awful puns and props or a person in a public speaking course who just freezes after forgetting some lines. Ugh. It’s so awkward that it hurts. It’s almost like you’re up there with them. You start mumbling to yourself, “please, just be funny,” or, “please, just remember your lines.”

The agony of sustained failure in such a situation doesn’t come on all at once. It creeps up on you in a crescendo of awful empathy and becomes memorable as it approaches the crest. But it starts subtly, before you realize it. There are warning signs that the train is unstable before it actually pops off of the rails. A classic warning sign is the “pre-excuse.”

Think of how you feel when someone gets up to talk at a meeting or give a presentation, and they start off with something like, “okay, this isn’t really finished, and Jones was supposed to help, but he got assigned to the Smith contract, and I did things in here that I’m not proud of, and…” With every clause that the speaker tacks onto the mounting list of reasons that you’re going to hate it, you feel your discomfort mounting–so much so that you may even get preemptively angry or impatient because you know he’s going to bomb and you’re about to feel that tooth-grinding failure-empathy.

Okay. Now that the stage is set and we’re imagining the same feeling, know that this is how I feel when I open a code file and see the green of comments (this is the color of comments in all my IDEs) prominently in a file. It’s as though the author of the code is on a stage in front of me, and he’s saying, “okay, so this is probably not very clear, and some of this is actually completely wrong, and changing this would be a nightmare, and you might want a beer or two, heh heh, but really, this will make more sense if you’re drunk, and, you know what, I’m sorry, really, really sorry because this is just, well, it’s just… it’s complete garbage. Sorry.”

That might seem a bit harsh, but think of what you’re looking at when you see code with a comment to actual code ratio approaching 1:1. You’re looking at a situation where someone needed to spend as much space and probably as much time trying to tell you what the code says as writing the code. Why would someone do that? Why would someone write a bunch of code and then write a bunch of English explaining to someone fluent in code what the code does? This is like me sending you an email in Spanish and putting the English equivalent after every sentence. I would do it if one of the two of us didn’t speak Spanish well or at all. And that’s how I feel when I see all those comments–either you don’t speak code very well or you think that I don’t speak code very well. The former occurs a lot with people who program haphazardly by coincidence. (“I better write this down in a comment because I had no idea that’s what an array was for. Who knew?”) The latter generates mind-numbing comments that rot. (“Declares an int called x and initializes it to 6.”) If you aren’t being forced to write comments by some kind of style policy and you’re not Zorro, you’re writing things in English because you’re not bothering to write and illuminate them in Code (I’m using the uppercase to distinguish simply writing some kind of compiling code from writing Code that communicates).

Self-Fulfilling Prophecy

There’s a substantial cross section of the developer world that thinks diligently commenting code is not only a best practice, but also table stakes for basic caring and being good at your craft. As I’ve previously explained, I used to be one of those people. I viewed writing comments in the same way that I view shutting drawers when I’m done using them or making my bed–just grunt work that’s unavoidable in life if you want to be a person that’s organized. Interestingly, I never really viewed them as particularly communicative, and, since adopting TDD and writing tiny methods, I viewed them largely as superfluous except for occasionally explaining some kind of political decision that transcended code (documenting APIs that you’ll release for public consumption is also an exception, as this becomes part of your deliverable product). But I started to get increasingly disillusioned with the way comments would look in group code.

I would return to a method that I’d written six months earlier and for which I’d put a very clear doc comment, only to see something like this:

/// Do something with X
/// Never ever ever ever accept a negative value X
/// or millions of the most adorable puppies you've ever seen
/// will be senselessly butchered!!!!!!!
private void DoSomethingWithX(int x)
{
    //if(x < 0)
    // throw new ArgumentException("You monster.", "x");

    int y = x; //No matter what, do not let y be 15!!!!
    y = 15;
    PossibleDoom(y);
    PossibleDoom(x);
}

Holy crap! We're clearly playing Russian Roulette here. Did the requirements change and we're no longer endangering puppies? Is this code causing terrible things to happen? Who wrote that comment about 15? Hopefully not the same person that wrote the next line! And what should this code do--what the various comments say or what it actually does? Do the people responsible even work here anymore?

I'm pretty sure that anyone reading is chuckling and nodding sympathetically right now. You can only return to a method that you've written and see this sort of thing so many times before you come to a few conclusions:

  1. People almost never read your comments or any comments--they're just a step above contract legalese as far as pointless noise in our lives goes.
  2. Even if someone does read your comments, they certainly won't fix them along with the code they're changing.
  3. On a long enough timeline, your comments will all become dirty, confusing lies.
  4. If you want to minimize the degree to which you're a liar, minimize the comments that you write.

Surely this isn't really news to anyone, and it's probably answered with admonishments and mental notes to be more diligent about updating comments that everyone knows won't actually happen. So why is it then considered good form and often mandated to put lies into source control for the later 'benefit' of others? Why do we do this, discounting the bed-making/diligence/good-citizen motivation?

To answer that question, think of the kind of code where you see comments and the kind of code where you don't. If you see a four-line functional method with a single nested loop and no local variables, do you generally see comments in there? Probably not. How about a fifty line method with so many nested control structures that you need some kind of productivity add-in to know if you're scoped inside that else you saw earlier or another if, or maybe a while loop? Bingo--that's where comments go to hang out. Giant methods. Classes with lots of responsibilities and confusing internal state. Cryptically-named local variables. These things are all like cool, dank yards after a storm, sprouting explanatory comments like so many mushrooms. They sit there in mute testimony to the mildew-ridden, fungus-friendly conditions around them.
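To put some code to that, consider a made-up before-and-after (every name here is hypothetical):

// Before: English compensating for code that doesn't communicate.
// Give the discount if the customer has more than 10 orders and
// the account is at least a year old.
if (c.Orders.Count > 10 && (DateTime.Now - c.CreatedDate).TotalDays >= 365)
    ApplyDiscount(c);

// After: Code that says the same thing, no translation required.
if (customer.IsEligibleForLoyaltyDiscount())
    ApplyDiscount(customer);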

To put it another way, comments become necessary because the author isn't speaking Code well and punts, using English instead of fixing the code to be clear and expressive. Thus the comments are compensation for a lack of clarity. But they're more than that. They're an implied apology for the code as well. They're an apology for writing code and not Code. They're an apology for the fact that writing code and not Code results in the project being a legacy project before it's even done being written. They're an implied apology for big, lumbering classes, winding methods, confusing state, and other obfuscations of intent. But most of all, they're the preemptive, awkward-empathy-inducing, "hang onto your hat because what I'm doing here is actually nuts" pre-apologies/excuses to anyone with the misfortune of reading the code.

So please, I beg you–next time you find yourself thinking, “dude, nobody’s ever going to figure this wackiness out unless I spend a few sentences explaining myself,” don’t bother with the explanation. Bother instead to correct the “nobody’s ever going to figure this out” part. Good Code speaks for itself so that you can focus on more important things.