DaedTech

Stories about Software


Wherefore Thou Shalt Fail at Software Requirements

On Being Deliberately Unclear

For whatever reason lately, I’ve been drawing a lot of inspiration from Programmers’ Stack Exchange, and today brings another such post. However, unlike other questions, reading this one seriously made me sad. The question, the answers, its existence… all of it made me sad in the same kind of way that it’s vaguely depressing to think of how many hours of our lives we lose to traffic jams.

I had a conversation with a coworker once that involved a good-natured disagreement over the role of legalese in our lives. My (cynical) contention was that legalese has evolved in purpose–has even self-bastardized over the years–to go from something designed to clarify communication to something that is specifically used to hinder communication. To put it another way, by freezing the language of law in pedantic and flowery period-speak from the 18th century, lawyers have effectively created a throwback to the feudal practice of having a language for the nobles and a different language for the commoners. This language of the lawmaking nobles builds a bit of social job security in a way that writers of inscrutable code have picked up on–if you make it so no one understands your work and yet everyone depends on your work, then they have no choice but to keep you around. (Lawyers also have the benefit of being able to make laws to force you to need lawyers, which never hurts.)

My colleague appropriately pointed out that the nature of jurisprudence is such that it makes sense for lawyers to phrase things in documents in such a way that they can point to decided case law as precedent for what they’re doing. In other words, it’s ironically pragmatic to use language that no one understands because the alternative is that you write it in language less than 200 years old but then have to systematically re-defend every paragraph in various lawsuits later. I conceded this as a fair point, but thought it pointed to a more systemic flaw. It seems to me that any framework for bargaining between two parties that inherently makes communication harder is, ipso facto, something of a failure. He disagreed, being of the opinion that this old language is actually more precise and clear than today’s language. At this point, we agreed to disagree. I’ve since pondered the subject of language rules, language evolution, and precision. (As an aside, this bit of back-story is how the conversation went to the best of my recollection, so the details may be a little rough around the edges. It is not my intention to misrepresent anyone’s positions.)

Requirement Shock and Awe

Because of this pondering, my strange brain immediately linked the SE post with this conversation from some months back. I worked for a company once where the project manager thunderously plopped down some kind of 200-page Word document, in outline form, that contained probably 17,000 eerily and boringly similar statements:

  • 1.17.94.6b The system shall display an icon of color pink at such time as the button of color red is clicked with the pointing device commonly known as a “mouse.”
  • 1.17.94.6c The system shall display an icon of color pink at such time as the button of color red is clicked with the pointing device commonly known as a “mouse,” 1st planet from Sol, Mercury, spins in retrograde, and the utilizer utilizes the menu option numbering 12.
  • 1.17.94.6d The system shall display an icon of color mauve at such time as the button of color purple is clicked with the pointing device commonly known as a “mouse.”
  • 1.17.94.6e The system shall require utilizers to change their underwear every half hour.
  • 1.17.94.6f The system shall require utilizers to wear underwear on the outside of their pants so that it can check.
  • 1.17.94.6g The system shall guarantee that all children under the age of 16 are now… 16.
  • 1.17.94.6h The system shalt not make graven images.

A week later, when I asked a clarifying question about some point in a meeting, he furrowed his brow in disappointment and asked if I had read the functional-requirement-elaboration-dohickey-spec-200-pager, to which I replied that I had done my best, but, much like the dictionary, I had failed at complete memorization. I stated furthermore that expecting anyone to gain a good working understanding of the prospective software from this thing was silly since people don’t picture hypothetical software at that level of granularity. And besides, the document will be woefully wrong by the time the actual software is written anyway.

While I was at it with the unpopular opinions, I later told the project manager (who wasn’t the one that wrote this behemoth–just the messenger) that the purpose of this document seemed not to be clarity at all, but a mimicking of legalese. I gathered that the author was interested primarily in creating some kind of “heads I win, tails you lose” situation. This juggernaut of an artifact ensured that everyone just kind of winged it with software development, but that when the barrel crashed at the bottom of the Waterfall, the document author could accuse the devs of not fulfilling their end of the bargain. Really, the document was so mind-numbing and confusing as to make understanding it a relative matter of opinion anyway. Having gotten that out in the open, I proceeded to write software for the release that was ahead of schedule and accepted, and I never looked at the intimidating requirements encyclopedia again. I doubt anyone ever did or will if they’re not looking for evidentiary support in some kind of blame game.

Well, time went by, and I filed that bit of silliness under “lol @ waterfall.” But there had been a coworker who was party to this conversation and who mentioned something about the RFCs, or at least the notion of common definition, when I had been on my soapbox. That is, “shall” was used because it meant something specific, say, as compared to “will” or “might” or “should.” And because those specific meanings, apparently defined for eternity in the hallowed and dusty annals of the RFC, will live on beyond the English language or even the human race, it is those meanings we shall (should? might? must?) use. I mention these RFCs because the accepted answer to that dreadful Stack Exchange question mentioned them, clearing up once and for all and to all, including Bill Clinton, what the definition of “is” is (at least in the case of future tenses simple, perfect, continuous, and perfect continuous).

The only problem is that the RFCs weren’t the first place that someone had trotted out a bit of self-important window-dressing when it came to simple verb usage. Lawyers had beaten them to the punch:

First, lawyers regularly misuse it to mean something other than “has a duty to.” It has become so corrupted by misuse that it has no firm meaning. Second—and related to the first—it breeds litigation. There are 76 pages in “Words and Phrases” (a legal reference) that summarize hundreds of cases interpreting “shall.” Third, nobody uses “shall” in common speech. It’s one more example of unnecessary lawyer talk. Nobody says, “You shall finish the project in a week.” For all these reasons, “must” is a better choice, and the change has already started to take place. The new Federal Rules of Appellate Procedure, for instance, use “must,” not “shall.”

“Shall” isn’t plain English… But legal drafters use “shall” incessantly. They learn it by osmosis in law school, and the lesson is fortified in law practice.

Ask a drafter what “shall” means, and you’ll hear that it’s a mandatory word—opposed to the permissive “may”. Although this isn’t a lie, it’s a gross inaccuracy… Often, it’s true, “shall” is mandatory… Yet the word frequently bears other meanings—sometimes even masquerading as a synonym of “may”… In just about every jurisdiction, courts have held that “shall” can mean not just “must” and “may”, but also “will” and “is”. Increasingly, official drafting bodies are recognizing the problem… Many… drafters have adopted the “shall-less” style… You should do the same.

Bringing things back full circle, we have lawyers misusing a variant of “to be”–one of the most basic imaginable concepts in the English language–in order to sound more official and to confuse and intimidate readers. We have the world’s middle-manager, pointy-haired types wanting to get in on that self-important action and adopting this language to stack the deck in CYA poker. And, mercifully, we have a group defining the new Federal Rules of Appellate Procedure that seems to value clarity over bombastic shows of rhetorical force designed to overpower objections and confuse skeptics. On the whole, we have a whole lot of people using language to impress, confuse, and intimidate, and a stalwart few trying to be precise.

A Failure of Both Clarity and Modernity

But believe it or not, the point of this post isn’t specifically to rag on people for substituting the odd “utilize” for “use,” sports-announcer-style, to try to sound smart. From academia to corporate meetings to dating sites (and courtrooms and requirements documents), that sort of thing is far too widespread for me to take it as a personal crusade. It might make me a hypocrite anyway to try. My point here is a more subtle one than that, and a more subtle one than pointing out the inherent futility, RFC or no RFC, of trying to get the world to agree to some made-up definitions of various tense forms of “to be.”

The meat of the issue is that using slightly different variants of common words to emphasize things is a very dated way of defining requirements. Think of this kind of nonsense: “well… shall means that it’s the developer’s responsibility, but shalt means that it’s an act of God, and will means that project management has to help, and optional means optional and blah, blah, blah…” What’s really going on here? I’d say what’s happening is some half-baked attempt to define concepts like “priority” and “responsible party” and “dependency.” You know, the kind of things that would do well in, say, a database, as fields. Pointy-haireds come for the semantic quibbling and they stay for the trumped-up buzz speak, so it winds up being harder than it should be to move away from a system where you get to bust out the fine verbal china and say impressive-sounding things like “shall.”

The real problem here is that this silliness is a holdover from days when requirements were captured in Word documents (or on typewriters) as long, flowery prose and evaluated as a gestalt. But with modern ALM systems, work items, traceability tools, etc., this is a colossal helping of fail. Imagine a requirement phrased as “when a user clicks the submit button, a JSON message is sent to the server.” That string can sit in a database along with a boolean that’s set to true when this accurately describes the world and false when it doesn’t. You want priority? How about an integer instead of bickering over whether it should be “JSON message {shall, will, ought to, might, mayhap, by-Jove-it-had-better, shalt, may, etc.} sent to the server”? When you’re coding, it’s an anti-pattern to parse giant chunks of text in a database field, so why isn’t it considered one to do it during requirements definition?
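
To make that concrete, here’s a minimal sketch of a requirement captured as data rather than prose. The class and property names are hypothetical and not taken from any particular ALM tool; the point is just that priority, responsibility, and status become queryable fields instead of verb-tense hairsplitting.

public class Requirement
{
    // "When a user clicks the submit button, a JSON message is sent to the server."
    public string Description { get; set; }

    // An integer beats arguing over "shall" versus "should" versus "by-Jove-it-had-better."
    public int Priority { get; set; }

    // Who owns it, and whether the description currently matches reality.
    public string ResponsibleParty { get; set; }
    public bool IsSatisfied { get; set; }
}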

We have far better tools for capturing and managing requirements than word processors, pointless arguments over misappropriated grammar, standards used to define ARPANET in the 60’s, and polyester bell-bottoms. It’s 2013, so let’s not define and track requirements like it’s 1999… or 1969. Ditch the RFCs, the waterfall, the BS Bingo talk, and the whole nine yards if you want to get things done. But if you’d prefer just to focus on figuring out who to blame when you fail, then you should/shall/will/must/might/shalt not change a thing.


Productivity Add-Ins: Bruce Lee vs Batman

In the last few months, I’ve seen a number of tweets and posts decrying or at least cautioning against the use of productivity tools (e.g. CodeRush, ReSharper, and JustCode). The reasoning behind this is generally some variant of the notion that such tools are more akin to addictive drugs than to sustainable life improvements. Sure, the productivity tool is initially a big help, but soon you’re useless without it. If, on the other hand, you had just stuck to the simple, honest, clean living of regular development, you might not have reached the dizzying highs of the drug, but neither would you have experienced crippling dependency and eventual rock bottom. The response from those who disagree is also common: insistence that use doesn’t equal dependence and the extolling of the virtues of such tools. Predictably, this back and forth usually degenerates into Apple v. Android or Coke v. Pepsi.

Before this degeneration, though, the debate has some fascinating overtones. I’d like to explore those overtones a bit to see if there’s some kind of grudging consensus to be reached on the subject (though I recognize that this is probably futile, given the intense cognitive bias of endowment effect on display when it comes to add-ins and enhancements that developers use). At the end of the day, I think both outlooks are born out of legitimate experience and motivation and offer lessons that can at least lead to deciding how much to automate your process with eyes wide open.

Also, in the interests of full disclosure, I am an enthusiastic CodeRush fan, so my own personal preference falls heavily on the side of productivity tools. And I have plugged it in the past, too, though mainly for its static analysis capabilities rather than any code generation. That being said, I don’t actually care whether people use these tools or not, nor do I take any personal affront to someone reading that linked post, deciding that I’m full of crap, and continuing to develop sans add-ins.

The Case Against the Tools

There’s an interesting phenomenon that I’ve encountered a number of times in a variety of incarnations. In shops where there is development of one core product for protracted periods of time, you meet workaday devs who have not actually clicked “File->New” (or whatever) and created a new project in months or years. Every morning they come in at 9, every evening they punch out at 5, and they know how to write code, compile it, and run the application, all with plenty of support from a heavyweight IDE like Eclipse or Visual Studio. The phenomenon that I’ve encountered in such situations is that occasionally something breaks or doesn’t work properly, and I’ve said, “oh, well, just compile it manually from the command line,” or, “just kick off the build outside of the IDE.” This elicits a blank stare that communicates quite effectively, “dude, wat–that’s not how it works.”

When I’ve encountered this, I find that I have to then have an awkward conversation with a non-entry-level developer where I explain the OS command line and basic program execution; the fact that the IDE and the compiler are not, in fact, the same thing; and other things that I would have thought were pretty fundamental. So what has gone wrong here? I’d say that the underlying problem is a classic one in this line of work–automation prior to understanding.

Let’s say that I work in a truly waterfall shop, and I get tired of having to manually destroy all code when any requirement changes so that I can start over. Watching for Word document revisions to change and then manually deleting the project is a hassle that I’ve lived with for one day too long, so I fire up the plugin project template for my IDE and write something that runs every time I start. This plugin simply checks the requirements document to see if it has been changed and, if so, it deletes the project I’m working on from source control and automatically creates a new, blank one.

Let’s then say this plugin is so successful that I slap it into everyone’s IDE, including new hires. And, as time goes by, some of those new hires drift to other departments and groups, not all of which are quite as pure as we are in their waterfall approach. It isn’t long before some angry architect somewhere storms over, demanding to know why the new guy took it upon himself to delete the entire source control structure and is flabbergasted to hear, “oh, that wasn’t me–that’s just how the IDE works.”

Another very real issue that something like a productivity tool, used unwisely, can create is facilitating greatly enhanced efficiency at generating terrible code (see “Romance Author”). A common development anti-pattern (in my opinion) that makes me wince is when I see someone say, “I’m sure generating a lot of repetitive code–I should write some code that generates this code en masse.” (This may be reasonable to do in some cases, but often it’s better to revisit the design.) Productivity tools make this much easier and thus more tempting to do.

The lesson here is that automation can lead to lack of understanding and to real problems when the person benefiting doesn’t understand how the automation works or if and why it’s better. This lack of understanding leads to a narrower view of possible approaches. I think a point of agreement between proponents and opponents of tools might be that it’s better to have felt a pain point before adopting the cure for it rather than just gulping down pain medication ‘preventatively’ and lending credence to those saying the add-ins are negatively habit-forming. You shouldn’t download and use some productivity add-in because all of the cool kids are doing it and you don’t want to be left out of the conversations with hashtag #coderush.

The Case for the Tools

The argument from the last section takes at face value the genuine concerns of those making it and lends them benefit of the doubt that their issue with productivity tools is truly concern for bad or voodoo automation. And I think that requires a genuine leap of faith. When it comes to add-ins, I’ve noticed a common thread between opponents of that and opponents of unit tests/TDD–often the most vocal and angry opponents are ones that have never tried it. This being the case, the waters become a little bit muddied since we don’t know from case to case if the opponent has consistently eschewed them because he really believes his arguments against them or if he argues against them to justify retroactively not having learned to use them.

And that’s really not a trivial quibble. I can count plenty of detractors that have never used the tools, but what I can’t recall is a single instance of someone saying, “man, I used CodeRush for years and it really made me worse at my job before I kicked the habit.” I can recall (because I’ve said) that it makes it annoying for me to use less sophisticated environments and tooling, but I’d rather the tide rise and lift all of the boats than advocate that everybody use notepad or VI so that we don’t experience feature envy if we switch to something else.

The attitude that results from “my avoidance of these tools makes me stronger” is the main thing I was referring to earlier in the post when I mentioned “fascinating overtones.” It sets the stage for tools opponents to project a mix of rugged survivalist and Protestant Work Ethic. Metaphorically speaking, the VI users of the world sleep on a bed of brambles because things like beds and not being stabbed while you sleep are for weaklings. Pain is gain. You get the sense that these guys refuse to eat anything that they didn’t either grow themselves or shoot using a homemade bow and arrow fashioned out of something that they grew themselves.

But when it comes to programming (and, more broadly, life, but I’ll leave that for a post in a philosophy blog that I will never start) this affect is prone to reductio ad absurdum. If you win by leaving productivity tools out of your Visual Studio, doesn’t the guy who uses Notepad++ over Visual Studio trump you since he doesn’t use Intellisense? And doesn’t the person who uses plain old Notepad trump him, since he doesn’t come to rely on such decadent conveniences as syntax highlighting and auto-indentation? And isn’t that guy a noob next to the guy who codes without a monitor the way that really, really smart people in movies play chess without looking at the board? And don’t they all pale in comparison to someone who lives in a hut at the North Pole and sends his hand-written assembly code via carrier pigeon to someone who types it into a computer and executes it (he’s so hardcore that he clearly doesn’t need the feedback of running his code)? I mean, isn’t that necessary if you really want to be a minimalist, 10th degree black-belt, Zen Master programmer–to be so productive that you fold reality back on itself and actually produce nothing?

The lesson here is that pain may be gain when it comes to self-growth and status, but it really isn’t when the pain makes you slower and people are paying for your time. Avoiding shortcuts and efficiency so that you can confidently talk about your self reliance not only fails as a value-add, but it’s inherently doomed to failure since there’s always going to be some guy that can come along and trump you in that “disarms race.” Doing without has no intrinsic value unless you can demonstrate that you’re actively being hampered by a tool.

So What’s the Verdict?

I don’t know that I’ve covered any ground-breaking territory except to point out that both sides of this argument have solutions to real but different problems. The minimalists are solving the problem of specious application of rote procedures and lack of self-reliance while the add-in people are solving the problem of automating tedious or repetitive processes. Ironically, both groups have solutions for problems that are fundamental to the programmer condition (avoiding doing things without knowing why and avoiding doing things that could be done better by machines, respectively). It’s just a question of which problem is being solved when and why it’s being solved, and that’s going to be a matter of discretion.

Add-in people, be careful that you don’t become extremely proficient at coding up anti-patterns and doing things that you don’t understand. Minimalist people, recognize that tools that others use and you don’t aren’t necessarily crutches for them. And, above all, have enough respect for one another to realize that what works for some may not work for others. If someone isn’t interested in productivity tools or add-ins and feels more comfortable with a minimalist setup, who are any of us to judge? I’ve been using CodeRush for years, and I would request the same consideration–please don’t assume that I use it as a template for 5,000 line singletons and other means of mindlessly generating crap.

At the end of the day, whether you choose to fight bad guys using only your fists, your feet, and a pair of cut-off sweat shorts or whether you have some crazy suit with all manner of gizmos and gadgets, the only important consideration when all is said and done is the results. You can certainly leave an unconscious and dazed pile of ne’er-do-wells in your wake either way. Metaphorically speaking, that is–it’s probably actually soda cans and pretzel crumbs.


An Interface Taxonomy

I generally subscribe to the notion that a huge (and almost universally underrated) part of being a software developer is coming up with good names for things. The importance of naming concepts ranges from the macro-consideration of what to call languages or frameworks (how about “.NET” and “javascript” as preposterously confusing naming-fails) to the application level (I’m looking at you, GIMP for Linux) all the way down to the micro-level of methods and variables:

//u dnt typ lk this if u wnt ppl 2 undrstnd u, do u?
public string Trnstr(string vl2trn)
{
    var t_vrbl_apse = GtTrncStrn(vl2trn);
    return t_vrbl_apse;
}

So in the interest of clarifying terms and concepts that we discuss, I’d like to suggest a taxonomy of interfaces. As I’m defining them, these terms are not mutually exclusive so a game of “which kind is this” might yield more than one right answer. Also, it almost goes without saying that this is not comprehensive (the only reason I’m saying it is as a disclaimer 🙂 ). I’m really just trying to get the ball rolling here to establish a working, helpful lexicon. If you know of some kind of already-existing taxonomy like this or simply have suggestions for ones I missed, please weigh in with comments.

Characteristic Interfaces

Characteristic interfaces are interfaces used to express runtime capabilities of a type. Examples in C# include ISerializable, ICloneable, IEnumerable, etc. The author of the class is pulling in some behaviors that are ancillary features of the type rather than the main course. You might have a Customer object that happened to be cloneable and serializable, but those facts wouldn’t be prominently mentioned when you were explaining to someone the purpose of the object. You’re likely to see these as method parameters and return types since this is a favorable alternative to casting for expressing interest in some set of objects with features in common.
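
As a quick sketch of the way a characteristic interface tends to show up in signatures rather than in a type’s description (the Customer and ArchiveService types here are invented for illustration):

using System;
using System.Collections.Generic;

public class Customer : ICloneable
{
    public string Name { get; set; }

    public object Clone()
    {
        return new Customer { Name = Name };
    }
}

public static class ArchiveService
{
    // The parameter cares only about the "cloneable" characteristic,
    // not about Customer or any other specific type.
    public static IEnumerable<object> Snapshot(IEnumerable<ICloneable> items)
    {
        foreach (var item in items)
            yield return item.Clone();
    }
}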

Client Interfaces

Client interfaces are used to guide users of your code as to what functionality they need to supply. This might include an interface called IValidateOrders with a Validate() method taking an order parameter, for instance. Your code understands orders and it understands that they need to be validated, but it leaves it up to client implementations to supply that validation logic. With the fall of popular inheritance structures and the rise of composition, this has become a much more common way to interact with arbitrary client code.
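
Here’s a minimal sketch of that arrangement; Order and the particular validation rule are invented for illustration:

public class Order
{
    public decimal Total { get; set; }
}

// Your code defines the contract it needs...
public interface IValidateOrders
{
    bool Validate(Order order);
}

// ...and client code supplies the actual validation logic.
public class MinimumTotalValidator : IValidateOrders
{
    public bool Validate(Order order)
    {
        return order != null && order.Total > 0;
    }
}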

Documentation Interfaces

This is a type of interface that exists mainly for write-time documentation rather than any kind of build or run-time considerations. You see this in situations where the actual implementation of the interface results in no runtime polymorphism, unit-testing, or flexibility concerns. The interface is defined simply so that other developers will know what methods need to be written if/when they define an implementation or else to provide readability cues by having a quick list of what the class does to the right of a colon next to its name. An example would be having Car : IMoveAround where the interface simply has one method called “Move()”. Obviously, any documentation interface will lose its status as soon as someone introduces runtime polymorphism (e.g. some kind of class factory or configurability with an IoC container), but you might describe it this way up until that point or, perhaps even after, if it was clearly intended for documentation purposes.
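
In code, that example might look something like the following, assuming nothing in the codebase ever actually refers to a variable of type IMoveAround:

using System;

public interface IMoveAround
{
    void Move();
}

// The ": IMoveAround" on the declaration is the documentation; no factory
// or IoC container ever hands this class out through the interface.
public class Car : IMoveAround
{
    public void Move()
    {
        Console.WriteLine("Rolling down the road.");
    }
}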

Identity Interfaces

Conceptually, these are the opposite of Characteristic interfaces. These are interfaces that an implementing type uses to define itself. Examples might include implementers of IRepository<T>, IFileParser, or INetworkListener. Implementing the interface pretty much wears out the type’s allowance for functionality under the Single Responsibility Principle. Interestingly enough, it’s perfectly reasonable to see a type implement an Identity Interface and a Characteristic Interface (service classes implementing IDisposable come immediately to mind).
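
For instance, a sketch along these lines (IRepository<T> and InvoiceRepository are hypothetical, but in the shape such types usually take):

using System;
using System.Collections.Generic;

public class Invoice
{
    public int Id { get; set; }
}

public interface IRepository<T>
{
    T GetById(int id);
    IEnumerable<T> GetAll();
}

// The identity interface says what this class is; IDisposable is just a
// characteristic tacked on because the class happens to hold a connection.
public class InvoiceRepository : IRepository<Invoice>, IDisposable
{
    public Invoice GetById(int id) { return new Invoice { Id = id }; }

    public IEnumerable<Invoice> GetAll() { return new List<Invoice>(); }

    public void Dispose() { /* release the underlying connection */ }
}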

Seam Interfaces

A seam interface is one that divides your application in swappable ways. A good example would be a series of repositories that make use of IDataAccess implementations. The dependency of repositories on schemes for accessing data is inverted, allowing easy testing of the repositories and simple runtime configuration of different data access schemes. As a concrete example, an architecture using a seam interface between repository and data access could switch from using a SQL Server database to a flat file structure or a document database by altering only a few lines of wireup code or XML. Seam interfaces provide points at which applications are easy to test and change.
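
Sketched out, that seam might look something like this; the interface shape and both implementations are illustrative rather than prescriptive:

public interface IDataAccess
{
    string Load(string key);
    void Save(string key, string value);
}

public class SqlDataAccess : IDataAccess
{
    public string Load(string key) { /* SELECT against SQL Server */ return string.Empty; }
    public void Save(string key, string value) { /* INSERT or UPDATE */ }
}

public class FlatFileDataAccess : IDataAccess
{
    public string Load(string key) { /* read from a file on disk */ return string.Empty; }
    public void Save(string key, string value) { /* write to a file on disk */ }
}

// The repository depends only on the seam, so swapping persistence schemes
// is a matter of handing it a different IDataAccess at wireup time.
public class SettingsRepository
{
    private readonly IDataAccess _dataAccess;

    public SettingsRepository(IDataAccess dataAccess)
    {
        _dataAccess = dataAccess;
    }

    public string GetSetting(string name)
    {
        return _dataAccess.Load(name);
    }
}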

Speculative Interfaces

Speculative interfaces are sort of the dark side of seam interfaces–they’re unrealized seams. In our previous example, the IDataAccess interface would be speculative if there were only one type of persistence and that was going to be the case for the foreseeable future. The interface is still providing a seam for testing, but now the complexity that it introduces is of questionable value since there aren’t multiple implementations and it could be argued that you’re simply introducing complexity in violation of YAGNI. It’s generally easiest to identify speculative interfaces by the naming scheme of the interfaces and their single implementation: Foo and IFoo, LookupService and ILookupService, etc. (This latter naming example is specific to C#–I don’t know exactly how people commonly name speculative interfaces in other languages or if there is a consistent naming scheme at all, absent the C# specific Hungarian notation for interfaces.)
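
In practice, the naming tell looks something like this (a hypothetical example; note that the interface mirrors its lone implementation method for method):

public interface ILookupService
{
    string Lookup(int id);
}

// The one and only implementation, now and for the foreseeable future.
// The seam exists, but nothing varies across it except perhaps in tests.
public class LookupService : ILookupService
{
    public string Lookup(int id)
    {
        return "value for " + id;
    }
}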


10 Terms You Should Know

Apropos of nothing, I’d like to make a very brief post of a list of terms, with links to where you can read more about them. These terms describe concepts that, when understood, I believe can help make you sharper in the way you approach things as a programmer. I don’t mean that learning them will immediately boost your productivity or solve any specific problem that you have, but I do feel that they may give you new ideas for designs or methods for reasoning about your code. So give them a read or at least a glance to brush up or learn something new.

(This list is by no means an attempt to be comprehensive, and please feel free to suggest others in the comments.)

  1. Leaky Abstractions
  2. RESTful Architecture
  3. Lambda Expressions
  4. Aspect Oriented Programming (AOP)
  5. Duck Typing
  6. Command Query Responsibility Segregation (CQRS)
  7. NoSQL
  8. Early and Late Binding
  9. Immutability
  10. Cache Coherence

Hopefully there’s at least one relatively new concept in here that you can let percolate in your brain for a while. And who knows, perhaps it will help you consider an alternate solution to some problem you face in the future.

Cheers!


Write Once, Confuse Everywhere

Not too long ago, someone asked me to take a look at a relatively simple application designed with the notion of having some future flexibility in mind. I poked around a bit to see what was where and found a reasonably simple and straightforward application. Like any, it had its warts, but there were no serious surprises–no whiplash-inducing double-takes or WTFs. One thing that I did notice, however, was that there were a bunch of classes that inherited from GUI components and had no implementations at all. So instead of using a TextBox, you might use a “DoNothingTextBox” with class definition DoNothingTextBox : TextBox and no implementation. (It was not actually called “DoNothingTextBox”–that’s just for illustration purposes.)

I puzzled over the purpose of these things for a while until I inspected a few more and saw one with some code in it. I saw the purpose then immediately: hooks for future functionality. So if, for example, it were later decided at some point that all TextBoxes should disallow typing of non-alphanumeric characters, the behavior could be implemented in one place and work everywhere.

On the one hand, this does seem like a clever bit of future-proofing. If experience tells you that a common need will be to make every instance of X do Y, then it stands to reason you should make it easy and minimally invasive to touch every X. On the other hand, you’re quite possibly running afoul of YAGNI and definitely running afoul of the Open/Closed Principle by creating classes that you may never use with the specific intention of modifying them later. Also to consider is that this creates a weird flipping of what’s commonly expected with inheritance; rather than creating a derived class to extend existing functionality when needed, you speculatively create an ancestor for that purpose.

Is this a problem? Well, let’s think about how we would actually achieve this “change everything.” With the normal pattern of abstracting common functionality into a base class for reuse, the mechanism for the reuse is derived classes deferring to parent’s centralized implementation. For instance:

public class Bird
{
    public virtual void Reproduce()
    {
        Console.Write("I laid an egg!");
    }
}

public class Ostrich : Bird
{
    public override void Reproduce()
    {
        base.Reproduce();
        Console.Write("... and it was extremely large as far as bird eggs go.");
    }
}

public class Parrot : Bird
{
}

Notice that since all birds reproduce by laying eggs, that functionality has been abstracted into a base class where it can be used everywhere and need not be duplicated. If necessary, it can be extended as in the case of Ostrich, or even overridden (which Ostrich could do by omitting the base class call), but the default is what Parrot does: simply use the common Reproduce() method. The bird classes have a default behavior that they can extend or opt out of, assuming that the base class is defined and established at the time that another bird class extends it.

Aha! This is an interesting distinction. The most common scenario for inheritance schemes is one where (1) the base class is defined first or (2) some duplicated piece of functionality across inheritors is moved up into a base class because it’s identical. In either case, the common functionality predates the inheritance relationship. Birds lay eggs, and part of deciding to be a bird (in the sense of writing a class that derives from Bird) is that default egg-laying behavior.

But let’s go back to our speculative text boxes. What does that look like?


public class DoNothingTextBox : TextBox
{
}

Now let’s say that time goes by and the developers all diligently stick to the architectural ‘pattern’ of using DoNothingTextBox everywhere. Life is good. But one day, some project management stakeholder tells one of the developers more on the UI side of things that all of the text boxes in the application should be green. The developer ponders that for a bit and then says, “I know!”

public class DoNothingTextBox : TextBox
{
    public DoNothingTextBox() : base()
    {
        BackColor = Color.Green;
    }
}

“Done and done.” He builds, observes that all text boxes are now green, checks in his changes, and takes off for the day to celebrate a job well done. Meanwhile, a handful of other developers on the team are updating to the latest version of the code to get an unrelated change they need. Each of them pulls in the latest, develops for a bit, launches the app to check out their changes, and, “what the… why is this text box green–how did I manage that?” Their troubleshooting progression probably starts with rolling back various changes. Then it winds its way into the version history of CSS files and styling mechanisms; stumbles into looking at the ASP markup and functionality of the various collaborators and controls on the web control/page; and, only after a great deal of frustration, cursing, and hair-tearing, winds up in the dusty, old, forgotten, formerly-empty DoNothingTextBox class.

I submit that the problem here is a subtle but profound one. As I’ve mentioned before, inheritance, by its nature, extends and sometimes modifies existing functionality. But this framework for building out and expanding software relies upon the fact that base classes are inherently more fixed and stable than child inheritors and that the relationship between child classes and base classes is well-defined and fixed at the time of inheritance/extension. To put it more concretely, OO developers will intuitively understand how to use a “base” bird that lays eggs–they won’t intuitively understand how to use a “base” bird that does nothing and then, later, spontaneously turns every bird on earth green. Changes to a base class are violent and confusing when they alter the behavior of inheritors while leaving the inheritors untouched.

So I’d encourage you to be extremely careful with using speculative inheritance structures. The instinct to create designs where potential sweeping changes can be implemented easily is an excellent one, but not all responses to that instinct are equally beneficial. Making a one line code change is certainly quick and creates a minimum of code upheaval for the moment, but once all of the time, effort, and hacking around during troubleshooting by other developers is factored in, the return isn’t quite so good anymore. Preserve that instinct, but channel it into better design decisions. It’s just as important to consider how broadly confusing or unintuitive a change will be as it is to consider how many lines of code it takes and how many files need to be touched.