DaedTech

Stories about Software


Visualization Mnemonics for Software Principles

Whether it’s because you want to be able to participate in software engineering discussions without having to surreptitiously look things up on your phone, or whether it’s because you have an interview coming up with a firm that wants you to be some kind of expert in OOP or something, you probably have at least some desire to be knowledgeable about development terms. This is probably doubly true of you, since ipso facto, you read blogs about software.

Toward that end, I’m writing this post. My goal is to provide you with a series of somewhat vivid ways to remember software concepts so that you’ll have a fighting chance at remembering what they’re about sometime later. I’m going to do this by telling a series of stories. So, I’ll get right to it.

Law of Demeter

Last week I was on a driving trip and I stopped by a gas station to get myself a Mountain Dew for the sake of road alertness. I grabbed the soda from the cooler and plopped it down on the counter, prompting the clerk to say, “that’ll be $1.95.” At this point, naturally, I removed my pants and the guy started screaming at me about police and indecent exposure. Confused, I said, “look, I’m just trying to pay you — I’ll hand you my pants and you go rummaging around in my pockets until you find my wallet, which you’ll take out and go looking through for cash. If I’m due change, put it back into the wallet, unless it’s a coin, and then just put it in my pocket, and give me back the pants.” He pulled a shotgun out from behind the counter and told me that in his store, people obey the Law of Demeter or else.


So what does the Law of Demeter say? Well, informally, it says “give collaborators exactly what they’re asking for and don’t give them something they’ll have to go picking through to get what they want.” There’s a reason we hand the clerk money at the store rather than our pants (or even our wallet); it’s inappropriate to send them hunting for the money. The Law of Demeter encourages you to think this way about your code. Don’t return Pants and force clients of your method to get what they want by invoking Pants.Pockets[1].Wallet.Money — just give them a Money. And, if you’re the clerk, don’t accept someone handing you a Pants so that you can go rummaging around for the money yourself — demand the money or show them your shotgun.
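To put that in code, here is a minimal sketch; the Wallet, Customer, and Clerk types (and the $20 of starting cash) are invented for illustration.

```csharp
public class Wallet
{
    public decimal Cash { get; private set; } = 20m;

    public decimal RemoveCash(decimal amount)
    {
        Cash -= amount;
        return amount;
    }
}

public class Customer
{
    private readonly Wallet _wallet = new Wallet();

    // Demeter-friendly: the customer hands over money, not pants.
    public decimal GetPayment(decimal amountDue) => _wallet.RemoveCash(amountDue);
}

public class Clerk
{
    // The clerk talks only to its immediate collaborator and asks for exactly
    // what it needs, instead of reaching through customer.Pants.Pockets[1].Wallet.
    public decimal CollectPayment(Customer customer, decimal amountDue) =>
        customer.GetPayment(amountDue);
}
```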

Single Responsibility Principle

My girlfriend and I recently bought an investment property a couple of hours away. It’s a little house on a lake that was built in the 1950s and, while cozy and pleasant, it doesn’t have all of the modern amenities that I might want, resulting in a series of home improvement projects to re-tile floors, build some things out and knock some things down. That kind of stuff.

One such project was installing a garbage disposal, which has two components: plumbing and electrical. The plumbing part is pretty straightforward in that you just need to remove the existing drain pipe and insert the disposal between the drain and the drainage pipe. The electrical is a little more interesting in that you need to run wiring from a switch to the disposal so that you can turn it on and off. Now, naturally, I didn’t want to go to all the hubbub of creating a whole different switch, so I decided just to use one that was already there. The front patio light switch had the responsibility for turning the front patio light on and off, but I added a little to its burden, asking it also to control the garbage disposal.

That’s worked pretty well. So far the only mishap occurred when I was rinsing off some dishes and dropped a spoon in the drain while, at the same time, my girlfriend turned the front light on for visitors we were expecting. Luckily, I had only a minor scrape and a mangled spoon, and that’s a small price to pay to avoid creating a whole new light switch. And really, what’s the worst that could happen?

Well, I think you know the worst thing that could happen is that someone loses a hand due to this absurd design. That’s what happens when you run afoul of the Single Responsibility Principle, which could loosely be described as saying “do one thing only and do that thing well” or “have only one reason to change.” In my house, we have two reasons to change the state of the switch: turning on the disposal and turning on the light, and this creates an obvious problem. The parallel holds in code as well. If you have a class that needs to be changed whenever schema updates occur and whenever GUI changes occur, then you have a class that serves two masters and the possibility for changes to one thing to affect the other. Disk space is cheap and classes/namespaces/modules are renewable resources. When in doubt, create another one.
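To make the “two masters” idea concrete, here is a rough sketch; the class and member names are invented for illustration, and the split shown is just one reasonable way to carve it up.

```csharp
public class Customer
{
    public string Name { get; set; }
}

// Two reasons to change: the persistence schema and the GUI formatting.
public class CustomerModule
{
    public void SaveToDatabase(Customer customer) { /* changes whenever the schema changes */ }

    public string RenderAsHtml(Customer customer)
    {
        return "<div>" + customer.Name + "</div>"; // changes whenever the GUI changes
    }
}

// One reason apiece; classes are a renewable resource, so make another one.
public class CustomerRepository
{
    public void Save(Customer customer) { /* schema concerns only */ }
}

public class CustomerHtmlRenderer
{
    public string Render(Customer customer)
    {
        return "<div>" + customer.Name + "</div>"; // presentation concerns only
    }
}
```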

Open/Closed Principle

I don’t have a ton of time for TV these days, and that’s mainly because TV is so time consuming. It was a lot simpler when I just had a TV that got an analog signal over the air. But then, things went digital, so I had to take apart my TV and rewire it to handle digital signals. Next, we got cable and, of course, there I am, disassembling the TV again so that we can wire it up to get a cable signal. The worst part of that was that when I became furious with the cable provider and we switched to Dish, I was right back to work on the TV. Now, we have a Nintendo Wii, a DVD player, and a Roku, but who has the time to take the television apart and rewire it to handle each of these additional items? And if that weren’t bad enough, I tried hooking up an old school Sega Genesis last year, and my Dish stopped working.

… said no one, ever. And the reason no one has ever said this is that televisions that you purchase follow the Open/Closed Principle, which basically says that you should create components that are closed to modification, but open for extension. Televisions you purchase aren’t made to be disassembled by you, and certainly no one expects you to hack into the guts of the TV just to plug some device into it. That’s what the Coax/RCA/Component/HDMI/etc feeds are for. With the inputs and the sealed-under-warranty case, your television is open for extension, but closed for modification. You can extend its functionality by plugging anything you like into it, including things not even out yet, like an X-Box 12 or something. Follow this same concept for flexible code: strive to maximize flexibility by facilitating maintenance via extension and new code. If you program to interfaces or allow overriding of behavior via inheritance, life is a lot easier when it comes time to change functionality. So favor that over writing some juggernaut class that you have to go in and modify literally every sprint. That’s icky, and you’ll learn to hate that class and the design around it the same way you’d hate the television I just described.
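A minimal sketch of the television metaphor in code might look like this; the type names (Television, IInputDevice, VideoFrame) are invented for illustration.

```csharp
// The "sealed-under-warranty" television: closed for modification, open for extension.
public interface IInputDevice
{
    VideoFrame GetNextFrame();
}

public class Television
{
    // Plugging in a new device never requires reopening this class.
    public void Display(IInputDevice input)
    {
        Render(input.GetNextFrame());
    }

    private void Render(VideoFrame frame) { /* draw the frame on screen */ }
}

// Extension points: new devices, even ones that don't exist yet, just implement the interface.
public class Roku : IInputDevice
{
    public VideoFrame GetNextFrame() { return new VideoFrame(); }
}

public class SegaGenesis : IInputDevice
{
    public VideoFrame GetNextFrame() { return new VideoFrame(); }
}

public class VideoFrame { }
```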

Liskov Substitution Principle

I’m someone who usually eats a pretty unremarkable salad with dinner. You know, standard stuff: lettuce, tomatoes, croutons, scallions, carrots, and hemlock. One thing that I seem to do differently than most, however, is that I examine each individual item in the salad to see whether or not it will kill me before I put it into my mouth (a lot of other salad consumers seem to play pretty fast and loose with their lives, sheesh). I have a pretty simple algorithm for this. If the item is not hemlock, I eat it. If it is hemlock, I put it onto my plate to throw out later. I highly recommend eating your hemlock salad this way.

Or, you could bear in mind the Liskov Substitution Principle, which basically says that if you’re going to have an inheritance relationship, then derived types should be seamlessly swappable for their base type. So, if I have a salad full of Edibles, I shouldn’t have some derived type, Hemlock, that doesn’t behave the way other Edibles do. Another way to think of this is that if you have a heterogeneous collection of things in an inheritance hierarchy, you shouldn’t go through them one by one and say, “let’s see which type this is and treat it specially.” So, obey the LSP and don’t make hemlock salads for people. You’ll have cleaner code and avoid jail.
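Here is a rough sketch of the hemlock problem in code; the Edible hierarchy and SaladEater are invented for illustration.

```csharp
using System;
using System.Collections.Generic;

public abstract class Edible
{
    public abstract void Eat();
}

public class Crouton : Edible
{
    public override void Eat() { /* crunchy, harmless */ }
}

// The LSP violation: Hemlock is an Edible in name only, so substituting it
// for its base type changes behavior fatally.
public class Hemlock : Edible
{
    public override void Eat()
    {
        throw new InvalidOperationException("Poison!");
    }
}

public static class SaladEater
{
    public static void EatSalad(IEnumerable<Edible> salad)
    {
        foreach (var item in salad)
        {
            // This type check is the smell: the hierarchy forces clients to treat one subtype specially.
            if (item is Hemlock)
                continue;

            item.Eat();
        }
    }
}
```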

Interface Segregation Principle

Thank goodness for web page caching — it’s a life saver. Whenever I go to my favorite dictionary site, expertbeginnerdictionary.com (not a real site, if you were thinking of trying it), it prompts me for a word to look up and, when I type in the word and hit enter, it sends me the dictionary over HTTP, at which time I can search the page text with Ctrl-F to find my word. It takes such a long time for my browser to load the entire English dictionary that I’d be really up a creek without page caching. The only trouble is, whenever a word changes and the cache is invalidated, my next lookup takes forever while the browser re-downloads the dictionary. If only there were a better way…

… and there is. Don’t give me the entire dictionary when I want to look up a word. Just give me that word. If I want to know what “zebra” means, I don’t care what “aardvark” means, and my zebra lookup experience shouldn’t be affected and put at risk by changes to “aardvark.” I should only be depending on the words and definitions that I actually use, rather than the entire dictionary. Likewise, if you’re defining public interfaces in your code for clients, break them into minimum composable segments and let your clients assemble them as needed, rather than forcing the kitchen sink (or dictionary) on them.  The Interface Segregation Principle says that clients of an interface shouldn’t be forced to depend on methods that they don’t use because of the excess, pointless baggage that comes along.  Give clients the minimum that they need.
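A minimal sketch of the segregated version might look like this; the interface and class names are invented for illustration.

```csharp
using System.Collections.Generic;

// The "whole dictionary" interface: every client depends on everything.
public interface IDictionarySite
{
    string DefineWord(string word);
    IEnumerable<string> GetAllWords();
    void SubmitNewWord(string word, string definition);
}

// Segregated interfaces: clients depend only on what they actually use.
public interface IWordLookup
{
    string DefineWord(string word);
}

public interface IWordSubmission
{
    void SubmitNewWord(string word, string definition);
}

// A zebra-lookup client is untouched by changes to submission or bulk-listing concerns.
public class LookupClient
{
    private readonly IWordLookup _lookup;

    public LookupClient(IWordLookup lookup)
    {
        _lookup = lookup;
    }

    public string WhatIsAZebra()
    {
        return _lookup.DefineWord("zebra");
    }
}
```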

Dependency Inversion Principle

Have you ever been to an automobile factory?  It’s amazing to watch how these things are made.  They start with a car, and the car assembles its own engine, seats, steering wheel, etc.  It’s pretty amazing to watch.  And, for a real treat, you can watch these sub-parts assemble their own internals.  The engine builds its own alternator, battery, transmission, etc — a breathtaking feat of engineering.  Of course, there’s a downside to everything, and, as cool as this is, it can be frustrating that the people in the factory have no control over what kind of engine the car builds for itself.  All they can do is say, “I want a car” and the car does the rest.

I bet you can picture the code base I’m describing. A long time ago, I went into detail about this piece of imagery, but I’ll summarize by saying that this is “command and control” programming where constructors of objects instantiate all of the object’s dependencies — FooService instantiates its own logger. This runs afoul of the Dependency Inversion Principle, which holds that high level modules, like Car, should not depend directly on lower level modules, like Engine, but rather that both should depend on an abstraction of the Car-Engine interaction. This allows the car and the engine to vary independently, meaning that our automobile factory workers actually could have control over which engines go in which cars. And, as described in the linked post, a code base making heavy use of the Dependency Inversion Principle tends to be composable, whereas a command and control style code base is not, favoring instead the “car, build thyself” approach. So, to remember and understand the Dependency Inversion Principle, ask yourself who should control what parts go in your car — the people building the car, or the car itself? Only one of those ideas is preposterous.
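To make the inversion concrete, here is a rough sketch with invented type names; the point is that the caller, not the Car, decides which engine gets installed.

```csharp
// Both the high-level Car and the low-level engines depend on this abstraction.
public interface IEngine
{
    void Start();
}

public class V6Engine : IEngine
{
    public void Start() { /* vroom */ }
}

public class ElectricEngine : IEngine
{
    public void Start() { /* hum */ }
}

public class Car
{
    private readonly IEngine _engine;

    // The factory worker (the caller) picks the engine; the car does not build its own.
    public Car(IEngine engine)
    {
        _engine = engine;
    }

    public void Drive()
    {
        _engine.Start();
    }
}

// Composing the "factory floor":
// var car = new Car(new ElectricEngine());
```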



Starting to Unit Test: Not as Hard as You’d Think

I happened to read a post by Dror Helper the other day, in which he said:

I believe TDD and “unit tests” has been done a great injustice by not giving it a cooler name – preferable one that doesn’t have the word “test” in it – because it’s a PR disaster!

Wow, that had never occurred to me, and yet he’s spot on. “Unit test” is a wretched name for anything. It seems somehow to combine all the worst elements of propagating uncertainty in arithmetic with lab measurements and double checking all of your answers on a scantron exam. I just bored myself half to death typing that sentence, so it’s little wonder that the concept of a “unit test” is often met with a visceral “ugh.” I mean, we could at the least call the test suite a “verification checklist” or something, implying that you’re marking progress as you complete things. It’s not exactly a skydiving trip, but it has to beat “unit test.” But I digress.

The lack of appeal of the name, the feeling of already being pressed for time without taking something else on, and the natural resistance to the unknown all create barriers to entry when it comes to unit testing. In order to get started yourself, or especially to convince those around you to do the same, it’s necessary to overcome those barriers. I have a good bit of experience with this in a variety of environments and from a variety of roles. Over the last year, I did a string of blog posts that were essentially my talking scripts for a series of PowerPoint/demo talks I gave. These were meant to be an introduction to unit testing.

The series actually became pretty long, and as I was finishing it, I decided to put it into E-Book format. So, with the help of my editor, we turned it into a fluid book, and with the help of my publisher, we published it in all major E-Book formats for a cost of $4.99. Here is the book on the publisher’s site, and here is a link directly to Amazon for Kindle readers (full disclosure: I’m experimenting with affiliate links, and that’s why I’m specifically linking Amazon directly).

The title of the book is “Starting to Unit Test: Not as Hard as You Think,” and I feel that the title really captures it. My goal was to trade practice purity for reducing the barriers to entry. In other words, the message was, “don’t feel like you have to start full bore with 100% coverage and TDD, where anything less is a failure — if you just introduce a few tests into a few places in your code, consider that a win.” I also did something else that I haven’t seen others do as much, which is to explain that some types of frameworks and code present unit testing nightmares, and that newbie unit testers should avoid them until they reach a higher belt. What I’d like to see people take away from this book is a feeling of satisfaction from experiencing a sequence of small but real wins.
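To give a flavor of the kind of small first win I have in mind, here is a minimal sketch of a single test; it assumes an xUnit-style test framework, and PriceCalculator is a made-up class for illustration.

```csharp
using Xunit;

public class PriceCalculator
{
    public decimal ApplyTax(decimal price, decimal taxRate)
    {
        return price * (1 + taxRate);
    }
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyTax_Adds_Five_Percent_To_The_Price()
    {
        var calculator = new PriceCalculator();

        var result = calculator.ApplyTax(100m, 0.05m);

        Assert.Equal(105m, result);
    }
}
```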

For you mythology buffs, this is Sisyphus actually making it to the top of the hill.


By and large, you could get this content by reading through my blog posts on the subject, but if you want it in a compact format, here it is. Not to mention, if you’re trying to sell your team on the merits of unit testing, a book is probably going to have more cachet than a series of blog posts, and the one link to the book is going to be easier to distribute than the giant collection of links for the individual posts/chapters. So get it if you’re interested, and encourage your team to get it if you’re trying to introduce the concept, especially since I close out the book by making a business case for unit testing (making business cases for best practices is actually going to be a theme of mine in the future).

Enjoy, and thanks for reading (the blog and, hopefully, the book)!


Encapsulation vs Inversion of Control

This is a post that I’ve had in my drafts folder for nearly two years. Well, I should say I’ve had the title and a few haphazard notes in my drafts folder for nearly two years; I’m writing the actual post right now. The reason I’ve had it sitting around for so long is twofold: (1) it’s sort of a subjective, tricky topic and (2) I’ve struggled to find a concrete stand to take. But, I think it’s relatively important, so I’ve always vowed to circle back to it and, well, here we are.

The draft was first conceived when I’d given a presentation about testability and inversion of control — specifically, using dependency injection via constructors and setters to achieve these ends. I talked, among other things, about the Open/Closed Principle, and how this allows modifications to system behavior that favor adding over editing code. The idea here is that we can achieve new functionality with a minimum of violence to the code base and in a way for which it is easy to write unit tests. Everyone wins, right?

Well, not everyone was drinking the Kool-Aid. I fielded a question about encapsulation that, at the time, I hadn’t prepared to answer. “Doesn’t this completely violate encapsulation?” I was a little poleaxed, and sputtered out an answer off the cuff, saying basically, “well, not completely….” I mean, if you’ve written a class that takes ILogger in its constructor and uses it for logging, I control the logger implementation but you control when and how it is used. So, you encapsulate the logger’s usage but not its implementation and this stands in contrast to what would happen if you instantiated your own logger or implemented it yourself — you would encapsulate everything and nothing would be up to me as a client of your code. Certainly, you have more encapsulation. So, I finished: “…not completely…. but who cares?!?” And that was the end of the discussion since we were out of time anyway.


I was never really satisfied with that answer but, as Creedence Clearwater says, “time and tears went by, and I collected dust.” When I thought back to that conversation, I would think to myself that encapsulation was passe in the same way that deep inheritance hierarchies were passe. I mean, sure, I learned that encapsulation was one of the four cornerstone principles of OOP, but so is inheritance, and that’s kind of going away with “favor composition over inheritance.” So, hey, “favor dependency injection over encapsulation.” Right? Still, I didn’t find this entirely satisfying — just good enough not to really occupy much of a place in my mind.

But then I remembered a bit of a brouhaha from last year over a Stack Overflow question. The question itself wasn’t especially remarkable (and was relatively quickly closed), but compiler author and programming legend Eric Lippert dropped by to say “DI is basically a bad idea.” To elaborate, he said:

There is no killer argument for DI because DI is basically a bad idea. The idea of DI is that you take what ought to be implementation details of a class and then allow the user of the class to determine those implementation details. This means that the author of the class no longer has control over the correctness or performance or reliability of the class; that control is put into the hands of the caller, who does not know enough about the internal implementation details of the class to make a good choice.

I was floored. Here we have one of the key authors of the C# compiler saying that the “D” in the SOLID principles was a “bad idea.” I would have dismissed it as blasphemy if (1) I were the sort to adopt approaches based on dogma and (2) he hadn’t helped author at least 4 more compilers than I have. And, while I didn’t suddenly rip the IoC containers out of my projects and instantiate everything inside constructors, I did revisit this topic in terms of my thoughts.

Maybe encapsulation, in the information hiding sense, isn’t so passe. And maybe DI isn’t a magic bullet. But why not? What’s wrong with the author of a class ceding control over some aspects of its behavior by allowing collaboration? And, isn’t any method parameter technically a form of DI, if we’re going to be pedantic about it?

The more I thought about it, the more I started to see competing and interesting use cases. Or, I should say, the more I started to think about what class authors are telling their collaborators by using each of these techniques:

Encapsulation: “Don’t worry — I got this.”
Dependency Injection: “Don’t worry — if this doesn’t work, you can always change it.”
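In code, the contrast looks roughly like this; ILogger and the processor classes here are stand-ins I’m inventing for illustration, not any particular library’s types.

```csharp
public interface ILogger
{
    void Log(string message);
}

public class FileLogger : ILogger
{
    public void Log(string message) { /* write to a file */ }
}

// Encapsulation: "Don't worry, I got this." The author picks and hides the logger.
public class EncapsulatedOrderProcessor
{
    private readonly ILogger _logger = new FileLogger();

    public void Process()
    {
        _logger.Log("Processing order...");
    }
}

// Dependency injection: "If this doesn't work, you can always change it." The caller picks the logger.
public class InjectedOrderProcessor
{
    private readonly ILogger _logger;

    public InjectedOrderProcessor(ILogger logger)
    {
        _logger = logger;
    }

    public void Process()
    {
        _logger.Log("Processing order...");
    }
}
```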

So, let’s say that you’re writing a Mars Rover or maybe a compiler or something. The attitude that you’re going to bring to that project is one in which correctness, performance and reliability are all incredibly important because you have to get it right and there’s little room for error. As such, you’re likely going to adopt an implementation preference of “I’m going to make absolutely sure that nothing can go wrong with my code.”

But let’s say you’re writing a line of business app for Initrode Inc and the main project stakeholder is fickle, scatterbrained, and indecisive. Then you’re going to have an attitude in which ease and rapidity of system changes is incredibly important because you have to change it fast. As such, you’re likely to adopt an implementation preference of “I’m going to make absolutely sure that changing this without blowing everything up is easy.”

There’s bound to be somewhat of an inverse relationship between flexibility and correctness. As a classic example, a common criticism of Apple’s “walled garden” approach was that it was so rigid, while a common praise of the same was how well it worked. So I guess my take-away from this is that Dependency Injection and, more broadly, Inversion of Control, is not automatically desirable, but I also don’t think I can get to Eric’s take that it’s “basically a bad idea,” either. It’s simply an exchange of “more likely to be correct now” for “more likely to be correct later.” And in the generally agile world in which I live and with the kind of applications that I write, “later” tends to give more value.

Uncle Bob Martin once said, I believe in his Clean Coders video series, that (paraphrased) the second most important characteristic of good software is that it meet the customer requirements. The most important characteristic is that it be easy to change. Reason being, if a system is correct today but rigid, it will be wrong tomorrow when the customer wants changes. If the system is wrong today but flexible, it’s easy to make it right tomorrow. It may not be perfect, but I like DI because I need to be right tomorrow.


Implied Acceptance Criteria

I’m on a Scrum team these days, serving as Product Owner, and I’ve been watching developers do functionality demos. This has generally been of the form of them walking me through the happy path, and then me taking it for a test drive and trying to put myself in the shoes of a user, ‘cleverly’ typing “twenty five” into a text box that clearly wants an integer to see what happens.

Yellow screen of death? Generic error message? Scornful validation message such as “dude, what’s wrong with you?” Helpful validation message such as “your response for how many children you have cannot be a negative number, a decimal, or typeset nonsense”?

Universal Acceptance Criteria?

If what happens isn’t what I think should happen, it’s then off to check out the acceptance criteria/tests. Did it say anything in there about a validation message? Should it have to? Are there things that ought to be “universal acceptance criteria”?

To me, the answer to that question is, “sort of, but I’d prefer to think of them more as default/implied ACs.” And so I started to enumerate them on an internal wiki, kind of as an exercise for myself to see if there were certain things that I think should always be true. Maybe these are like the equivalent of code contracts, but contracts between the developer and the user. Here are some ideas that I came up with:

  • When a user supplies invalid input, a message should be shown explaining why the input was not accepted (see the sketch after this list).
  • The way to get back to where you just were should always be available and obvious (e.g. “cancel” button or “back” link or something).
  • Nothing that happens should ever result in a “yellow screen” or whatever equivalent indicates to the user that whatever is going wrong is something you never dreamed of (for production release, anyway — I have no issues with crashes like this in internal or beta tests as part of a “fail early” strategy)
  • If an exception occurs that the user would understand, it is explained to the user (e.g. “The connection to the database was lost.”)
  • There is never a point during the normal course of usage where the user thinks, “should I just, like, wait, or is it frozen?”
  • If it’s possible to prevent the user from doing the wrong thing, the user is prevented from doing the wrong thing (e.g. if user clicks “save” and that’s a long running operation, “save” button is disabled until it’s okay to click again)
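As a rough sketch of the first item on that list, the idea is simply to hand back something the user can act on; the validator class and its messages here are invented for illustration.

```csharp
public class ChildCountValidator
{
    // Returns null when the input is acceptable; otherwise, a message explaining why it was rejected.
    public string Validate(string input)
    {
        int count;
        if (!int.TryParse(input, out count))
            return "Please enter the number of children as a whole number (for example, 2 rather than \"twenty five\").";

        if (count < 0)
            return "The number of children cannot be negative.";

        return null;
    }
}
```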

Explaining Implied/Default Acceptance Criteria

These are some things that I think ought to be true of your application’s behavior unless there is a good reason to deviate. These are things that I generally think of when I’m working, even if nothing is spelled out explicitly. I think of these and I think of more, depending on the application context (e.g. is this something that should be permission controlled or is this something where the verbiage might be changed later?)

Over the years, I’ve developed a pretty lengthy mental checklist for what I consider good application experience, even without consciously realizing that I’ve done so for the most part. But I think it’s important for us to try to grow this evoked set in our minds as we go.

So what about you? Do you have anything you’d add to this list — any glaring omissions? I’m actually looking to tabulate some of these things into a working document and perhaps expand it to cover other kinds of minimal checklists for various scenarios (such as testing your own code via running the application, finding and fixing a bug, etc). Perhaps when I get it more fleshed out at some point, I’ll revisit and post again, but either way, I’d be interested to hear what you consider to be table stakes/minimum standards for how your applications interact with users.



Why I Don’t Inherit from Collection Types

In one of the recent posts in the TDD chess series, I was asked a question that made me sort of stop and muse a little after it was asked. It had to do with whether to define a custom collection type or simply use an out of the box one and my thinking went, “well, I’d use an out of the box one unless there were some compelling reason to build my own.” This got me to thinking about something that had gone the way of the Dodo in my coding, which was to build collection types by inheriting from existing collections or else implementing collection interfaces. I used to do that and now I don’t do that anymore. At some point, it simply stopped happening. But why?

I never read some blog post or book that convinced me this was stupid, nor do I have any recollection of getting burned by this practice. There was no great moment of falling out — just a long-ago cessation that wasn’t a conscious decision, sort of like the point in your 20’s when it stops being a given that you’re going to be out until all hours on a Saturday night. I pondered this here and there for a week or so, and then I figured it out.

TDD happened. Er, well, I started practicing TDD as a matter of course, and, in the time since I started doing that, I never inherited from a collection type. Interesting. So, what’s the relationship?

Well, simply put, the relationship is that inheriting from a collection type just never seems to be the simplest way to get some test passing, and it’s never a compelling refactoring (if I were defining some kind of API library for developers and a custom collection type were a reasonable end-product, I would write one, but not for my own development sake). Inheriting from a collection type is, in fact, a rather speculative piece of coding. You’re defining some type that you want to use and saying “you know, it’d probably be handy if I could also do X, Y, and Z to it later.” But I’ve written my opinion about this type of activity.

Doing this also greatly increases the surface area of your class. If I have a type that encapsulates some collection of things and I want to give clients the ability to access those things by a property such as their “Id,” then I define a GetById(int) method (or whatever). But if I then say, “well, you know what, maybe they want to access these things by position or iterate over them or something, so I’ll just inherit from List of Whatever and give them all that goodness,” then yikes! Now I’m responsible for maintaining a concept of ordering, handling the case of out-of-bounds indexing, and all sorts of other unpleasantness. So, I can either punt and delegate to the methods on the type that I’m wrapping, or else I can get rid of the encapsulation altogether and just cobble additional functionality onto List.
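Here is roughly the choice I’m describing, with invented type names: the first class encapsulates its collection and exposes only GetById, while the second inherits from List and drags the whole collection API along with it.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
}

// Encapsulation: expose only the behavior clients have actually asked for.
public class CustomerRoster
{
    private readonly List<Customer> _customers = new List<Customer>();

    public void Add(Customer customer)
    {
        _customers.Add(customer);
    }

    public Customer GetById(int id)
    {
        return _customers.FirstOrDefault(c => c.Id == id);
    }
}

// Inheritance: clients get the entire List<T> surface area (indexing, sorting,
// InsertRange, and so on), and now I'm on the hook for all of it.
public class CustomerList : List<Customer>
{
    public Customer GetById(int id)
    {
        return this.FirstOrDefault(c => c.Id == id);
    }
}
```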

But that’s an icky choice. I’m either coughing up all sorts of extra surface area and delegating it to something I’m wrapping (ugly design) or I’m kind of weirdly violating the SRP by extending some collection type to do domain sorts of things. And I’m doing this counter to what my TDD approach would bubble to the surface anyway. I’d never write speculative code at all. Until some client of mine needs something besides GetById(int), GetById(int) is all it’s getting.

This has led me to wonder if, perhaps, there’s some relationship between the rise of TDD and decoupled design and the adage to favor composition over inheritance. Deep inheritance hierarchies across namespaces, assemblies, and even authors create the kind of indirection that makes intention muddy and testing (especially TDD) a chore. I want a class that encapsulates details and provides behavior because I’m hitting that public API with tests to define behavior. I don’t want some class that’s a List for the most part but with other stuff tacked on — I don’t want to unit test the List class from the library at all.

It’s interesting speculation, but at the end of the day, the reason I don’t inherit from collection types is a derivative of the fact that I think doing so is awkward and I’m leerier and leerier of inheritance as time goes on. I really don’t do it because I never find it helpful to do so, tautological as that may seem. Perhaps if you’re like me and you just stop thinking of it as a tool in your tool chest, you won’t miss it.