DaedTech

Stories about Software


Recall, Retrieval, and the Scientific Method

Improving Readability with Small Things

In my series on building a Chess game using TDD, I introduced a value type called BoardCoordinate so that I wouldn’t have to pass X and Y coordinate integer primitives around everywhere. It’s a simple enough construct:

public struct BoardCoordinate
{
    private readonly int _x;
    public int X { get { return _x; } }

    private readonly int _y;
    public int Y { get { return _y; } }

    public BoardCoordinate(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public bool IsCoordinateValidForBoardSize(int boardSize)
    {
        return IsDimensionValidForBoardSize(X, boardSize) && IsDimensionValidForBoardSize(Y, boardSize);
    }

    private static bool IsDimensionValidForBoardSize(int dimensionValue, int boardSize)
    {
        return dimensionValue > 0 && dimensionValue <= boardSize;
    }
}

This was a win early in the series because it steered me away from a trend toward Primitive Obsession, and I haven't really revisited it since. However, as the series has progressed, I've started to think that I want a semantically intuitive way to express equality between BoardCoordinates. Here's why:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Returns_1_2_For_1_1()
{
    Assert.IsTrue(MovesFrom11.Any(bc => bc.X == 1 && bc.Y == 2));
}

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Returns_2_2_For_1_1()
{
    Assert.IsTrue(MovesFrom11.Any(bc => bc.X == 2 && bc.Y == 2));
}

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Returns_3_3_For_1_1()
{
    Assert.IsTrue(MovesFrom11.Any(bc => bc.X == 3 && bc.Y == 3));
}

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Does_Not_Return_0_0_From_1_1()
{
    Assert.IsFalse(MovesFrom11.Any(bc => bc.X == 0 || bc.Y == 0));
}

This is a series of unit tests of the "Queen" class that represents, not surprisingly, the Queen piece in chess. The definition of "MovesFrom11" is elided, but it's a collection of BoardCoordinates that represents the possible moves a queen has from position (1, 1) on the chess board.

This series of tests was my TDD footprint for driving the functionality of determining the queen's moves. So, I started out saying that she should be able to move from (1,1) to (1,2), then had her also able to move to (2,2), etc. If you read the test, what I'm doing is saying that this collection of BoardCoordinates to which she can move should have in it one that has X coordinate of 1 and Y coordinate of 2, for instance.

What I don't like here, and what I'm making a mental note to change, is that "and". It's not as clear as it could be. I don't want to say, "there should be a coordinate in this collection with an X property of such and such and a Y property of such and such." I want to say, "the collection should contain this coordinate." This may seem like a small semantic difference, but I value readability to the utmost. And readability is a journey, not a destination -- the more you practice it, the more naturally you'll write readable code. So, I never take my foot off the gas.
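To make that concrete, here's roughly the kind of test I'd rather be writing. This is a sketch, not code from the series -- the test name is illustrative, and it assumes BoardCoordinate has (or inherits) a value-based notion of equality:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Returns_1_2_For_1_1_Via_Contains()
{
    // Say "the collection contains this coordinate" instead of
    // "some element has this X and that Y."
    Assert.IsTrue(MovesFrom11.Contains(new BoardCoordinate(1, 2)));
}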

During the course of the series, this nagging readability hiccup has caused me to note and refer to a TODO about implementing some concept of equality. In the latest post, Sten asks in the comments, referring to my desire to implement equals, "isn't that unnecessary since structs that doesn't contain reference type members does a byte-by-byte in-memory comparison as default Equals implementation?" It's this question that I'd like to address in this post.

Not directly, mind you, because the assessment is absolutely spot on. According to MSDN:

If none of the fields of the current instance and obj are reference types, the Equals method performs a byte-by-byte comparison of the two objects in memory. Otherwise, it uses reflection to compare the corresponding fields of obj and this instance.

So, the actual answer to that question is simply, "yes," with nothing more to say about it. But I want to provide my answer to that question as it occurred to me off the cuff. I'm a TDD practitioner and a C# veteran, for context.

Answering Questions on Language Knowledge

My answer, when I read the question, was, "I don't remember what the default behavior of Equals is for value types -- I have to look that up." What surprised me wasn't my lack of knowledge on this subject (I don't find myself using value types very often), but rather my lack of any feeling that I should have known it. I mean, C# has been my main language for the last 4 years, and I've worked with it for more years than that besides. Surely, I just failed some hypothetical job interview somewhere, with a cabal of senior developers reviewing my quiz answers and saying, "for shame, he doesn't even know the default Equals behavior for value types." I'd be laughed off of Stack Overflow's C# section, to be certain.

And yet, I don't really care that I don't know that (of course, now I do know the answer, but you get what I'm saying). I find myself having an attitude of "I'll figure things out when I need to know them, and hopefully I'll remember them." Pursuing encyclopedic knowledge of a language's behavior doesn't much interest me, particularly since those goalposts may move, or I may wind up coding in an entirely different language next month. But there's something deeper going on here because I don't care now, but that wasn't always true -- I used to.

The Scientific Method

When I thought back on this, I realized that the drop-off in how much I valued this type of knowledge correlated with my adoption of TDD, and it became obvious to me why my attitude had changed. One of the more subtle value propositions of TDD is that it basically turns your programming into an exercise in the Scientific Method with extremely rapid feedback. Think of what TDD has you doing. You look at the code and think something along the lines of, "I want it to do X, but it doesn't -- why not?" You then write a test that fails. Next, you look at the code and hypothesize about what would make it pass. You then make that change (experimentation) and see if your test goes green (testing). Afterward, you conduct analysis (do other tests pass, do you want to refactor, etc.).


Now you're probably thinking (and correctly) that this isn't unique to TDD. I mean, if you write no unit tests ever, you still presumably write code for a while and then fire up the application to see if it's doing what you hypothesized that it would while writing it. Same thing, right?

Well, no, I'd argue. With TDD, the feedback loop is tight and the experiments are more controlled and, more importantly, isolated. When you fire up the GUI to check things out after 10 minutes of coding, you've doubtless economized by making a number of changes at once. When you see a test go green in TDD, you've made only one specific, focused change. The modify-and-verify-application-behavior method has too many simultaneous variables to be scientific in approach.

Okay, fine, but what does this have to do with whether or not I value encyclopedic language knowledge? That's a question with a slightly more nuanced answer. After years of programming according to this mini-scientific method, what's happened is that I've devalued anything but "proof is in the pudding" without even realizing it. In other words, I sort of think to myself, "none of us really knows the answer until there's a green test proving it to all of us." So, my proud answer to questions like, "wouldn't it work to use the default equals method for value types" has become, "dunno for certain, let's write a test and see."
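For instance, if I wanted to verify Sten's point for myself, I'd sooner write a quick, throwaway test like the following (a sketch, assuming the BoardCoordinate struct from earlier) than try to dredge the answer up from memory:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Default_Equals_Is_True_For_Identical_Coordinates()
{
    var first = new BoardCoordinate(1, 2);
    var second = new BoardCoordinate(1, 2);

    // No Equals override on BoardCoordinate -- whether this goes green tells me
    // exactly what the default value type comparison does.
    Assert.IsTrue(first.Equals(second));
}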

False Certainty

Why proud? Well, I'll tell you a brief story about a user group I attended a while back. The presenter was doing a demonstration on LINQ, closures, and deferred execution, and he made the presentation interactive. He'd show us methods that exposed subtle, lesser known behaviors of the language in this context, and the (well made) point was that these things were complex and trying to get the answers right was humbling.

It's generally knowledgeable people that attend user groups and often even more knowledgeable people that brave the crowd to go out on a limb and answer questions. So, pretty smart C# experts were shouting out their answers to "what will this method return" and they were getting it completely wrong because it was hard and it required too much knowledge of too many edge cases in too short a period of time. A friend of mine said something like, "man, I don't know -- slap a unit test on it and see." And... he's absolutely right, in my opinion. We're not language authors, much less compilers and runtimes, and thus the most expedient answer to the question comes not from applying amassed language knowledge but from experimentation.

Think now of the world of programming over the last 50 years. In times where compiles and executions were extremely costly or lengthy, you needed to be quite sure that you got everything right ahead of time. And doing so required careful analysis that could only be done well with a lot of knowledge. Without prodigious knowledge of the libraries and languages you were using, you would struggle mightily. But that's really no longer true. We're living in an age of abundant hardware power and lightning fast feedback where knowing where to get the answers quickly and accurately is more valuable than knowing them. It's like we've been given the math textbook with the answers in the back and the only thing that matters is coming up with the answers. Yeah, it's great that you're enough of a hotshot to get 95% of the answers right by hand, but guess what -- I can get 100% of them right and much, much faster than you can. And if the need to solve new problems arises, it's still entirely possible for me to work out a good way to do it by using the answer to deduce how the calculation process works.

Caveats

In the course of writing this, I can think of two valid objections/comments that people might have critiquing what I'm saying, so I'd like to address them. First of all, I'm not saying that you should write production unit tests to answer questions about how the framework/language works. Unit testing the libraries and languages that you use is an anti-pattern. I'm talking about writing tests to see how your code will behave as it uses the frameworks and languages. (Although, a written and then deleted unit test is a great, fast-feedback way to clarify language behavior to yourself.)

Secondly, I'm not devaluing knowledge of the language/framework, nor am I taking pride in my ignorance of it. I didn't know how the default Equals behavior worked for value types yesterday and today I do. That's an improvement. The reason it's an improvement is that the knowledge is now stored in a more responsive cache. I maintain that having the knowledge is trumped by knowing how to acquire it: reaching into my own personal memory stores is like hitting the CPU cache, writing a quick test to see is like going to main memory, and looking it up on the internet or asking a friend is like going to disk.

The more knowledge you have of the way the languages and frameworks you use work, the less time you'll have to sink into proving behaviors to yourself, so that's clearly a win. To continue the metaphor, what I'm saying is that there's no value or sense in going out preemptively and loading as much as you can from disk into the CPU cache so that you can show others that it's there. In our world, memory and disk lookups are just no longer expensive enough to make that desirable.


Visualization Mnemonics for Software Principles

Whether it’s because you want to be able to participate in software engineering discussions without having to surreptitiously look things up on your phone, or whether it’s because you have an interview coming up with a firm that wants you to be some kind of expert in OOP or something, you probably have at least some desire to be knowledgeable about development terms. This is probably doubly true of you, since ipso facto, you read blogs about software.

Toward that end, I’m writing this post. My goal is to provide you with a series of somewhat vivid ways to remember software concepts so that you’ll have a fighting chance at remembering what they’re about sometime later. I’m going to do this by telling a series of stories. So, I’ll get right to it.

Law of Demeter

Last week I was on a driving trip and I stopped by a gas station to get myself a Mountain Dew for the sake of road alertness. I grabbed the soda from the cooler and plopped it down on the counter, prompting the clerk to say, “that’ll be $1.95.” At this point, naturally, I removed my pants and the guy started screaming at me about police and indecent exposure. Confused, I said, “look, I’m just trying to pay you — I’ll hand you my pants and you go rummaging around in my pockets until you find my wallet, which you’ll take out and go looking through for cash. If I’m due change put it back into the wallet, unless it’s a coin, and then just put it in my pocket, and give me back the pants.” He pulled a shotgun out from behind the counter and told me that in his store, people obey the Law of Demeter or else.


So what does the Law of Demeter say? Well, anecdotally, it says “give collaborators exactly what they’re asking for and don’t give them something they’ll have to go picking through to get what they want.” There’s a reason we don’t hand the clerk our pants (or even our wallet) at the store and just hand them money instead; it’s inappropriate to send them hunting for the money. The Law of Demeter encourages you to think this way about your code. Don’t return Pants and force clients of your method to get what they want by invoking Pants.Pockets[1].Wallet.Money — just give them a Money. And, if you’re the clerk, don’t accept someone handing you a Pants and you going to look for the money — demand the money or show them your shotgun.
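If you want to see that in code, here’s a quick sketch with made-up Pants/Wallet/Money types (nothing from a real code base):

public class Money { public decimal Amount { get; set; } }
public class Wallet { public Money Money { get; set; } }
public class Pocket { public Wallet Wallet { get; set; } }
public class Pants { public Pocket[] Pockets { get; set; } }

public class Clerk
{
    // Violates the Law of Demeter: the clerk goes rummaging through your pants.
    public void Checkout(Pants pants)
    {
        Accept(pants.Pockets[1].Wallet.Money);
    }

    // Respects the Law of Demeter: demand exactly what you need.
    public void Checkout(Money payment)
    {
        Accept(payment);
    }

    private void Accept(Money payment) { /* register the payment */ }
}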

Single Responsibility Principle

My girlfriend and I recently bought an investment property a couple of hours away. It’s a little house on a lake that was built in the 1950’s and, while cozy and pleasant, it doesn’t have all of the modern amenities that I might want, resulting in a series of home improvement projects to re-tile floors, build some things out and knock some things down. That kind of stuff.

One such project was installing a garbage disposal, which has two components: plumbing and electrical. The plumbing part is pretty straightforward in that you just need to remove the existing drain pipe and insert the disposal between the drain and the drainage pipe. The electrical is a little more interesting in that you need to run wiring from a switch to the disposal so that you can turn it on and off. Now, naturally, I didn’t want to go to all the hubbub of creating a whole different switch, so I decided just to use one that was already there. The front patio light switch had the responsibility for turning the front patio light on and off, but I added a little to its burden, asking it also to control the garbage disposal.

That’s worked pretty well. So far the only mishap occurred when I was rinsing off some dishes and dropped a spoon in the drain while, at the same time, my girlfriend turned the front light on for visitors we were expecting. Luckily, I had only a minor scrape and a mangled spoon, and that’s a small price to pay to avoid creating a whole new light switch. And really, what’s the worst that could happen?

Well, I think you know the worst thing that could happen is that someone loses a hand due to this absurd design. That’s what happens when you run afoul of the Single Responsibility Principle, which could loosely be described as saying “do one thing only and do that thing well” or “have only one reason to change.” In my house, we have two reasons to change the state of the switch: turning on the disposal and turning on the light, and this creates an obvious problem. The parallel situation in code is true as well. If you have a class that needs to be changed whenever schema updates occur and whenever GUI changes occur, then you have a class that serves two masters, and changes to one thing can affect the other. Disk space is cheap and classes/namespaces/modules are renewable resources. When in doubt, create another one.
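In code, the light-switch-and-disposal arrangement looks something like this -- hypothetical class names, sketched to show the “two masters” problem and the split that fixes it:

public class Customer { }

// Smell: this class changes when the schema changes AND when the GUI changes.
public class CustomerModule
{
    public void SaveToDatabase(Customer customer) { /* schema concerns */ }
    public string RenderSummaryHtml(Customer customer) { return "<div>...</div>"; }
}

// One reason to change apiece.
public class CustomerRepository
{
    public void Save(Customer customer) { /* schema concerns only */ }
}

public class CustomerSummaryView
{
    public string Render(Customer customer) { return "<div>...</div>"; }
}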

Open/Closed Principle

I don’t have a ton of time for TV these days, and that’s mainly because TV is so time consuming. It was a lot simpler when I just had a TV that got an analog signal over the air. But then, things went digital, so I had to take apart my TV and rewire it to handle digital signals. Next, we got cable and, of course, there I am, disassembling the TV again so that we can wire it up to get a cable signal. The worst part of that was that when I became furious with the cable provider and we switched to Dish, I was right back to work on the TV. Now, we have a Nintendo Wii, a DVD player, and a Roku, but who has the time to take the television apart and rewire it to handle each of these additional items? And if that weren’t bad enough, I tried hooking up an old school Sega Genesis last year, and my Dish stopped working.

… said no one, ever. And the reason no one has ever said this is that televisions that you purchase follow the Open/Closed Principle, which basically says that you should create components that are closed to modification, but open for extension. Televisions you purchased aren’t made to be disassembled by you, and certainly no one expects you to hack into the guts of the TV just to plug some device into it. That’s what the Coax/RCA/Component/HDMI/etc feeds are for. With the inputs and the sealed-under-warranty case, your television is open for extension, but closed for modification. You can extend its functionality by plugging anything you like into it, including things not even out yet, like an X-Box 12 or something. Follow this same concept for flexible code. When you write code, you strive to maximize flexibility by facilitating maintenance via extension and new code. If you program to interfaces or allow overriding of behavior via inheritance, life is a lot easier when it comes time to change functionality. So favor that over writing some juggernaut class that you have to go in and modify literally every sprint. That’s icky, and you’ll learn to hate that class and the design around it the same way you’d hate the television I just described.
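Here’s a rough translation of the television metaphor into code. The types are hypothetical, but the shape is what matters: a stable input abstraction that new devices plug into without anyone cracking the TV open.

public class VideoSignal { }

// The stable "input jack" abstraction.
public interface ITelevisionInput
{
    VideoSignal GetSignal();
}

public class Television
{
    // Closed for modification: this method doesn't change when new devices appear.
    public void Display(ITelevisionInput input)
    {
        var signal = input.GetSignal();
        // render the signal...
    }
}

// Open for extension: new devices just implement the interface.
public class Roku : ITelevisionInput
{
    public VideoSignal GetSignal() { return new VideoSignal(); }
}

public class SegaGenesis : ITelevisionInput
{
    public VideoSignal GetSignal() { return new VideoSignal(); }
}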

Liskov Substitution Principle

I’m someone that usually eats a pretty unremarkable salad with dinner. You know, standard stuff: lettuce, tomatoes, croutons, scallions, carrots, and hemlock. One thing that I seem to do differently than most, however, is that I examine each individual item in the salad to see whether or not it will kill me before I put it into my mouth (a lot of other salad consumers seem to play pretty fast and loose with their lives, sheesh). I have a pretty simple algorithm for this. If the item is not hemlock, I eat it. If it is hemlock, I put it onto my plate to throw out later. I highly recommend eating your hemlock salad this way.

Or, you could bear in mind the Liskov Substitution Principle, which basically says that if you’re going to have an inheritance relationship, then derived types should be seamlessly swappable for their base type. So, if I have a salad full of Edibles, I shouldn’t have some derived type, Hemlock, that doesn’t behave the way other Edibles do. Another way to think of this is that if you have a heterogeneous collection of things in an inheritance hierarchy, you shouldn’t go through them one by one and say, “let’s see which type this is and treat it specially.” So, obey the LSP and don’t make hemlock salads for people. You’ll have cleaner code and avoid jail.
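If you want to see the hemlock salad in code, a sketch might look like this (hypothetical Edible types, obviously not from any real project):

public abstract class Edible
{
    public abstract void Eat();
}

public class Lettuce : Edible
{
    public override void Eat() { /* chew, swallow, enjoy */ }
}

// LSP violation: a derived type that can't stand in for its base.
public class Hemlock : Edible
{
    public override void Eat()
    {
        throw new InvalidOperationException("This will kill you.");
    }
}

public class SaladEater
{
    // Consumers shouldn't need this type-checking ritual, but Hemlock forces it.
    public void EatSalad(IEnumerable<Edible> salad)
    {
        foreach (var item in salad)
        {
            if (item is Hemlock) // the special-casing LSP tells you to avoid
                continue;
            item.Eat();
        }
    }
}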

Interface Segregation Principle

Thank goodness for web page caching — it’s a life saver. Whenever I go to my favorite dictionary site, expertbeginnerdictionary.com (not a real site if you were thinking of trying it), it prompts me for a word to lookup and, when I type in the word and hit enter, it sends me the dictionary over HTTP, at which time I can search the page text with Ctrl-F to find my word. It takes such a long time for my browser to load the entire English dictionary that I’d be really up a creek without page caching. The only trouble is, whenever a word changes and the cache is invalidated, my next lookup takes forever while the browser re-downloads the dictionary. If only there were a better way…

… and there is. Don’t give me the entire dictionary when I want to look up a word. Just give me that word. If I want to know what “zebra” means, I don’t care what “aardvark” means, and my zebra lookup experience shouldn’t be affected and put at risk by changes to “aardvark.” I should only be depending on the words and definitions that I actually use, rather than the entire dictionary. Likewise, if you’re defining public interfaces in your code for clients, break them into minimum composable segments and let your clients assemble them as needed, rather than forcing the kitchen sink (or dictionary) on them.  The Interface Segregation Principle says that clients of an interface shouldn’t be forced to depend on methods that they don’t use because of the excess, pointless baggage that comes along.  Give clients the minimum that they need.
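Translated into code, the dictionary site might look something like this -- hypothetical interfaces, sketched to show the kitchen-sink version next to the segregated one:

// Kitchen sink: every client depends on every method, used or not.
public interface IDictionarySite
{
    string LookUpDefinition(string word);
    void AddWord(string word, string definition);
    void RemoveWord(string word);
    IEnumerable<string> DownloadEntireDictionary();
}

// Segregated: a client that only looks up words depends only on this.
public interface IWordLookup
{
    string LookUpDefinition(string word);
}

public interface IDictionaryMaintenance
{
    void AddWord(string word, string definition);
    void RemoveWord(string word);
}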

Dependency Inversion Principle

Have you ever been to an automobile factory?  It’s amazing to watch how these things are made.  They start with a car, and the car assembles its own engine, seats, steering wheel, etc.  It’s pretty amazing to watch.  And, for a real treat, you can watch these sub-parts assemble their own internals.  The engine builds its own alternator, battery, transmission, etc — a breathtaking feat of engineering.  Of course, there’s a downside to everything, and, as cool as this is, it can be frustrating that the people in the factory have no control over what kind of engine the car builds for itself.  All they can do is say, “I want a car” and the car does the rest.

I bet you can picture the code base I’m describing. A long time ago, I went into detail about this piece of imagery, but I’ll summarize by saying that this is “command and control” programming where constructors of objects instantiate all of the object’s dependencies — FooService instantiates its own logger. This runs afoul of the Dependency Inversion Principle, which holds that high level modules, like Car, should not depend directly on lower level modules, like Engine, but rather that both should depend on an abstraction of the Car-Engine interaction. This allows the car and the engine to vary independently, meaning that our automobile factory workers actually could have control over which engines go in which cars. And, as described in the linked post, a code base making heavy use of the Dependency Inversion Principle tends to be composable whereas a command and control style code base is not, favoring instead the “car, build thyself” approach. So to remember and understand the Dependency Inversion Principle, ask yourself who should control what parts go in your car — the people building the car, or the car itself? Only one of those ideas is preposterous.
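For the code version of the factory, a sketch might look like the following (hypothetical Car/IEngine types); the point is that the caller, not the Car, decides which engine goes in:

// The abstraction that both the high-level Car and the low-level engines depend on.
public interface IEngine
{
    void Start();
}

public class V6Engine : IEngine
{
    public void Start() { /* turn over the V6 */ }
}

public class ElectricMotor : IEngine
{
    public void Start() { /* spin up the motor */ }
}

public class Car
{
    private readonly IEngine _engine;

    // The factory worker (the caller), not the car, picks the engine.
    public Car(IEngine engine)
    {
        _engine = engine;
    }

    public void Drive()
    {
        _engine.Start();
        // and away we go...
    }
}

Composition then happens at the edges: something like new Car(new ElectricMotor()) at the factory, or in your IoC container registration, rather than deep inside Car’s constructor.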



Starting to Unit Test: Not as Hard as You’d Think

I happened to read a post by Dror Helper the other day, in which he said:

I believe TDD and “unit tests” has been done a great injustice by not giving it a cooler name – preferable one that doesn’t have the word “test” in it – because it’s a PR disaster!

Wow, that had never occurred to me, and yet he’s spot on. “Unit test” is a wretched name for anything. It seems somehow to combine all the worst elements of propagating uncertainty in arithmetic with lab measurements and double checking all of your answers on a scantron exam. I just bored myself half to death typing that sentence, so it’s little wonder that the concept of a “unit test” is often met with a visceral “ugh.” I mean, we could at the least call the test suite a “verification checklist” or something, implying that you’re marking progress as you complete things. It’s not exactly a skydiving trip, but it has to beat “unit test.” But I digress.

The lack of appeal of the name, the feeling of already being pressed for time without taking something else on, and the natural resistance to the unknown all create barriers to entry when it comes to unit testing. In order to get started yourself, or especially to convince those around you to do the same, it’s necessary to overcome those barriers. I have a good bit of experience with this in a variety of environments and from a variety of roles. Over the last year, I did a string of blog posts that were essentially my talking scripts for a series of PowerPoint/demo talks I gave. These were meant to be an introduction to unit testing.

The series actually became pretty long, and as I was finishing it, I decided to put it into E-Book format. So, with the help of my editor, we turned it into a fluid book, and with the help of my publisher, we published it in all major E-Book formats for a cost of $4.99. Here is the book on the publisher’s site, and here is a link directly to Amazon for Kindle readers (full disclosure: I’m experimenting with affiliate links, and that’s why I’m specifically linking Amazon directly).

The title of the book is “Starting to Unit Test: Not as Hard as You Think,” and I feel that the title really captures it. My goal was to trade practice purity for reducing the barriers to entry. In other words, the message was, “don’t feel like you have to start full bore with 100% coverage and TDD, and that anything less is a failure -- if you just introduce a few tests into a few places in your code, consider that a win.” I also did something else that I haven’t seen others do as much, which is to explain that some types of frameworks and code present unit testing nightmares, and that newbie unit testers should avoid them until they reach a higher belt. What I’d like to see people take away from this book is a feeling of satisfaction from experiencing a sequence of small but real wins.

For you mythology buffs, this is Sisyphus actually making it to the top of the hill.

By and large, you could get this content by reading through my blog posts on the subject, but if you want it in a compact format, here it is. Not to mention, if you’re trying to sell your team on the merits of unit testing, a book is probably going to have more cachet than a series of blog posts, and the one link to the book is going to be easier to distribute than the giant collection of links for the individual posts/chapters. So get it if you’re interested, and encourage your team to get it if you’re trying to introduce the concept, especially since I close out the book by making a business case for unit testing (making the business case for best practices is actually going to be a theme of mine in the future).

Enjoy, and thanks for reading (the blog and, hopefully, the book)!


Encapsulation vs Inversion of Control

This is a post that I’ve had in my drafts folder for nearly two years. Well, I should say I’ve had the title and a few haphazard notes in my drafts folder for nearly two years; I’m writing the actual post right now. The reason I’ve had it sitting around for so long is twofold: (1) it’s sort of a subjective, tricky topic and (2) I’ve struggled to find a concrete stand to take. But, I think it’s relatively important, so I’ve always vowed to circle back to it and, well, here we are.

The draft was first conceived when I’d given a presentation about testability and inversion of control — specifically, using dependency injection via constructors and setters to achieve these ends. I talked, among other things, about the Open/Closed Principle and how it allows modifications to system behavior that favor adding code over editing it. The idea here is that we can achieve new functionality with a minimum of violence to the code base and in a way for which it is easy to write unit tests. Everyone wins, right?

Well, not everyone was drinking the Kool-Aid. I fielded a question about encapsulation that, at the time, I hadn’t prepared to answer. “Doesn’t this completely violate encapsulation?” I was a little poleaxed, and sputtered out an answer off the cuff, saying basically, “well, not completely….” I mean, if you’ve written a class that takes ILogger in its constructor and uses it for logging, I control the logger implementation but you control when and how it is used. So, you encapsulate the logger’s usage but not its implementation and this stands in contrast to what would happen if you instantiated your own logger or implemented it yourself — you would encapsulate everything and nothing would be up to me as a client of your code. Certainly, you have more encapsulation. So, I finished: “…not completely…. but who cares?!?” And that was the end of the discussion since we were out of time anyway.
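In code, the distinction I was groping for looks something like this. The types (ILogger, FileLogger, OrderProcessor, Order) are hypothetical, sketched only to contrast the two stances:

public interface ILogger { void Log(string message); }
public class FileLogger : ILogger { public void Log(string message) { /* write to disk */ } }
public class Order { public int Id { get; set; } }

// Inversion of control: the client picks the ILogger implementation,
// but this class still encapsulates when and how it gets used.
public class OrderProcessor
{
    private readonly ILogger _logger;

    public OrderProcessor(ILogger logger)
    {
        _logger = logger;
    }

    public void Process(Order order)
    {
        _logger.Log("Processing order " + order.Id);
        // ...
    }
}

// Full encapsulation: the author controls everything; nothing is up to the client.
public class SelfContainedOrderProcessor
{
    private readonly ILogger _logger = new FileLogger();

    public void Process(Order order)
    {
        _logger.Log("Processing order " + order.Id);
        // ...
    }
}

The first version gives up some control in exchange for letting the client swap in something else later; the second keeps total control and, with it, total responsibility.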


I was never really satisfied with that answer but, as Creedence Clearwater says, “time and tears went by, and I collected dust.” When I thought back to that conversation, I would think to myself that encapsulation was passe in the same way that deep inheritance hierarchies were passe. I mean, sure, I learned that encapsulation was one of the four cornerstone principles of OOP, but so is inheritance, and that’s kind of going away with “favor composition over inheritance.” So, hey, “favor dependency injection over encapsulation.” Right? Still, I didn’t find this entirely satisfying — just good enough not to really occupy much of a place in my mind.

But then I remembered a bit of a brouhaha last year over a Stack Overflow question. The question itself wasn’t especially remarkable (and was relatively quickly closed), but compiler author and programming legend Eric Lippert dropped by to say “DI is basically a bad idea.” To elaborate, he said:

There is no killer argument for DI because DI is basically a bad idea. The idea of DI is that you take what ought to be implementation details of a class and then allow the user of the class to determine those implementation details. This means that the author of the class no longer has control over the correctness or performance or reliability of the class; that control is put into the hands of the caller, who does not know enough about the internal implementation details of the class to make a good choice.

I was floored. Here we have one of the key authors of the C# compiler saying that the “D” in the SOLID principles was a “bad idea.” I would have dismissed it as blasphemy if (1) I were the sort to adopt approaches based on dogma and (2) he hadn’t helped author at least 4 more compilers than I have. And, while I didn’t suddenly rip the IoC containers out of my projects and instantiate everything inside constructors, I did revisit this topic in terms of my thoughts.

Maybe encapsulation, in the information hiding sense, isn’t so passe. And maybe DI isn’t a magic bullet. But why not? What’s wrong with the author of a class ceding control over some aspects of its behavior by allowing collaboration? And, isn’t any method parameter technically a form of DI, if we’re going to be pedantic about it?

The more I thought about it, the more I started to see competing and interesting use cases. Or, I should say, the more I started to think what class authors are telling their collaborators by using each of these techniques:

Encapsulation: “Don’t worry — I got this.”
Dependency Injection: “Don’t worry — if this doesn’t work, you can always change it.”

So, let’s say that you’re writing a Mars Rover or maybe a compiler or something. The attitude that you’re going to bring to that project is one in which correctness, performance and reliability are all incredibly important because you have to get it right and there’s little room for error. As such, you’re likely going to adopt an implementation preference of “I’m going to make absolutely sure that nothing can go wrong with my code.”

But let’s say you’re writing a line of business app for Initrode Inc and the main project stakeholder is fickle, scatterbrained, and indecisive. Then you’re going to have an attitude in which ease and rapidity of system changes is incredibly important because you have to change it fast. As such, you’re likely to adopt an implementation preference of “I’m going to make absolutely sure that changing this without blowing everything up is easy.”

There’s bound to be somewhat of an inverse relationship between flexibility and correctness. As a classic example, a common criticism of Apple’s “walled garden” approach was that it was so rigid, while a common praise of the same was how well it worked. So I guess my take-away from this is that Dependency Injection and, more broadly, Inversion of Control, is not automatically desirable, but I also don’t think I can get to Eric’s take that it’s “basically a bad idea,” either. It’s simply an exchange of “more likely to be correct now” for “more likely to be correct later.” And in the generally agile world in which I live and with the kind of applications that I write, “later” tends to give more value.

Uncle Bob Martin once said, I believe in his Clean Coders video series, that (paraphrased) the second most important characteristic of good software is that it meet the customer requirements. The most important characteristic is that it be easy to change. Reason being, if a system is correct today but rigid, it will be wrong tomorrow when the customer wants changes. If the system is wrong today but flexible, it’s easy to make it right tomorrow. It may not be perfect, but I like DI because I need to be right tomorrow.


Implied Acceptance Criteria

I’m on a Scrum team these days, serving as Product Owner, and I was watching developers do functionality demos. This has generally been of the form of them walking me through the happy path, and then me taking it for a test drive and trying to put myself in the shoes of a user, ‘cleverly’ typing “twenty five” into a text box that clearly wants an integer to see what happens.

Yellow screen of death? Generic error message? Scornful validation message such as “dude, what’s wrong with you?” Helpful validation message such as “your response for how many children you have cannot be a negative number, a decimal, or typeset nonsense?”

Universal Acceptance Criteria?

If what happens isn’t what I think should happen, it’s then off to check out the acceptance criteria/tests. Did it say anything in there about a validation message? Should it have to? Are there things that ought to be “universal acceptance criteria?”

To me, the answer to that question is, “sort of, but I’d prefer to think of them more as default/implied ACs.” And so I started to enumerate them on an internal wiki, kind of as an exercise for myself to see if there were certain things that I think should always be true. Maybe these are like the equivalent of code contracts, but contracts between the developer and the user. Here are some ideas that I came up with:

  • When user supplies invalid input, a message should be shown explaining why the input was not accepted.
  • The way to get back to where you just were should always be available and obvious (e.g. “cancel” button or “back” link or something).
  • Nothing that happens should ever result in a “yellow screen” or whatever equivalent indicates to the user that whatever is going wrong is something you never dreamed of (for production release, anyway — I have no issues with crashes like this in internal or beta tests as part of a “fail early” strategy)
  • If an exception occurs that the user would understand, it is explained to the user (e.g. “The connection to the database was lost.”)
  • There is never a point during the normal course of usage where the user thinks, “should I just, like, wait, or is it frozen?”
  • If it’s possible to prevent the user from doing the wrong thing, the user is prevented from doing the wrong thing (e.g. if the user clicks “save” and that’s a long-running operation, the “save” button is disabled until it’s okay to click again; see the sketch after this list)
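That last bullet is usually cheap to honor. Here’s a hedged, WinForms-flavored sketch of the enable/disable discipline I have in mind -- OrderEntryForm and OrderService are hypothetical names, and the two-second delay just stands in for a real save:

public class OrderEntryForm : Form
{
    private readonly Button _saveButton = new Button { Text = "Save" };
    private readonly OrderService _orderService = new OrderService();

    public OrderEntryForm()
    {
        Controls.Add(_saveButton);
        _saveButton.Click += async (sender, e) =>
        {
            _saveButton.Enabled = false;    // prevent double-submission mid-save
            try
            {
                await _orderService.SaveAsync();
            }
            finally
            {
                _saveButton.Enabled = true; // okay to click again
            }
        };
    }
}

public class OrderService
{
    public Task SaveAsync() { return Task.Delay(2000); } // stand-in for the real save
}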

Explaining Implied/Default Acceptance Criteria

These are some things that I think ought to be true of your application’s behavior unless there is a good reason to deviate. These are things that I generally think of when I’m working, even if nothing is spelled out explicitly. I think of these and I think of more, depending on the application context (e.g. is this something that should be permission controlled or is this something where the verbiage might be changed later?)

Over the years, I’ve developed a pretty lengthy mental checklist for what I consider good application experience, even without consciously realizing that I’ve done so for the most part. But I think it’s important for us to try to grow this evoked set in our minds as we go.

So what about you? Do you have anything you’d add to this list — any glaring omissions? I’m actually looking to tabulate some of these things into a working document and perhaps expand it to cover other kinds of minimal checklists for various scenarios (such as testing your own code via running the application, finding and fixing a bug, etc). Perhaps when I get it more fleshed out at some point, I’ll revisit and post again, but either way, I’d be interested to hear what you consider to be table stakes/minimum standards for how your applications interact with users.
