DaedTech

Stories about Software

A Rider to the Law of Demeter

In case you were wondering who is responsible for the bounty provided by harvests each year, the answer is the goddess Demeter. In the age of global transport, harvests have stabilized somewhat, but that wasn’t always the case. There was a time when Hades, the God of the Underworld, captured Demeter’s daughter, Persephone, and held her prisoner. A desperate Demeter responded to this calamity as any parent would, by quitting her job and committing herself to a rescue effort. The trouble for the world was that Demeter’s absence from her post led to widespread famine, prompting Zeus to intervene and broker some sort of compromise. And so, it stands to reason that a principle of software development discouraging the use of statements like Hotel.Rooms[123].Bathroom.Sink was named for her.

Demeter, of Law of Demeter fame
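
For the unfamiliar, the Law of Demeter discourages “train wreck” expressions that reach through the internals of several objects in a row; instead, you ask your immediate collaborator for what you need. Here is a minimal sketch of the contrast, using hypothetical members that aren’t from this post:

// Train wreck: the caller has to know about rooms, bathrooms, and sinks.
Hotel.Rooms[123].Bathroom.Sink.TurnOnFaucet();

// Demeter-friendlier: tell the hotel what you want and let it delegate internally.
Hotel.TurnOnFaucetInRoom(123);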

What Is a Best Practice in Software Development?

A while ago, I released a course on Pluralsight entitled “Making the Business Case for Best Practices.” There was an element of tongue-in-cheek to the title, which might not have been the best idea in a medium where my profitability is tied to maximizing the title’s attractiveness. But, life is more fun if you’re smiling.

Anyway, the reason it was a bit tongue-in-cheek is that I find the term “best practice” to be spurious in many contexts. At best, it’s vague, subjective, and highly context-dependent. The aim of the course was, essentially, to say, “hey, if you think that your team should be adopting practice X, you’d better figure out how to make a dollars-and-cents case for it to management — ‘best’ practices are the ones that are profitable.” So, I thought I’d offer up a transcript from the introductory module of the course, in which I go into more detail about this term. The first module, in fact, is called “What is a ‘Best Practice’ Anyway?”

Best Practice: The Ideal, Real and Cynical

The first definition that I’ll offer for “best practice” is what one might describe as the “official” version, but I’ll refer to it as the “ideal” version. Wikipedia defines it as “a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark.” In other words, a “best practice” is a practice that has somehow been empirically proven to be the best. As an example, if there were three possible ways to prepare chicken (serve it raw, serve it rare, or serve it fully cooked), fully cooked would emerge as the best practice, as measured by incidence of death and illness. The reason that I call this definition “ideal” is that it implies that there is clearly a single best way to do something, and real life is rarely that neat. Take the chicken example. Cooked is better than undercooked, but there is no shortage of ways to fully cook a chicken – you can grill it, broil it, bake it, fry it, etc. Is one of these somehow empirically “best,” or does it become a matter of preference and opinion?

What Story Does Your Code Tell?

I’ve found that as the timeline of my life becomes longer, my capacity for surprise at my situation diminishes. And so my recent combination of types of work and engagements, rather than seeming strange to me in any way, is simply ammo for genuineness when I offer up the cliche, “variety is the spice of life.” Of late, I’ve been reviewing a lot of code in a coaching capacity, as well as creating and giving workshops on storytelling and creative writing. And given how much practice I’ve had over the last several years at multi-purposing my work, I’m quite vigilant for opportunities to merge storytelling and software advice. This post is one such opportunity, if a small one.

A little under a year ago, I offered up a post in which I suggested some visualization mnemonics to help make important software design principles more memorable. It was a relatively popular post, so I assume that people found it helpful. And the reason, I believe, that people found it helpful is that stories engage your brain far more than simple conveyance of information. When you read a white paper explaining the Law of Demeter, the part of your brain that processes natural language activates and decodes the words. But when I tell you a story about a customer in a convenience store removing his pants to pay for a soda, your brain processes this text as if it were experiencing the event. Stories really engage the brain.

One of the most difficult aspects of writing code is to find ways to build abstraction and make your code readable so that others (or you, months later) can read the code as easily as prose. The idea is that code is read far more often than written or modified, so readability is important. But it isn’t just that the code should be readable — it should be understandable and, in some way, even memorable. Usually, understandability is achieved through simplicity and crisp, clear abstractions. Memorability, if achieved at all, is usually created via the Principle of Least Surprise. It’s a cheat — your code is memorable not because it captivates the reader, but because the reader knows that mapping what she’s used to will probably work. (Of course, I recognize that atrocious code will be memorable in the vivid, conversational sense, but I’m talking about it being memorable in terms of its function and exact behavior.)

It’s therefore worth asking what story your code is telling. Look at this code. What story is it telling?

if(person != null && person.Pants != null && person.Pants.Pockets.Length >= 2)
{
    var pocket = person.Pants.Pockets[1];
    if(pocket != null && pocket.Wallet != null)
    {
        pocket.Wallet.Remove(5);
        pocket.Wallet.Add(0.12);
    }
    person.Leave();
}
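
For contrast, here is a minimal sketch (a hypothetical API, not something from this post) of how the same scene might read if the person managed his own pants, pockets, and wallet:

// The caller now just tells the story: the customer pays five dollars,
// takes twelve cents in change, and leaves.
if (person != null)
{
    person.Pay(5.00m, changeReceived: 0.12m);
    person.Leave();
}

Same behavior, but the story reads at the level of the customer rather than the contents of his pockets.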

Cutting Down on Code Telepathy

Let’s say that you have some public facing method as part of an API:

public void Process(CustomerOrder order)
{
    //Whatever
}

CustomerOrder is something that you don’t control but that you do have to use. Life is good, but then let’s say that a requirement comes in saying that orders can now be post-dated, so you need to modify your API somewhat, to something like this:

public void Process(CustomerOrder order, DateTime orderDate)
{
    if(orderDate < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "orderDate");

    //Whatever
}

Great, but that was really painful because you learn that publishing changes to your public API is a real hassle for both yourself and for your users. After a lot of elbow grease and grumbling at the breaking change, though, things are stable once again. At least until a stakeholder with a lot of clout comes along and demands that it be possible to process orders through that method while noting that the order is actually a gift. You kick and scream, but to no avail. It has to go out and it has to hurt, and you're powerless to stop it. Grumbling, you write the following code, trying at least to sneak it in as a non-breaking change:

public void Process(CustomerOrder order, DateTime orderDate, bool isGift = false)
{
    if (orderDate < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "orderDate");

    //Whatever
}

But then you start reading and realize that life isn't that simple and that you're probably going to break your clients anyway. (Optional parameters in C# are only source-compatible: the defaults are resolved at the call site, so clients compiled against the old two-argument signature will fail at runtime until they recompile.) Fed up, you decide that you're going to prevent yourself ever from being bothered by this again. You'll write the API that stands the test of time:

public void Process(CustomerOrder order, Dictionary<string, object> options)
{
    if(((DateTime)options["orderDate"]) < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "options");

    //Whatever
}

Now, this can never be wrong. CustomerOrder can't be touched, and the options dictionary can support any extensions that are requested of you from here forward. If changes need to be made, you can make them internally without publishing painful changes to the API. You have, fortunately, separated your concerns enough that you can simply deploy a new DLL that handles order processing, and any new values supplied by your clients can be handled. No more API changes -- just a quick update, some testing, and a Word document sent to your client explaining how to use the thing. Here's the first one:

public class ProcessingClient
{
    private OrderProcessor _orderProcessor = new OrderProcessor();
    public void SubmitAnOrder(CustomerOrder order)
    {
        var options = new Dictionary<string, object>();
        options["orderDate"] = DateTime.Now;
        options["isGift"] = true;
        _orderProcessor.Process(order, options);
    }
}

There. A flexible API, and the whole "is gift" thing neatly handled. If they specify that it's a gift, you handle that. If they specify that it isn't, or just omit the option altogether, you treat both as the default case. Important stakeholder satisfied, and you won't be bothered with nasty publications. So, all good, right?

Flexibility, but at what cost?

I'm guessing that, at a visceral level, your reaction to this sequence of events is probably to cringe a little, even if you're not sure why. Maybe it's the clunky use of a collection type instead of something slicker. Maybe it's the (original) passing of a Boolean to the method. Perhaps it's simply to demand to know why CustomerOrder is inviolate or why we couldn't work to an order interface or at least define an inheritor. Maybe "options" reminds you of ViewState.

But, whatever it is, doesn't defining a system boundary that doesn't need to change seem like a worthwhile goal? Doesn't it make sense to etch painful boundaries in stone so that all parties can rely on them without painful integration? And if you're going to go that route, doesn't it make sense to build in as much flexibility as possible so that all parties can continue to innovate?

Well, that brings me to the thing that makes me wince about this approach. I'm not a fan of shying away from the pain of "icky publish/integration" instead of going with "if it hurts, do it more and get better at it." That shying away doesn't make me wince in and of itself, but it is the wrong turn at a fork in the road, and it leads to what does make me wince: the irony of this 'flexible' approach. The idea in doing it this way is essentially to say, "okay, publishing sucks, so let's lock down the integration point so that all parties can work independently, but let's also make sure that we're future-proof so we can add functionality later." Or, tl;dr, "minimize multi-party integration coordination with a hyper-flexible API."

So where's the irony? Well, how about the fact that any new runtime-bound additions to "options" require an insane amount of coordination between the parties? You're now more coupled than ever! For instance, let's say that we want to add a "gift wrap" option. How does that go? Well, first I would have to implement the functionality in the code. Then, I'd have to test and deploy my changes to the server, but that's only the beginning. From there, I need to inform you what magic string to use, and probably to publish a Word document with an example, since it's easy to get this wrong. Then, once you have that document, I have to go through my logs and troubleshoot to discover that, "oh yeah, see that -- you're passing us 'shouldGiftwrap' when it should really be 'shouldGiftWrap' with a capital W." And if I ever change it, by accident or on purpose? You'll keep compiling and running, and everything will be normal except that, from your perspective, gift wrapping will just quietly stop working. How much pain have we saved in the end with this non-discoverable, counter-intuitive, but 'flexible' setup? Wouldn't it be better not to get cute and just make publishing a more routine, friction-free experience?
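
To make that failure mode concrete, here's a minimal sketch (hypothetical server-side code, assuming the Dictionary<string, object> signature from above) of why the misspelled key fails silently:

public void Process(CustomerOrder order, Dictionary<string, object> options)
{
    // The client sent "shouldGiftwrap" (lowercase w), so this lookup finds
    // nothing. No exception, no compiler error -- gift wrapping just
    // quietly never happens.
    object shouldGiftWrap;
    if (options.TryGetValue("shouldGiftWrap", out shouldGiftWrap) && (bool)shouldGiftWrap)
        GiftWrap(order); // hypothetical internal helper
}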

The take-away that I'd offer here is to consider something about your code and your software that you may not have considered previously. It's relatively easy to check your code for simple defects and even to write it in such a way as to minimize things like duplication and code churn. We're also good at figuring out how to avoid repeating ourselves and how to simplify. Those are all good practices. But the new thing I'd ask you to consider is, "how much out-of-band knowledge does this require between parties?"

It could be a simple scenario like this, with a public facing API. Or, maybe it's an internal integration point between your team and another team. But maybe it's even just the interaction surface between two modules, or even classes, within your code base. Do both parties need to understand something that's not part of the method signatures and general interaction between these entities? Are you passing around magic numbers? Are you relying on the same implicit assumptions in both places? Are there things you're communicating through a means other than the actual interactions, or else just not communicating at all? If so, I suggest you do a mental exercise to ask yourself what would be required to eliminate that out-of-band communication. Otherwise, today's clever ideas become tomorrow's maintenance nightmares.
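
As one possible outcome of that exercise for this post's example, here's a hedged sketch (ProcessingOptions is a hypothetical type, not part of the original API) of folding the options into the published, strongly typed contract, so that a misspelled option becomes a compiler error instead of a silent no-op:

// Adding a property is still a published change, but the compiler now
// enforces the contract, and no Word document of magic strings is needed.
public class ProcessingOptions
{
    public DateTime OrderDate { get; set; }
    public bool IsGift { get; set; }
    public bool ShouldGiftWrap { get; set; }
}

public void Process(CustomerOrder order, ProcessingOptions options)
{
    if (options.OrderDate < DateTime.Now)
        throw new ArgumentException("Order date cannot be in the past.", "options");

    //Whatever
}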

Agile Methodologies or Agile Software?

Over the last couple of months, I’ve been doing mostly management-y things, so I haven’t had a lot of tradecraft-driven motivation to pick Pluralsight videos to watch while jogging. In other words, I’m not coming up to speed on any language, framework, or methodology, so I’m engaging in undirected learning and observation. (I’m also shamelessly scouring other authors’ courses for ways to make my own courses better.) That led me to watch this course about Agile fundamentals.

As I was watching and jogging, I started thinking about the Agile Manifesto and the 14 years that have passed since its conception. “Agile” is undeniably here to stay and is probably approaching “industry standard.” It’s become so commonplace, in fact, that it is an industry unto itself, containing training courses, conferences, seminars, certifications — the works. And this cottage industry around “getting Agile” has sparked a good bit of consternation and, frequently, derision. We as an industry, critics might say, got so good at planning poker and daily standups that we forgot about the relatively minor detail of making software. Martin Fowler coined the term “flaccid Scrum” to describe this phenomenon, wherein a team follows all of the mechanics of some Agile methodology (presumably Scrum) to the letter and still produces crap.

It’s no mystery how something like this could happen. You’ve probably seen it. The most common culprit is some “Waterfall” shop that decides it wants to “get Agile.” So the solution is to go out and get the coaches, the certifiers, the process experts, and the whole crew to teach everyone how to do all of the ceremonies. A few weeks or months, some hands-on training, some seminars, and now the place is Agile. But, what actually happens is that they just do the same basic thing they’d been doing, more or less, but with an artificially reduced cycle time. Instead of shipping software every other year with a painful integration period, they now ship software quarterly, with the same painful integration period. They’re just doing waterfall at an eighth of the previous scale. But with daily standups and retrospectives.

There may be somewhat more nuance to it in places, but it’s a common theme, this focus on the process instead of the “practices.” In fact, it’s so common that I believe the Software Craftsmanship Manifesto and the subsequent movement were mainly a rallying cry to say, “hey, remember that stuff in Agile about TDD and pair programming and whatnot…? Instead of figuring out how to dominate Scrum until you’re its master, let’s do that stuff.” So, the Agile movement is born and essentially says, “let’s adopt short feedback cycles and good development practices, and here are some frameworks for that,” and what eventually results is the next generation of software process fetishism (following on the heels of the “Rational Unified Process” and “CMM”).

That all played through my head pretty quickly, and what I really started to contemplate was “why?” Why did this happen? It’s not as if the original signatories of the manifesto were focused on process to the exclusion of practices. Not by a long shot. So how did we get to the point where the practices became a second-class citizen? And then the beginnings of a hypothesis occurred to me, and so exists this post.

The Agile Manifesto starts off with “We are uncovering better ways of developing software…” (emphasis mine, on “ways”). The frameworks for this type of development were and are referred to as “Agile Methodologies.” Subtly but very clearly, the thing we’re talking about here — Agile — is a process. Here were a bunch of guys who got together and said, “we’ve dumped a lot of the formalism and had good results, and here’s how,” and, perversely, the only key phrase most of the industry heard was “here’s how.” So when the early-adopter success became too impressive to ignore, the big boys with their big, IBM-ish processes brought in Agile Process People to say, “here’s a 600-page slide deck on exactly how to replace your formal, buttoned-up waterfall process with this new, somehow-eerily-similar, buttoned-up Agile process.” After all, companies that have historically tended to favor waterfall approaches tend to view software development as a mashup of building construction and assembly-line pipelining, so their failure could only possibly have been caused by a poorly engineered process. They needed the software equivalent of an industrial engineer (a process coach) to come in and show them where to place the various machines and the mindless drones in their employ responsible for the software. Clearly, the problem was doing design documents instead of writing story cards and putting Fibonacci numbers on them.

The Software Craftsmanship movement, I believe, stands as evidence to support what I’m saying here. It removes the emphasis altogether from process and places it, in very opinionated fashion, on the characteristics of the actual software: “not only working software, but also well-crafted software.” (emphasis theirs) I can’t speak exactly to what drove the creation of this document, but I suspect it was at least partially driven by the obsession with process instead of with actually writing software.

All of this leads me to wonder about something very idly. What if the Agile Manifesto, instead of talking about “uncovering better ways,” had spoken to the idea of “let’s create agile software”? In other words, forget about the process of doing this altogether, and simply focus on the properties of the software… namely, that it’s agile. What if it had defined agile software as software that can be deployed within, say, a day? It’s software that anyone on the team can change without fear. It’s software that’s so readable that new team members can understand it almost immediately. And so on.

I think there’s a deep appeal to this concept. After all, one of the most annoying things, to me and probably to a lot of you, is when someone asking for help tells you how to solve their problem instead of telling you what the problem is. And, really, software development methodologies/processes are perhaps the ultimate example of this. Do a requirements phase first, then a design phase, then an implementation phase, etc. Or, these days, write what the users want on story cards, have a grooming session with the product owner, convene the team for planning poker, etc. In both cases, what the person giving the direction is really saying is, “hey, I want you to produce software that caters to my needs,” but instead of saying that and specifying those needs, they’re telling you exactly how to operate. What if they just said, “it should be possible to issue changes to the software with the press of a button, it needs to be easy for new team members to come on board, I need to be able to have new features implemented without major architectural upheaval, etc.”? In other words, what if they said, “I need agile software, and you figure out how to do that”?

I can’t possibly criticize the message that came out of that meeting of the minds that gave birth to the Agile Manifesto. These were people who bucked entrenched industry trends, gained traction, and pulled it off with incredible success. They changed how we conceive of software development, and clearly for the better. And it’s entirely possible that any different phrasing would have made the message either too radical or too banal for widespread adoption. But I can’t help but wonder… what if the call had been for agile software rather than agile methods?