DaedTech

Stories about Software


Do Programmers Practice Computer Science?

I’ve gotten some questions about the idea of what we do being scientific or not, and that raises some interesting discussion points.  This is a “You Asked for It” post, and in this one, I’m just going to dive right into the reader question.  Don’t bury the lead, as it were.

Having attended many workshops on Agile from prominent players in the field, as well as working in teams attempting to use Agile, I can’t help but think that there is nothing Scientific about it at all. Most lectures and books pander to pop psychology, and the tenets of Agile themselves are not backed up by any serious studies as far as I’m aware.

In my opinion we have a Software Engineering Methodology Racket. It’s all anecdotes and common sense, with almost no attention on studying outcomes and collecting evidence.

So my question is: Do you think there is a lack of evidence-based software engineering and academic rigor in the industry? Do too many of us simply jump on the latest fads without really questioning the claims made by their creators?

I love this.  It’s refreshingly skeptical, and it captures a sentiment that I share and understand.  Also notice that this isn’t a person saying, “I’ve heard about this stuff from a distance, and it’s wrong,” but rather, “I’ve lived this stuff and I’m wondering if the emperor lacks clothes.”  I try to avoid criticizing things unless I’ve lived them myself, and if my impulse is to criticize something, I try it first.


I’ll offer two short answers to the questions first.  Yes and yes.  There is certainly a lack of evidence-based methodology around what we do, and I attribute this largely to the fact that it’s really, really hard to gather and interpret the evidence.  And, in the absence of science, yes, we collectively tend to turn to tribal ritual.  It’s a God of the Gaps scenario; we haven’t yet figured out why it rains, so we dance and assume that it pleases a rain God.


What Is a Best Practice in Software Development?

A while ago, I released a course on Pluralsight entitled, “Making the Business Case for Best Practices.”  There was an element of tongue-in-cheek to the title, which might not necessarily have been the best idea in a medium where my profitability is tied to maximizing the attractiveness of the title.  But, life is more fun if you’re smiling.

Anyway, the reason it was a bit tongue in cheek is that I find the term “best practice” to be spurious in many contexts.  At best, it’s vague, subjective, and highly context dependent.  The aim of the course was, essentially, to say, “hey, if you think that your team should be adopting practice X, you’d better figure out how to make a dollars and cents case for it to management — ‘best’ practices are the ones that are profitable.”  So, I thought I’d offer up a transcript from the introductory module of the course, in which I go into more detail about this term.  The first module, in fact, is called “What is a ‘Best Practice’ Anyway?”

Best Practice: The Ideal, Real and Cynical

The first definition that I’ll offer for “best practice” is what one might describe as the “official” version, but I’ll refer to it as the “ideal version.”  Wikipedia defines it as a “method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark.”  In other words, a “best practice” is a practice that has been somehow empirically proven to be the best.  As an example, if there were three possible ways to prepare chicken: serve it raw, serve it rare, and serve it fully cooked, fully cooked would emerge as a best practice as measured by the number of incidents of death and illness.  The reason that I call this definition “ideal” is that it implies that there is clearly a single best way to do something, and real life is rarely that neat.  Take the chicken example.  Cooked is better than undercooked, but there is no shortage of ways to fully cook a chicken – you can grill it, broil it, bake it, fry it, etc.  Is one of these somehow empirically “best” or does it become a matter of preference and opinion?




Avoiding the Perfect Design

One of the peculiar ironies that I’ve discovered by watching the way a lot of different software shops work is that the most intense moments of exuberance about software seem to occur in places where software development happens at glacial speeds. If you walk into an agile shop or a startup or some kind of dink-and-dunk place that bangs out little CRUD apps, you’ll hear things like, “hey, a user said she thought it’d be cool if she could search her order history by purchase type, so let’s throw that in and see how it goes.” If it goes insanely well, there may be celebrations and congratulations and even bonuses changing hands which, to be sure, makes people happy. But their happiness is Mercury next to the blazing Sun of an ivory tower architect describing what a system SHALL do.

“There will be an enterprise service bus. That almost goes without saying. The presentation tier and the business tier will be entirely independent of one another, and literally any sort of pluggable module that you dream up as a client can communicate with any sort of rules engine embedded within the business tier. Neither one will EVER know about the other’s existence. The presentation layer collaborators are like Schrödinger and the decision engines are like the cat!

And the clients. Oh yes, there will be clients. The main presentation tier client is a mobile staging environment that will be consumed by Android, iOS, Windows Phone, BlackBerry, and even some modified Motorola walkie-talkies. On top of the mobile staging environment will be a service adapter that makes it so that clients don’t need to worry about whether they’re using SOAP or REST or whatever comes next. All of those implementations will hide behind the interface. And that’s just the mobile space. There are more layers and subtleties in the browser and desktop spaces, since both of those types of clients may be SPAs, other thick clients, thin clients, or just leaf nodes.

Wait, wait, wait, I’m not finished. I haven’t even told you about the persistence factories yet and my method for getting around the CAP theorem. The performance will be sublime. We’re talking picoseconds. We’re going to be using dynamically generated linear programming algorithms to load balance the vertical requests among the tiers, and we’re going to take a page out of the quantum computing book to introduce a new kind of three state boolean… oh, sorry, you had a question?”

“Uh, yeah. Why? I mean, who is going to use this thing and what do they want with it?”

“Everyone. For everything. Forever.”

You back out slowly as the gleam in his eye turns slightly worrisome and he starts talking about the five year plan that involves this thing, let’s call it HAL, achieving sentience, bringing humankind to the cusp of the singularity, and uploading the consciousnesses of all network and enterprise architects.

Singularity

Like I said, the Sun to your Mercury. Has your puny startup ever passed the Turing Test? Well, his system has… as spec’ed out in a document management system with 8,430 pages of design documents and a Visio diagram that’s rumored to have similar effects to the Ark of the Covenant. And that, my friends, is why I think that a failing ATDD scenario should be the absolute first thing anyone who says, “I want to get into programming” learns to do.

Now to justify that whiplash-inducing segue. I wrote a book about unit testing in which I counseled complete initiates to automated testing to forgo TDD and settle for understanding the mechanics of automated tests and test runners first before making the leap. I stand by that advice, but I do so because I think that there is a subtle flaw to the way that most people currently get started down the programming path.

I was watching a Pluralsight course about NUnit to brush up on its latest and greatest assertion semantics, and the examples were really well done. In particular, there was a series of assertions oriented around a rudimentary concept of a role-playing game with enumerations of weapons, randomization of events, and hit points. This theme exercised concepts like ranges, variance, collection cardinality, etc., and it did so in a way that lent itself to an easy mental model. The approach was very much what mine would have been as well (I wouldn’t have come at this with TDD because there’d have been a lot of ‘downtime’ writing the production code as opposed to just showing the assert methods).
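To give a feel for the kinds of assertions being described (using a toy RPG model that I’m inventing here, not the course’s actual examples), something like the following NUnit tests covers ranges, tolerances, and collection cardinality:

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

// Toy RPG domain, invented purely for illustration.
public enum Weapon { Sword, Bow, Staff }

public class Hero
{
    private static readonly Random Rng = new Random();
    public int HitPoints { get; set; } = 100;
    public List<Weapon> Inventory { get; } = new List<Weapon> { Weapon.Sword, Weapon.Bow };
    public int RollDamage() => Rng.Next(1, 21); // like a d20: 1 through 20
}

[TestFixture]
public class HeroTests
{
    [Test]
    public void Damage_roll_falls_within_expected_range()
    {
        // Range assertion for randomized output.
        Assert.That(new Hero().RollDamage(), Is.InRange(1, 20));
    }

    [Test]
    public void Hit_points_start_near_the_baseline()
    {
        // Tolerance ("variance") assertion: within 10 of the nominal 100.
        Assert.That(new Hero().HitPoints, Is.EqualTo(100).Within(10));
    }

    [Test]
    public void Inventory_has_expected_cardinality_and_contents()
    {
        var hero = new Hero();
        // Collection cardinality and membership assertions.
        Assert.That(hero.Inventory, Has.Count.EqualTo(2));
        Assert.That(hero.Inventory, Has.Member(Weapon.Bow));
    }
}
```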

Nevertheless, it’s been a while since I’ve watched someone write tests against a pre-baked system when they weren’t characterization tests in a legacy rescue, and the experience was sort of jarring. I couldn’t help but think, “I wouldn’t want to write these tests if I were in his position — why bother when the code is already done?” Weird as it sounds from a big advocate of testing, writing tests after you’ve completed your implementation feels like a serious case of “going through the motions” in the same way that developers fill out random “SDLC” artifacts for no other purpose than to get PMPs to leave them alone.

And that’s where the connection to the singularity architect comes in. One of the really nice, but subtle perks of the TDD (especially ATDD) approach is that it forces you to define exit criteria before you start doing things. For instance, “I know I’ll be done with this development effort when my user can search her order history by purchase type.” Awesome — you’re well on your way because you’ve (presumably) agreed with stakeholders ahead of time when you can stop coding and declare victory. The next thing is to prove it, and you can approach this in the same way that you might approach fixing a leaking pipe under your sink. Turn the water on, observe the leak, turn the water off, fix the leak, turn the water back on, observe that there is no leak. Done.

In the case of the search, you write a client call to your web service that supplies a “purchase type” parameter and you say that you’re done when you get a known result set back, instead of the current error message: “I do not understand this ‘purchase type’ nonsense you’ve sent — 400 for you!” Then you scurry off to code, and you just keep going until that test that you’ve written turns green and all of the other ones stay green. There. Done, and you can prove it. Ship it.
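As a rough sketch of what that might look like in code (the client, the names, and the endpoint semantics here are all hypothetical, not taken from the original scenario), an acceptance-style test can define the exit criteria up front and stay red until the service actually understands the parameter:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical domain type and service client, invented for illustration.
public class Order
{
    public string PurchaseType { get; set; }
}

public class OrderHistoryClient
{
    public IList<Order> SearchByPurchaseType(string customerId, string purchaseType)
    {
        // Placeholder: the real implementation would call the web service.
        // Until the service understands "purchase type" (today it returns a 400),
        // this throws, so the acceptance test below starts out red.
        throw new System.NotImplementedException("Service rejects 'purchase type'");
    }
}

[TestFixture]
public class OrderHistorySearchAcceptanceTests
{
    [Test]
    public void Searching_by_purchase_type_returns_only_matching_orders()
    {
        var client = new OrderHistoryClient();

        IList<Order> results = client.SearchByPurchaseType("customer-123", "electronics");

        // Exit criteria: a known, non-empty result set where every order matches.
        Assert.That(results, Is.Not.Empty);
        Assert.That(results, Has.All.Property("PurchaseType").EqualTo("electronics"));
    }
}
```

The particular test framework doesn’t matter much; the point is that the red test is an executable definition of “done” that exists before any production code does.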

Our poor architect never knows when he’s done (and we know he’ll never be done). This Sisyphean struggle started with hobby programming or a CS degree or something. It started with unbounded goals and the kinds of open-ended tasks that allow hobbyists to grow and students to excel. Alright, you’ve got the A, but try to play with it. See if you can make it faster. Try adding features to it. Extra credit! Sky’s the limit! At some point, a cultural norm emerges that says it’s more about the journey than the destination. And then you don’t rise through the ranks by automating for the sake of solving people’s problems but rather by building ever-more impressive juggernauts, leveraging the latest frameworks, instrumented with the most analytics, and optimized to run in O(Planck Time).

I really would like to see initiates to the industry learn to set achievable (but slightly uncomfortable) goals with a notion of value and then reach them. Set a beneficial goal, reach it, rinse, repeat. The goal could be “I want to learn Ruby and I’ll consider a utility that sorts picture files to be a good first step.” You’re adding to your skill set as a developer and you have an exit criterion. It could be something for a personal project, a pro-bono client, or for pay. But tie it back to an outcome and assess whether that outcome is worthwhile. This approach will prevent you from shaving microseconds off of an app that runs overnight on a headless server and it will prevent you from introducing random complexity and dependency to an app because you wanted to learn SnazzyButPointless.js. True, the approach will stop you from ever delighting in design documents that promise the birth of true artificial intelligence, but it will also prevent you from experiencing the dejection when you realize it ain’t ever gonna happen.


You Want an Estimate? Give Me Odds.

I was asked recently in a comment what I thought about the “No Estimates Movement.” In the interests of full disclosure, what I thought was, “I kinda remember that hashtag.” Well, almost. Coincidentally, when the comment was written to my blog, I had just recently seen a friend reading this post and had read part of it myself. That was actually the point at which I thought, “I remember that hashtag.”

That sounds kind of flippant, but that’s really all that I remember about it. I think it was something like early 2014 or late 2013 when I saw that term bandied about in my feed, and I went and read a bit about it, but it didn’t really stick with me. I’ve now gone back and read a bit about it, and I think that’s because it’s not really a marketing teaser of a term, but quite literally what it advertises. “Hey, let’s not make estimates.” My thought at the time was probably just, “yeah, that sounds good, and I already try to minimize the amount of BS I spew forth, so this isn’t really a big deal for me.” Reading back some time later, I’m looking for deeper meaning and not really finding it.

Oh, there are certainly some different and interesting opinions floating around, but it really seems to be more bike-sheddy squabbling than anything else. It’s arguments like, “I’m not saying you shouldn’t size cards in your backlog — just that you shouldn’t estimate your sprint velocity” or “You don’t need to estimate if all of your story cards are broken into small enough chunks to be ones.” Those seem sufficiently tactical that their wisdom probably doesn’t extend too far beyond a single team before flirting with unknowable speculation that’d be better verified with experiments than taken as wisdom.

The broader question, “should I provide speculative estimates about software completion timelines,” seems like one that should be answered most honestly with “not unless you’re giving me pretty good odds.” That probably seems like an odd answer, so let me elaborate. I’m a pretty knowledgeable football fan and each year I watch preseason games and form opinions about what will unfold in the regular season. I play fantasy football, and tend to do pretty well at that, actually placing in the money more often than not. That, sort of by definition, makes me better than average (at least for the leagues that I’m in). And yet, I make really terrible predictions about what’s going to happen during the season.

ArmchairQuarterback

At the beginning of this season, for instance, I predicted that the Bears were going to win their division (may have been something of a homer pick, but there it is). The Bears. The 5-11 Bears, who were outscored by the Packers something like 84-3 in the first half of a game and who have proceeded to fire everyone in their organization. I’m a knowledgeable football fan, and I predicted that the Bears would be playing in January. I predicted this, but I didn’t bet on it. And, I wouldn’t have bet even money on it. If you’d have said to me, “predict this year’s NFC North Division winner,” I would have asked what odds you were giving on the Bears, and might have thrown down a $25 bet if you were giving 4:1 odds. I would have said, when asked to place that bet, “not unless you’re giving me pretty good odds.”

Like football, software is a field in which I also consider myself pretty knowledgeable. And, like football, if you ask me to bet on some specific outcome six months from now, you’d better offer me pretty good odds to get me to play a sucker’s game like that. It’d be fun to say that to some PMP asking you to estimate how long it would take you to make “our next gen mobile app.” “So, ballpark, what are we talking? Three months? Five?” Just look at him deadpan and say, “I’ll bite on 5 months if you give me 7:2 odds.” When he asks you what on earth you mean, just patiently explain that your estimate is 5 months, but if you actually manage to hit that number, he has to pay you 3.5 times the price you originally agreed on (or 3.5 times your salary if you’re at a product/service company, or maybe bump your equity by 3.5 times if it’s a startup).

See, here’s the thing. That’s how Vegas handles SWAGs, and Vegas does a pretty good job of profiting from the predictions racket. They don’t say, “hey, why don’t you tell us who’s going to win the Super Bowl, Erik, and we’ll just adjust our entire plan accordingly.”

So, “no estimates?” Yeah, ideally. But the thing is, people are going to ask for them anyway, and it’s not always practical to engage in a stoic refusal. You could politely refuse and describe the Cone of Uncertainty. Or you could point out that measuring sprint velocity with Scrum and extrapolating sized stories in the backlog is more of an empirically based approach. But those things and others like them tend not to hit home when you’re confronted with wheedling stakeholders looking to justify budgets or plans for trade shows. So, maybe when they ask you for this kind of estimate, tell them that you’ll give them their estimate when they tell you who is going to win next year’s Super Bowl so that you can bet your life savings on their guarantee. When they blink at you dubiously, smile, and say, “exactly.”
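As a footnote to the velocity point above, the extrapolation involved is nothing fancier than arithmetic. A toy sketch, with invented numbers:

```csharp
using System;
using System.Linq;

// Toy velocity extrapolation with invented numbers: average the story points
// completed in recent sprints, then divide the remaining (sized) backlog by
// that average to project how many sprints are left. It's an empirical trend
// line based on what the team has actually delivered, not a promise.
int[] recentSprintPoints = { 28, 32, 30 };
double averageVelocity = recentSprintPoints.Average();             // 30 points per sprint
int remainingBacklogPoints = 240;                                   // sized stories still in the backlog
double projectedSprints = remainingBacklogPoints / averageVelocity;
Console.WriteLine($"Roughly {projectedSprints:F1} sprints to go");  // ~8.0
```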


Is This Problem Worth Solving?

I’ve done a little bit of work lately on a utility that reads from log files that are generated over the course of a month. Probably about 98% of the time, users would only be interested in this month’s file and last month’s file. How should I handle this?

At different points in my career, I’d have answered this question differently. Early on, my answer would have been to ignore the sentence about “98% of the time” and just implement a solution wherein the user had to pick which file or files he wanted. For a lot of my career, I would have written the utility to read this month and last month’s files by default and to take an optional command line parameter to specify a different file to read (“sensible defaults”). These days, my inclination is more toward just writing it to read this month’s and last month’s file, shipping it, and seeing if anyone complains — seeing if that 2% is really 2% or if, maybe, it’s actually 0%.
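For the sake of concreteness, a bare-bones version of that “sensible defaults” flavor might look something like this (the file naming convention and class name are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of "default to this month and last month, allow an override."
// The log file naming convention here is invented for illustration.
public static class LogFileResolver
{
    public static IEnumerable<string> FilesToRead(string[] args, DateTime now)
    {
        // Rare case: the user explicitly names the file(s) to read.
        if (args.Length > 0)
            return args;

        // The 98% case: this month's and last month's files, no questions asked.
        return new[]
        {
            $"log-{now:yyyy-MM}.txt",
            $"log-{now.AddMonths(-1):yyyy-MM}.txt"
        };
    }
}
```

And the “ship it and see if anyone complains” version is simpler still: drop the explicit-arguments branch entirely and, on the rare occasion someone needs an old file, rename it to match the convention and rerun.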

Part of this evolution in me was the evolution of the software industry itself. When I started out doing a lot of C and C++ in the Linux/Unix worlds, gigantic documents and users’ manuals were the norm. If you were a programmer, it was your responsibility to write massive, sprawling APIs and if you used someone’s code, it was your responsibility to use those massive APIs and read those gigantic manuals. So, who cares what percentage of time a user might do what? You just write everything that could ever possibly happen into your code and then tell everyone to go read the manual.

Then things started to get a little more “agile” and YAGNI started to prevail. On top of that, the user became more of a first-class citizen, so “let’s default to the most common path” became the attractive thing to do and “RTFM” went the way of the dodo. The iconic example would be a mid-2000s Windows or web application that would give you a sensible experience and then offer a dizzying array of features in some “Advanced Settings” window.

The next step was born with the release of the iPhone when it started to become acceptable and then normal to write dead simple things that didn’t purport to do everything. Apple’s lead here was, “this is the app, this is what it does, take it or leave it.” The “advanced settings” window was replaced by “we’ll tell you what settings you want,” which requires no window.

This shifting environment over the last 15 years informed my perspective but wasn’t entirely responsible for it. I’d say two realizations were responsible for the shift. First, I realized, “a ‘business spec’ isn’t nearly as important as understanding your users, their goals, and how they will use what you give them.” It became important to understand that one particular use case was so dominant that making it a pleasant experience at the cost of making another experience less pleasant was clearly worthwhile. The second realization came years later, when I learned that your users do, frequently, want you to tell them how to use your stuff.

Some of this opinion arose from spending good bits of time in a consulting capacity, where pointing out problems and offering a handful of solutions typically results in, “well, you’re the expert — which one should I do?” You hear that enough and you start saying instead, “here’s a problem I’ve noticed and here’s how I’ve had success in the past fixing this problem.” It makes sense when you think about it. Imagine having someone out to fix your HVAC system and he offers you 4 possible solutions. You might ask a bit about cost/benefit and pros/cons, but what you’ll probably wind up saying is, “so…. what would you do and/or what should I do?”


There’s an elegance to coding up what a user will do 98% of the time and just shipping that, crude as it sounds. As I mentioned, it will confirm whether your “98%” estimate was actually accurate. But, more importantly, you’ll get a solution for 98% of the problem to market pretty quickly, letting the customer realize the overwhelming majority of ROI right away. On top of that, you’ll also not spend a bunch of money chasing the 2% case up front before knowing for sure what the impact of not having it will be. And finally, you add the possibility for a money-saving work-around. If the utility always reads this month’s and last month’s files, and we need to read one from a year ago… rather than writing a bunch of code for that… why not just rename the file from a year ago and run it? That’ll cost your client 10 seconds per occurrence (and these occurrences are rare, remember) rather than hundreds or thousands of dollars in billable work as you handle all sorts of edge cases around date/time validation, command line args, etc.

I always wince a little when I offer anecdotes of the form, “when I was younger, I had position X but I’ve come to have position Y” because it introduces a subtle fallacy of “when you get wiser like me, you’ll see that you’re wrong and I’m right.” But in this case, the point wasn’t to discredit my old ways of thinking, per se, but rather to explain how past experiences have guided the change in my approach. I’ve actually stumbled a fair bit into this, rather than arrived here with a lot of careful deliberation. You see, there were a lot of times that I completely whiffed on considering the 2% case at all and just shipped, realizing to my horror only after the fact that I’d forgotten something. Bracing for getting reamed as soon as someone figured out that I’d overlooked the 2% case, I battened down the hatches and prepared to face the fire only to face… absolutely nothing. No one noticed or cared, and the time spent on the 2% would have been a total waste.

So next time you find yourself thinking about how to handle some bizarre edge case or unlikely scenario, pull back and ask yourself whether handling that is worth delaying the normal cases or not. Ask yourself if your user can live with a gap in the edge case or work around it. And really, ask yourself what the ROI for implementation looks like. This last exercise, more so than learning any particular framework, language or library, is what distinguishes a problem solver from a code slinger.