I’m thinking this is going to wind up being a shorter post, and so might some others going forward. Fact of the matter is that I’m burning the candle at both ends a bit lately. I’m under contract for my next Pluralsight course, and I’m also mired in the bureaucratic morass of marching toward a real estate closing, so between that and 55-60-hour work weeks, I’m a little pressed for time. But, that said, we’ll see how it goes.
I was sitting in a meeting today where participants were discussing a recent initiative. (I was, at this point, a relatively passive observer with no horse in the race.) The gist of it was that the initiative was something that would be mildly pleasing to the user base but had some cost associated with it. It was accepted that this would be a bit of a financial loss on the balance sheet but that the ROI would be an eventual bolstering of the brand via goodwill and reputation points, so to speak. The leader of the meeting put a halt to this talk of generalities saying something to the effect of, “okay, so how much is it costing us, how many users are aware of it, and how positive was their response?” The people who had been discussing this got sort of deer-in-headlights. They seemed nonplussed that they were being asked to quantify something like this. I loved the questions, personally. I thought, “yeah, get those metrics, and then we can at least begin modeling scenarios for evaluating if this is worth doing.”
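To make the "model scenarios" idea concrete, here's a minimal back-of-envelope sketch of the leader's three questions. All of the numbers and the function name are invented placeholders for illustration, not figures from the actual meeting:

```python
# Hypothetical break-even model for a goodwill-driven initiative.
# Inputs mirror the three questions: cost, awareness, positive response.

def breakeven_goodwill_value(cost, users_aware, positive_rate):
    """Dollar value each positively-impressed user must be 'worth'
    in goodwill for the initiative to break even."""
    impressed_users = users_aware * positive_rate
    if impressed_users == 0:
        return float("inf")  # nobody impressed: no price makes it pay off
    return cost / impressed_users

# Example placeholders: a $10,000 initiative, 2,000 users aware of it,
# 30% of them responding positively.
value_needed = breakeven_goodwill_value(10_000, 2_000, 0.30)
print(f"Each impressed user must be worth ${value_needed:.2f} in goodwill")
```

Even a toy model like this turns "it builds goodwill" into a falsifiable claim: is one impressed user plausibly worth that dollar figure to the brand?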
After that meeting, I retreated to my desk and worked into the evening on the kind of tidying up that I can only seem to get done after business hours, when I close the office door. One of the miscellaneous tasks was to come up with material to present tomorrow at the inaugural team lunch and learn, an initiative that I recently started. I don’t have a talk in my back pocket at the moment, so I weighed the options and settled on showing a video of a Michael Feathers talk about the synergy between testability and good design. I’ve recently introduced unit testing to the team, and the talk offers a good perspective on unit testing beyond some of the more standard rationales. In the early part of the talk, Michael says (paraphrasing from memory) that he often annoys developers by telling them that if their design isn’t testable, it’s not a good design. It’s quite a concept: if it can’t be easily verified, it isn’t very good.
Earlier this evening, I was relaxing and working my way through The Clean Coder by Uncle Bob Martin. In it, Bob cites something Kent Beck told him about arguing: if an argument goes on for more than five minutes or so, it’s really a religious war. Arguments based on facts, data, and empirical evidence are settled pretty quickly. Rather than continuing to argue and getting angry, both sides should regroup, do some research, and revisit the issue later. You can’t make a persuasive case for something without raw, hard data.
This struck me as interesting. Within the span of half a day, I was exposed to three separate events with a very fundamental and powerful underlying theme: “prove it.” If it can’t be proved, what good is it? As the characters in the Game of Thrones books seem to like to say, “words are wind.” Don’t tell me that users like it so it’s worth doing; show me evidence of that. Don’t tell me that your design is great; show me that it works for the required inputs. Don’t argue with bluster; show me that you’re right in such excruciating detail that there is simply no argument.
I don’t think anybody would find what I’m saying here controversial; proving and demonstrating are obviously better than simply claiming things without support. But I think the challenge for all of us is to prove (or at least attempt to prove) things that we wouldn’t necessarily think of proving. How do you prove that user goodwill justifies spending X dollars? I don’t know, but it’s an interesting challenge. And beyond being interesting, it’s the kind of question that, if you can answer it, will set you apart from the people you work with and put you in high demand. Do everything in your professional life operating under the assumption that someone will audit your thought process and demand, on the spot, to know why you decided to do what you did. Why did you spend a day changing all of the methods in the code base from Pascal case to camel case? Can you justify what that cost your group in terms of labor? How would you make the case for that?
I’m not suggesting that we should all operate constantly as if there were some insane, whip-cracking micromanager monitoring our every movement. What I am saying is that you can stand out by constantly and easily demonstrating the value and sense in the decisions that you make and the actions that you decide to take. Be that guy or gal. It’s less common than you think.
I’m surprised this post has gone this long without comment. It seems like a no-brainer that doing your homework gives you an advantage, but it’s rare enough that you’d think it was a secret. Right or wrong matters much less than the answer to “why?”.
Definitely. I think that “why” essentially makes “right vs wrong” a typo instead of a mystery.
You might be interested in the work of Tudor Girba on Humane Assessment. He has done a lot of work on making it easy and fast to build the tools that allow you to do the measurements for this.
I googled this and watched an introductory video that seemed pretty general. I’m going to poke around for more specifics about his Moose platform when I’m not ready to fall asleep in my office chair. Thanks for the tidbit to look at.