DaedTech

Stories about Software


The ROI for Security Training

Editorial note: I originally wrote this post for the ASPE blog.  You can check out the original here, at their site.  While you’re there, check out their catalog of online and in-person training courses.

When it comes to IT’s relationship with “the business,” the two tend to experience a healthy tension over budget.  At the risk of generalizing, IT tends to chase promising technologies, and the business tends to rein that in.  And so it should go, I think.

The IT industry moves quickly and demands constant innovation.  To enjoy success, IT pros must keep up, continually making sense of a shifting landscape.  They also operate under a constant directive to improve efficiency, which, almost by definition, requires availing themselves of tools.  They may write these tools or they might purchase them, but they need them either way.  In a sense, you can think of IT as more investment-thirsty than most facets of business.

The business’s leadership then assumes the responsibility of tempering this innovation push.  This isn’t to say that the business stifles innovation.  Rather, it aims to discern between flights of fancy and responsible investments in tech.  As a software developer at heart, I understand the impulse to throw time and money at a cool technology first and figure out whether that made sense second.  The business, on the other hand, considers the latter sensibility first, and rightfully so.

A Tale of IT and the Business

Perhaps a story will serve as a tangible example to drive home the point.  As I mentioned, my career background involved software development first.  But eventually, I worked my way into leadership positions of increasing authority, ending up in a CIO role, running an IT department.

One day while I was serving in that capacity, the guy in charge of IT support came to me and suggested we switch data centers.  I made a snap judgment that we should probably do as he suggested, but doing so meant changing the budget enough that it required a conversation with the CFO and other members of the leadership team.

Anticipating their questions and likely pushback, I asked the IT support guy to put together a business case for making the switch.  “Explain it in terms of costs and benefits such that a non-technical person could understand,” I advised.

This proved surprisingly difficult for him.  He put together documentation talking about the relative rates of power failures, circuit redundancy, and other comparative data center statistics.  His argument boiled down, in essence, to one data center having superior specs to the other, plus vague proclamations about best practices.

I asked him to rework this argument, suggesting he articulate the business case as a sort of mad lib: “If we don’t make this change, we have a ______% chance of experiencing problem _______, which would cost $_______.”

This proved much more fruitful. We made the case to the CFO and then made the switch.
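
To make that mad lib concrete, here’s a minimal sketch of the expected-cost arithmetic behind it.  Every number below is hypothetical, invented purely for illustration; none of them come from the actual data center comparison.

```python
# Hypothetical figures only -- not the numbers from the actual data center decision.
outage_probability_per_year = 0.10         # assumed 10% annual chance of a serious outage
cost_per_outage = 50_000                   # assumed loss per outage, in dollars
added_cost_of_better_data_center = 3_000   # assumed extra annual cost of switching, in dollars

# Expected value: probability of the bad thing times what it costs when it happens.
expected_annual_loss = outage_probability_per_year * cost_per_outage

print(f"Expected annual loss if we stay put: ${expected_annual_loss:,.0f}")
print(f"Added annual cost of switching:      ${added_cost_of_better_data_center:,.0f}")
if expected_annual_loss > added_cost_of_better_data_center:
    print("The switch pays for itself on expected value alone.")
else:
    print("The switch needs a justification beyond expected outage cost.")
```

Framed this way, a CFO can weigh the decision in the same terms as any other investment.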

Read More


How Much Unit Testing is Enough?

Editorial note: I originally wrote this post for the TechTown blog.  You can check out the original here, at their site.  While you’re there, have a look at their training and course offerings.

I work as an independent consultant, and I also happen to have created a book and some courses about unit testing.  So I field a lot of questions about the subject.  In fact, companies sometimes bring me in specifically to help their developers adopt the practice.  And those efforts usually fall into a predictable pattern.

Up until the moment I arrive, most of the developers have spent zero time unit testing.  Nevertheless, they have all found something to occupy them full time at the office.  So they must now choose: do less of something else or start working overtime for the sake of unit testing.

This choice obviously inspires little enthusiasm.  Most developers I meet in my travels are conscientious folks.  They’ve been giving it their all, so they interpret this new thing that takes them away from their duties as a hindrance.  And, in spite of having signed a contract to bring me in, management harbors the same worry.  While they buy into the long-term benefit, they fret over the short-term cost.

All of this leads to a very predictable question.  How much unit testing is enough?

Clients invariably ask me this, and usually they ask it almost immediately.  They want to understand how to spend the minimum amount of time required to realize benefits, but not a second longer.  This way they can return to the pressing matters they’ve delayed for the sake of learning this new skill.

Read More


What Problems Do Microservices Solve?

Editorial note: I originally wrote this post for the TechTown blog.  You can check out the original here, at their site.  While you’re there, have a look at the tech courses they offer.

Do you find that certain industry buzzwords set your teeth on edge?  If so, I assure you that you have company.  Buzzwords permeate every professional space, but it seems that tech really knows how to attract them.  The Internet of Things.  The cloud.  Big data.  DevOps.  Agile and lean.  And yes, microservices.

Because of our industry’s propensity for buzzwords, Gartner created something it calls the hype cycle.  It helps Gartner’s readers and clients evaluate how much attention to pay to emergent ideas, letting them separate vague fluff from ideas that have arrived to stay.  And let’s be honest: it’s also a funny, cathartic concept.

If you’ve tired of hearing the term microservices, I can understand that.  As of 2016, Gartner put it at the peak of inflated expectations.  This means that the term had achieved maximum saturation a year ago, and our collective fatigue will drive it into the trough of disillusionment.

And yet the concept retains value.  Once the hype fades and it makes its way toward the plateau of productivity, you’ll want to understand when, how, and why to use it.  So in a nod toward pragmatism, I’m going to talk about microservices in terms of the problems that they solve.

First, What Are Microservices?

Before going any further, let me offer a specific definition.  After all, relying on vague, hand-waving definitions is the main culprit in buzzword fatigue.  I certainly don’t want to contribute to that.

Industry thought leader Martin Fowler offers a detailed treatment of the subject:

In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.

Now, understand something.  The architectural trade-off here is nothing new.  In essence, it describes centralizing intelligence versus distributing it.  With a so-called monolith, clients have it easy: they call the monolith, which handles all details internally.  When you distribute intelligence, on the other hand, clients bear more of the burden of figuring out how to compose calls and interactions.

The relative uniqueness of the microservices movement comes from taking that tradeoff and layering atop it delivery mechanisms and the concept of atomic business value.  Organizations touting valuable microservices architectures tend to offer them up over HTTP and to provide functionality that stands neatly alone.  (I make the distinction of valuable architectures since I see a lot of shops simply call whatever they happen to deliver a microservices architecture.)

For example, a company may offer a customer onboarding microservice.  It can stand alone to create new customers.  But clients of this service, internal and external, may use it to compose larger, more feature-rich pieces of functionality.
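
As a rough illustration, here’s a minimal sketch of what such a stand-alone onboarding service might look like.  I’m assuming Python with Flask purely for brevity; the endpoints, fields, and in-memory storage are hypothetical, not drawn from any particular company’s service.

```python
# Hypothetical customer onboarding microservice (Flask chosen only for brevity).
# A real service would persist customers to a data store and add auth, validation, etc.
from flask import Flask, jsonify, request

app = Flask(__name__)
customers = {}   # in-memory store, for illustration only
next_id = 1

@app.route("/customers", methods=["POST"])
def onboard_customer():
    """Create a new customer -- the one atomic piece of business value this service owns."""
    global next_id
    payload = request.get_json(force=True)
    customer = {"id": next_id, "name": payload["name"], "email": payload["email"]}
    customers[next_id] = customer
    next_id += 1
    return jsonify(customer), 201

@app.route("/customers/<int:customer_id>", methods=["GET"])
def get_customer(customer_id):
    """Fetch a previously onboarded customer by id."""
    customer = customers.get(customer_id)
    return (jsonify(customer), 200) if customer else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(port=5000)
```

Because the service exposes its single capability over plain HTTP, other teams can compose it into larger workflows, such as sign-up or billing flows, without knowing anything about its internals.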

So, having defined the architectural style, let’s talk about the problems it solves.

Read More


Getting Started with Behavior-Driven Development

Editorial note: I originally wrote this post for the TechTown blog.  You can check it out here, at their site.  While you’re there, have a look around at the different training courses they offer.

You’ve probably heard of behavior-driven development (BDD).  However, if you’ve never practiced it, you may perceive it as one of many in a nebulous cloud of acronyms.  We have BDD, TDD, DDD, and ATDD.  All of these have a “D” standing for “driven” and another one standing for either “development” or “design.”  Apparently, we software developers really like things to drive us.

I won’t engage in a full “DD” taxonomy here, as this post concerns itself with behavior-driven development only.  But we will need to take a tour through one of these in order to understand BDD’s motivations and backstory.

Behavior-Driven Development Origins and Motivation

To understand BDD, we must first understand test-driven development (TDD).  Luckily, I wrote a recent primer on that.  To recap briefly, TDD calls for you to address every new thing you want your production code to do by first writing a failing test.  Doing this both verifies that the system currently lacks the needed functionality and gives you a way to later know that you’ve successfully implemented it.

With TDD, you deal in microtests.  These distinguish themselves by being quite specific and granular.  For instance, you might assert that you get a null reference exception when invoking a method with a null parameter.  You’ll pardon non-technical project stakeholders for a distinct lack of interest in these microtests.
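
To give a sense of scale, here’s what a microtest of that sort might look like in Python with pytest.  The function under test and its contract are hypothetical, invented just for this illustration.

```python
# Hypothetical production function and a TDD-style microtest for it (pytest assumed).
import pytest

def normalize_name(name):
    """Production code under test: trims whitespace and title-cases a customer name."""
    if name is None:
        raise ValueError("name must not be None")
    return name.strip().title()

def test_normalize_name_raises_on_none():
    # Granular, developer-facing assertion: a None argument should raise.
    with pytest.raises(ValueError):
        normalize_name(None)

def test_normalize_name_trims_and_title_cases():
    assert normalize_name("  jane doe ") == "Jane Doe"
```

Note that the exception here is Python’s ValueError rather than a null reference exception, since Python has no direct equivalent; the spirit of the microtest is the same.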

BDD evolved from asking the question, “Why don’t we do this for tests that the business might care about?”  It follows the same philosophical approach and logic.  But instead of worrying about null parameters and exceptions, these tests address the system’s behavior at the capability or feature level.

Behavior-driven development follows the TDD cadence: express a current system deficiency with a failing test.  But this time the failing test reads something like, “when I deposit money into my checking account, I can see the reflected balance.”  Work then proceeds on that feature until the test passes, at which point the team considers the card complete.
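
Expressed as code, such a feature-level test might look like the following sketch.  It uses plain Python and a hypothetical Account class; many teams would instead phrase the same scenario in Given/When/Then form with a BDD tool such as Cucumber.

```python
# A feature-level, BDD-flavored test. The Account class is hypothetical and
# deliberately tiny; the point is that the test reads like the business scenario.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

def test_depositing_money_is_reflected_in_the_balance():
    # Given a checking account with a $100 balance
    account = Account(balance=100)
    # When I deposit $50
    account.deposit(50)
    # Then I can see the reflected balance
    assert account.balance == 150
```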

Read More


How You’re Probably Misunderstanding TDD

Editorial note: I originally wrote this post for the TechTown blog.  You can check out the original here, at their site.  While you’re there, take a look around at their training courses.  

Let’s get something out of the way right up front.  You might have extensive experience with test-driven development (TDD).  You might even practice it routinely and wear the red-green-refactor cadence like a comfortable work glove.  For all I know, you could be a bona fide TDD expert.

If any of that describes you, then you probably don’t actually misunderstand TDD.  Not surprisingly, people who become adept with it and live and breathe it tend to get it.  But if that introductory paragraph doesn’t describe you, then you probably have some misconceptions.

I earn my living doing a combination of training and consulting.  This affords me the opportunity to visit a lot of shops and talk to a lot of people.  And during the course of these visits and talks, I’ve noticed an interesting phenomenon.  Ask people why they choose not to use TDD, and you rarely hear a frank, “I haven’t learned how.”

Instead, you tend to hear dismissals of the practice.  And these dismissals generally arise not from practiced familiarity, but from misunderstanding TDD.  While I can’t discount the possibility, I can say that I’ve never personally witnessed someone demonstrate an expert understanding of the practice while also dismissing its value.  Rather, they base the dismissal on misconception.

So if you’ve decided up-front that TDD isn’t for you, first be sure you’re not falling victim to one of these misunderstandings.

Read More