DaedTech

Stories about Software

Automated Code Review to Help with the Unknowns of Offshore Work

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, take a look at CodeIt.Right, which offers automated code review feedback.

I like variety.  In pursuit of this preference, I spend some time management consulting with enterprise clients and some time volunteering for “office hours” at a startup incubator.  Generally, this amounts to serving as “rent-a-CTO” for startup founders in half-hour blocks.  This provides me with the spice of life, I guess.

As disparate as these advice forums might seem, they often share a common theme.  Both in the impressive enterprise buildings and the startup incubator conference rooms, people ask me about offshoring application development.  To go overseas or not to go overseas?  That, quite frequently, is the question (posed to me).

I find this pretty difficult to answer absent additional information.  In any context, people asking this bake two core assumptions into their question.  What they really want to say would sound more like this.  “Will I suffer for the choice to sacrifice quality to save money?”

They assume first that cheaper offshore work means lower quality.  And then they assume that you can trade quality for cost as if adjusting the volume dial in your car.  If only life worked this simply.

What You Know When You Offshore

Before going further, let’s back up a bit.  I want to talk about what you actually know when you make the decision to pay overseas firms a lower rate to build software.  But first, let’s dispel these assumptions that nobody can really justify.

Understand something unequivocally.  You cannot simply exchange units of “quality” for currency.  If you ask me to build you a web app, and I tell you that I’ll do it for $30,000, you can’t simply say, “I’ll give you $15,000 to build one half as good.”  I mean, you could say that.  But you’d be saying something absurd, and you know it.  You can reasonably adjust cost by cutting scope, but not by assuming that half the money buys a product “half as good.”

Also, you need to understand that “cheap overseas labor” doesn’t necessarily mean lower quality.  Frequently it does, but not always.  And, not even frequently enough that you can just bank on it.

So what do you actually know when you contract with an inexpensive, overseas provider?  Not a lot, actually.  But you do know that your partner will work with you mainly remotely, across a great deal of distance, and with significant communication obstacles.  You will not collaborate as closely with them as you would with an employee or a local vendor.

Read More

What Problems Do Microservices Solve?

Editorial note: I originally wrote this post for the TechTown blog.  You can check out the original here, at their site.  While you’re there, have a look at the tech courses they offer.

Do you find that certain industry buzzwords set your teeth on edge?  If so, I assure you that you have company.  Buzzwords permeate every professional space, but it seems that tech really knows how to attract them.  Internet of things.  The cloud.  Big data.  DevOps.  Agile and lean.  And yes, microservices.

Because of our industry’s propensity for buzzwords, Gartner created something it calls the hype cycle.  It helps their readers and clients evaluate how much attention to pay to emergent ideas.  They can then separate vague fluff from ideas that have arrived to stay.  And let’s be honest — it’s also a funny, cathartic concept.

If you’ve tired of hearing the term microservices, I can understand that.  As of 2016, Gartner put it at the peak of inflated expectations.  This means that the term had achieved maximum saturation a year ago, and our collective fatigue will drive it into the trough of disillusionment.

And yet the concept retains value.  Once the hype fades and it makes its way toward the plateau of productivity, you’ll want to understand when, how, and why to use it.  So in a nod toward pragmatism, I’m going to talk about microservices in terms of the problems that they solve.

First, What Are Microservices?

Before going any further, let me offer a specific definition.  After all, relying on vague, hand-waving definitions is the main culprit in buzzword fatigue.  I certainly don’t want to contribute to that.

Industry thought leader Martin Fowler offers a detailed treatment of the subject.

In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.

Now, understand something.  The architectural trade-off here is nothing new.  In essence, it describes centralizing intelligence versus distributing it.  With a so-called monolith, clients have it easy.  They call the monolith, which handles all details internally.  When you distribute intelligence, on the other hand, clients bear more of the burden of figuring out how to compose calls and interactions.

The relative uniqueness of the microservices movement comes from taking that trade-off and layering atop it delivery mechanisms and the concept of atomic business value.  Organizations touting valuable microservices architectures tend to offer them up over HTTP and to provide functionality that stands neatly alone.  (I make the distinction of valuable architectures because I see a lot of shops simply call whatever they happen to deliver a microservices architecture.)

For example, a company may offer a customer onboarding microservice.  It can stand alone to create new customers.  But clients of this service, internal and external, may use it to compose larger, more feature-rich pieces of functionality.
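
To make that concrete, here’s a minimal sketch of what such a standalone service might look like.  I’m using Python’s standard library for brevity, and the endpoint, payload shape, and in-memory store are all hypothetical stand-ins; a real implementation would add persistence, validation, and the automated deployment machinery Fowler mentions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory store standing in for the service's own data storage.
CUSTOMERS = {}


class OnboardingHandler(BaseHTTPRequestHandler):
    """Exposes one business capability: creating a customer."""

    def do_POST(self):
        if self.path != "/customers":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        customer_id = str(len(CUSTOMERS) + 1)
        CUSTOMERS[customer_id] = {"id": customer_id, "name": payload["name"]}
        body = json.dumps(CUSTOMERS[customer_id]).encode()
        self.send_response(201)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), OnboardingHandler).serve_forever()
```

A client composing a larger workflow would simply POST to /customers and move on, never knowing or caring how the service stores its data.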

So, having defined the architectural style, let’s talk about the problems it solves.

Read More

Getting Started with Behavior-Driven Development

Editorial note: I originally wrote this post for the TechTown blog.  You can check it out here, at their site.  While you’re there, have a look around at the different training courses they offer.

You’ve probably heard of behavior-driven development (BDD).  However, if you’ve never practiced it, you may perceive it as one of many in a nebulous cloud of acronyms.  We have BDD, TDD, DDD, and ATDD.  All of these have a “D” standing for “driven” and another one standing for either “development” or “design.”  Apparently, we software developers really like things to drive us.

I won’t engage in a full “DD” taxonomy here, as this post concerns itself with behavior-driven development only.  But we will need to take a tour through one of these in order to understand BDD’s motivations and backstory.

Behavior-Driven Development Origins and Motivation

To understand BDD, we must first understand test-driven development (TDD).  Luckily, I wrote a recent primer on that.  To recap briefly, TDD calls for you to address every new thing you want your production code to do by first writing a failing test.  Doing this both verifies that the system currently lacks the needed functionality and gives you a way to later know that you’ve successfully implemented it.

With TDD, you deal in microtests.  These distinguish themselves by being quite specific and granular.  For instance, you might assert that you get a null reference exception when invoking a method with a null parameter.  You’ll pardon non-technical project stakeholders for a distinct lack of interest in these microtests.
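
To give you a feel for that granularity, here’s a sketch of such a microtest in Python with pytest.  The Account class and its null-rejection rule are hypothetical, standing in for any method that rejects a null argument.

```python
import pytest


class Account:
    """Hypothetical production class under test."""

    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount is None:
            raise TypeError("amount must not be None")
        self.balance += amount


def test_deposit_rejects_none_amount():
    # A microtest: one narrow assertion about one edge case.
    with pytest.raises(TypeError):
        Account().deposit(None)
```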

BDD evolved from asking the question, “Why don’t we do this for tests that the business might care about?”  It follows the same philosophical approach and logic.  But instead of worrying about null parameters and exceptions, these tests address the system’s behavior at the capability or feature level.

Behavior-driven development follows the TDD cadence: express a current system deficiency with a failing test.  But this time the failing test is, for example, “when I deposit money into my checking account, I can see the reflected balance.”  Work then proceeds on that feature until the test passes.  At that point, the team considers the card complete.
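
Here’s a sketch of that same cadence at the feature level, again in plain Python rather than a dedicated BDD tool like Cucumber or SpecFlow, and again with a hypothetical CheckingAccount standing in for the real system.

```python
class CheckingAccount:
    """Hypothetical system under test."""

    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount


def test_deposit_reflects_in_balance():
    # Given a checking account
    account = CheckingAccount()
    # When I deposit money into it
    account.deposit(100)
    # Then I can see the reflected balance
    assert account.balance == 100
```

Notice that the given/when/then comments read like the business’s own description of the feature.  Dedicated BDD tools simply formalize that structure so that non-technical stakeholders can read (and even write) the scenarios.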

Read More

Is There a Correct Way to Comment Your Code?

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look at all of the visualizations and metrics that you can get about your codebase.

Given that I both consult and do a number of public things (like blogging), I field a lot of questions.  As a result, the subject of code comments comes up from time to time.  I’ll offer my take on the correct way to comment code.  But remember that I am a consultant, so I always have a knee-jerk response to say that it depends.

Before we get to my take, though, let’s go watch programmers do what we love to do on subjects like this: argue angrily.  On the subject of comments, programmers seem to fall roughly into two camps.  These include the “clean code needs no comments” camp and the “professionalism means commenting” camp.  To wit:

Chances are, if you need to comment then something needs to be refactored. If that which needs to be refactored is not under your control then the comment is warranted.

And then, on the other side:

If you’re seriously questioning the value of writing comments, then I’d have to include you in the group of “junior programmers,” too.  Comments are absolutely crucial.

Things would probably go downhill from there fast, except that people curate Stack Overflow against overt squabbling.

Splitting the Difference on Commenting

Whenever two sides entrench on a matter, diplomats of the community seek to find common ground.  When it comes to code comments, this generally takes the form of adages about expressing the why in comments.  For example, consider this pithy rule of thumb from the Stack Overflow thread.

Good programmers comment their code.

Great programmers tell you why a particular implementation was chosen.

Master programmers tell you why other implementations were not chosen.

Jeff Atwood has addressed this subject a few different times.

When you’ve rewritten, refactored, and rearchitected your code a dozen times to make it easy for your fellow developers to read and understand — when you can’t possibly imagine any conceivable way your code could be changed to become more straightforward and obvious — then, and only then, should you feel compelled to add a comment explaining what your code does.

Junior developers rely on comments to tell the story when they should be relying on the code to tell the story.

And so, as with any middle-ground compromise, both entrenched sides have something to like (and hate).  Thus, you might say that whatever consensus exists among programmers, it leans toward a “correct way” that involves commenting about why.
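
To put a finer point on that consensus, consider a small, contrived Python example.  (The scenario and numbers are hypothetical.)  The first comment narrates what the code already says; the second records a “why” that no amount of refactoring could express in the code itself.

```python
import math

price, count = 19.995, 3

# A "what" comment merely restates the code:
# multiply the price by the count.
total = price * count

# A "why" comment records reasoning that no identifier can carry:
# the vendor's API rounds half-cents in its own favor, so we truncate
# to whole cents here to keep our invoices from ever exceeding theirs.
total = math.floor(price * count * 100) / 100
```

Rename variables all you like; only the second comment survives as genuinely necessary.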

Read More

How to Evaluate Software Quality from Source Code

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their Retrace offering that gives you everything you need to track down production issues.

I’ll understand if you read the title of this post and smirked.  I probably would have done so, opening it up only to see what profound wisdom awaited me.  Review the code, Captain Obvious.  

So yes, rest assured, I understand the easy assumption that one can ascertain a codebase’s quality by opening it up and starting to review it.  But what does this really tell you?  What comes out of this activity?  Simply put, your opinion of the codebase’s quality comes out of this activity.

I actually have a consulting practice doing custom static analysis on client codebases.  I help managers and executives make strategic decisions about their applications.  And I do this largely by treating their code as data and building numerically based cases.

Initially, the idea for this practice arose out of some observations I’d made a while back.  I watched consultants tasked with critical evaluations of codebases, and I found that they did exactly what I mentioned in the first paragraph.  They reviewed it, operating from the premise, I’m an expert, so I’ll record my anecdotal impressions and offer them as evidence.  That put a metaphorical pebble in my shoe and bothered me.  So I decided to chase a more empirical concept of code quality with my practice.

Don’t get me wrong.  The proprietary nature of source code and outcome data in the industry makes truly scientific experiments difficult.  But I can still automate the inquiries and use actual, relative data to compare properties.  So from that perspective, I’ll offer you more data-driven ways to evaluate software quality from source code.
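
By way of a simple illustration of treating code as data, here’s a sketch that uses Python’s standard library ast module to pull comparable numbers out of source files.  It assumes Python 3.8 or newer (for end_lineno), and a real analysis would layer many more metrics on top of something like this.

```python
import ast
import sys


def function_lengths(path):
    """Map each function in a Python source file to its length in lines."""
    with open(path) as handle:
        tree = ast.parse(handle.read())
    return {
        node.name: node.end_lineno - node.lineno + 1
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }


if __name__ == "__main__":
    # Compare files by their longest function: a crude but objective,
    # repeatable proxy for method-level complexity.
    for path in sys.argv[1:]:
        lengths = function_lengths(path)
        longest = max(lengths.values(), default=0)
        print(f"{path}: {len(lengths)} functions, longest {longest} lines")
```

Run it against two codebases and you get relative, repeatable numbers rather than anecdotal impressions.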

Read More