DaedTech

Stories about Software


Why Production Monitoring Can Come Too Late

Editorial Note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look around at how their offering can help you hunt down issues from development to production.

I’ve spent a number of years, now, writing software.  At the risk of dating myself, I worked on software in the early 2000s.  Back then, you couldn’t take quite as much for granted.  For example, while organizations considered source control a good practice, forgoing it wouldn’t have constituted lunacy the way it does today.

As a result of the difference in standards, my life shipping software looked different back then.  Only avant-garde organizations adopted agile methodologies, so software releases happened on the order of months or years.  We thus reasoned about the life of software in discrete phases.  But I’m not talking about the regimented phases of the so-called “waterfall” methodology.  Rather, I generalize them into these phases: build, prep, run.

During build, you mainly solved the problem of cranking through the requirements as quickly as possible.  Next up, during prep, you took this gigantic sprawl of code that only worked on dev machines, and started to package it into some kind of deployable product.  This might have meant early web servers or even CDs at the time.  And, finally, came run.  During the run phase, you’d maintain vigilance, waiting for customer issues to come streaming in.

Bear in mind that we would, of course, work to minimize bugs and issues during all of these phases.  But at that time, for most organizations, having issues during the “run phase” constituted a good problem to have.  After all, it meant you had reached the run phase.  A shocking amount of software never made it that far.

Monitoring and Software Maturity

We’ve come a long way.  As I alluded to earlier, you’d get some pretty incredulous looks these days for not using source control.  And you would likewise receive incredulous looks for a release cycle spanning years, divided into completely disjoint phases.  Relatively few shops view their applications’ production behavior as a hypothetical problem for a far-off date anymore.

We’ve arrived at this point via some gradual, hard-won victories over the years.  These have addressed the phases I mentioned and merged them together.  Organizations have increasingly tightened the feedback loop with the adoption of agile methodologies.  Alongside that, vastly improved build and deployment tooling has transformed “the build” from “that thing we do for weeks at the end” to “that thing that happens with every commit.”  And, of course, we’ve gotten much, much better at supporting software in production.

Back in the days of shrink-wrap software and shipping CDs, users reported problems via phone call.  For a solution, they developed workarounds and waited for a patch CD in the mail.  These days, always-connected devices allow us to ship patches almost immediately.  And we have software that gets out in front of production issues, often finding them even before users do.

Specifically, we now have sophisticated production monitoring software.  In some cases, this means simply watching for outages and supplying alerts.  But we also have sophisticated application performance monitoring (APM) capabilities.  As I said, we’ve come a long way.
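
To make that simplest case concrete, here is a minimal sketch of what “watching for outages and supplying alerts” boils down to.  This is illustrative only, not any particular monitoring product; the health URL and the alert hook are hypothetical stand-ins.

import time
import urllib.request

# Hypothetical endpoint and alerting hook -- stand-ins, not a real product's API.
HEALTH_URL = "https://example.com/health"
CHECK_INTERVAL_SECONDS = 60

def send_alert(message):
    # A real monitor would page someone or post to a chat channel here.
    print("ALERT: " + message)

def check_once():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            if response.status != 200:
                send_alert(HEALTH_URL + " returned HTTP " + str(response.status))
    except Exception as error:
        send_alert(HEALTH_URL + " unreachable: " + str(error))

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(CHECK_INTERVAL_SECONDS)

A real APM tool goes far beyond this, of course, instrumenting requests and surfacing slow spots, but the loop above captures the basic “watch and notify” idea.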



Integrating APM into Your Testing Strategy

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their tooling to help you with your APM needs.

Does your team have a testing strategy?  In 2017, I have a hard time imagining that you wouldn’t at least have some kind of strategy, however rudimentary.  Unlike a couple of decades ago, you hear less and less about people just changing code on the production server and hoping for the best.

At the very least, you probably have a QA group, or at least someone who serves in that role prior to shipping your software.  You write the code, do something to test it, and then ship it once the testers bless it (or at least notate “known issues”).

From there, things probably run the gamut among those of you reading.  Some of you probably do what I’ve described and little more.  Some of you probably have multiple pre-production environments to which a continuous integration setup automatically deploys builds.  Of course, it only deploys those builds assuming all automated unit, integration, and smoke tests pass and assuming that your static analysis doesn’t flag any show-stopper issues.  Once deployed, a team of highly skilled testers performs exploratory testing.  Or, maybe, you do something somewhere in between.
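
For the sake of concreteness, the gate in that kind of pipeline reduces to a simple decision.  The sketch below is hypothetical, not any particular CI product’s API; the shapes of the test results and analysis findings are made up for illustration.

# Hypothetical deploy gate -- the data shapes are illustrative, not a real CI API.

def should_deploy(test_results, static_analysis_findings):
    """Deploy only when every automated test passed and no blocking findings exist."""
    all_tests_passed = all(result["passed"] for result in test_results)
    no_blockers = not any(finding["severity"] == "blocker" for finding in static_analysis_findings)
    return all_tests_passed and no_blockers

if __name__ == "__main__":
    tests = [{"name": "unit", "passed": True},
             {"name": "integration", "passed": True},
             {"name": "smoke", "passed": True}]
    findings = [{"rule": "unused-variable", "severity": "minor"}]
    print("Deploy to pre-production:", should_deploy(tests, findings))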

But, whatever you do, you can always do more.  In fact, I encourage you always to look for new ways to test.  And today I’d like to talk about an idea for just such a thing.  Specifically, I think you can leverage application performance management (APM) software to help your testing efforts.  I say this in spite of the fact that most shops have traditionally taken advantage of these tools only in production.



How to Evaluate Software Quality from the Outside In

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at Prefix and Retrace, for all of your prod support needs.

In a sense, application code serves as the great organizational equalizer.  Large or small, complex or simple, enterprise or startup, all organizations with in-house software wrangle with similar issues.  Oh, don’t misunderstand.  I realize that some shops write web apps, others mobile apps, and still others a hodgepodge of line-of-business software.  But when you peel away domain, interaction, and delivery mechanism, software is, well, software.

And so I find myself having similar conversations with a wide variety of people, in a wide variety of organizations.  I should probably explain just a bit about myself.  I work as an independent IT management consultant.  But these days, I have a very specific specialty.  Companies call me to do custom static analysis on their codebases or application portfolios, and then to present the findings to leadership as the basis for important strategic decisions.

As a simple example, imagine a CIO contemplating the fate of a 10-year-old Java codebase.  She might call me and ask, “Should I evolve this to meet our upcoming needs, or would starting from scratch prove more cost-effective in the long run?”  I would then do an assessment in which I treated the code as data and quantified things like dependence on outdated libraries (as an over-simplified example).  From there, I’d present a quantitatively-driven recommendation.
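
To give a feel for what “treating the code as data” means, here is a deliberately over-simplified sketch in the spirit of that example.  The package names and the approach are hypothetical illustrations, not the actual methodology or any real assessment’s findings.

import re
from collections import Counter
from pathlib import Path

# Hypothetical list of packages considered outdated for this illustration.
OUTDATED_PACKAGES = ("org.apache.struts", "com.sun.xml", "javax.xml.bind")

def count_outdated_imports(root):
    """Count how many Java files under root import each 'outdated' package."""
    counts = Counter()
    for source_file in Path(root).rglob("*.java"):
        text = source_file.read_text(errors="ignore")
        for package in OUTDATED_PACKAGES:
            if re.search(r"^import\s+" + re.escape(package), text, re.MULTILINE):
                counts[package] += 1
    return counts

if __name__ == "__main__":
    for package, file_count in count_outdated_imports(".").most_common():
        print(package + ": " + str(file_count) + " files")

Numbers like these, aggregated across a codebase or portfolio, become the raw material for the kind of quantitatively-driven recommendation described above.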

So you can probably imagine how I might call code a great equalizer.  It may do different things, but it has common structural underpinnings that I can quantify.  When I show up, it also has another commonality.  Something about it prompts the business to feel dissatisfied.  I only get calls when the business has experienced bad outcomes as measured by software quality from the outside in.
