DaedTech

Stories about Software


Software Monitoring: The Things that Might Interest You

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, have a look at the different sorts of production concerns that you can keep an eye on with their offering, some of which I address in this post.

If you have responsibility for software in production, I bet you’d like to know more about it.  I don’t mean that you’d like an extra peek into the bowels of the source code or to understand its philosophical place in the universe.  Rather, I bet you’d like to know more about how it behaves in the wild.

After all, from this opaque vantage point comes the overwhelming majority of maddening defects.  “But it doesn’t do that in our environment,” you cry.  “How can we even begin to track down a user report of, ‘sometimes that button doesn’t work right?'”

To combat this situation we have, since programmer time immemorial, turned to the log file.  In that file, we find answers.  Except, we find them the way an archaeologist finds answers about ancient civilizations.  We assemble cryptic, incomplete fragments and try to use them to deduce what happened long after the fact.  Better than nothing, but not great.

Because of the incompleteness and the lag, we seek other solutions.  With the rise in sophistication of tooling and the growth of the DevOps movement, we close the timing gap via monitoring.  Rather than wait for a user to report an error and then ask for a log file, we get out in front of the matter.  When something flies off the rails, our monitoring tools quickly alert us, and we begin triage immediately.

Common Monitoring Use Cases

Later in this post, I will get imaginative.  In writing this, I intend to expose you to some less common monitoring ideas that you might at least contemplate, if not outright implement.  But for now, let's consider some relatively blue-chip monitoring scenarios.  These transcend the basic nature of the application, applying equally well to web, mobile, and desktop apps.

Monitis offers a huge variety of monitoring services, as the name implies.  You can get your bearings about the full offering here.  This means that whatever you want to monitor, you can probably find an offering of theirs that covers it, unless your needs are truly unusual.  In that case, you might want to supplement their offering with some customized functionality for your own situation.

But let's say you've just signed up for the service and want to test drive it.  I can think of nothing simpler than "is this thing on?"  Wherever your application runs, you'd love some information about whether it runs when it should.  On top of that, you'd probably also like to know whether it dies unexpectedly and ignobly.  When your app crashes embarrassingly, you want to know about it.
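If you wanted to roll that most basic check yourself, it might look something like this minimal Python sketch.  The URL, timeout, and function name here are placeholders I've invented for illustration; a hosted monitoring service does this sort of thing for you, on a schedule, without you writing any code.

```python
import requests  # third-party HTTP client

# Placeholder URL -- point this at your own application's health endpoint.
HEALTH_URL = "https://example.com/health"

def is_it_on():
    """Crudest possible liveness check: did the app answer with a 200 at all?"""
    try:
        response = requests.get(HEALTH_URL, timeout=5)
        return response.status_code == 200
    except requests.RequestException:
        return False  # timeout, DNS failure, connection refused, and so on

if __name__ == "__main__":
    print("up" if is_it_on() else "down")
```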

Once you’ve buttoned up the real basics, you might start to monitor for somewhat more nuanced situations.  Does your code gobble up too many hardware resources, causing poor experience or added expense?  Does it interact with services or databases that fail or go offline?  In short, does your application wobble into sub-optimal states?
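To put a bit of concreteness behind "gobbling up hardware resources," here's an illustrative sketch of the kind of check a monitoring agent performs, using Python's psutil library.  The threshold numbers are arbitrary assumptions for the sake of the example, not recommendations.

```python
import psutil  # third-party library for sampling system resource usage

# Thresholds are arbitrary, illustrative numbers, not recommendations.
CPU_LIMIT_PERCENT = 85
MEMORY_LIMIT_PERCENT = 90

def resource_warnings():
    """Return human-readable warnings if the host looks unhealthy."""
    warnings = []
    cpu = psutil.cpu_percent(interval=1)      # sample CPU over one second
    memory = psutil.virtual_memory().percent  # current RAM utilization
    if cpu > CPU_LIMIT_PERCENT:
        warnings.append(f"CPU at {cpu}% exceeds {CPU_LIMIT_PERCENT}%")
    if memory > MEMORY_LIMIT_PERCENT:
        warnings.append(f"Memory at {memory}% exceeds {MEMORY_LIMIT_PERCENT}%")
    return warnings

if __name__ == "__main__":
    for warning in resource_warnings():
        print(warning)  # in real life, push these to your monitoring/alerting tool
```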

But what if we look beyond those basics?  Let’s explore some things you may never have contemplated monitoring about your software.

Read More


What DevOps Means for Static Analysis

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, have a look at NDepend’s static analysis offering.

For most of my career, software development has, in a very specific way, resembled mailing a letter.  You write the thing, and then you go through the standard mail-piece rigmarole.  This involves putting it into an envelope, addressing the envelope, putting a stamp on it, and then walking it over to the mailbox.  From there, you stuff it in.

At this point, you might as well have dropped the thing into some kind of rip in space-time for all you understand of what comes next.  Off it goes into the ether, and you hope that it arrives at its destination through some kind of logistical magic.  So it has generally gone with software.

We design it, architect it, and lovingly write it.  We package it up, test it, correct defects in it, and then we call it done.  From there, we fire it into the mailbox-black-hole of the software world: operations.  They take it and deploy it, or whatever, and then, by some magic we don't concern ourselves with, it runs in the real world.  Or so it has generally gone.

Problems with the Traditional Approach

With the benefit of hindsight, you can probably guess the main problem with this state of affairs.  So rather than enumerate it dryly in a series of bullet points, let me offer it up in story format.

You work as an application developer in some very large enterprise.  There, you build web apps.  And you take pride in your work.  You write clean code, you maintain the unit test suite, you collaborate dutifully with QA, and you generally do your best.

In fact, this effort even extends beyond your own dev environment and into as many environments as you can see.  You run load, smoke, and integration tests in QA and sandbox environments.  And, as a whole unit, your team does everything it can to ensure the integrity of the work.  But beyond the pre-prod environment, the fate of your application becomes an utter mystery.  Some group of folks located in a different time zone takes it from there.  You wish it well as it heads to production.

And then, one day, six months later, you get an incident report.  Apparently, some guy in Hungary or somewhere was doing something when, somehow, he got a null reference exception.  But don't worry, here's a brief description of what he said and a few thousand lines of some random log file.  Good luck with your repro!

Read More


APIs and the Principle of Least Surprise

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, have a look at some of the other authors for their blog.

I remember something pretty random about my first job.  In my cubicle, I had a large set of metal shelves that held my various and sundry programming texts.  And, featured prominently on that shelf, I had an enormous blue binder.

Back then, I spent my days writing drivers and custom Linux kernel modules.  I had to, because we made use of a real-time interface to write very precisely timed machine control code.  As you might imagine, a custom Linux kernel in 2005 didn't exactly come with a high-production-quality video walking users through the finer points.  In fact, it came with only its own version of the iconic Linux "man pages" for guidance.  These I printed out and put into the aforementioned blue binder.

I cannot begin to tell you how much I studied this blue binder.  I pored over it for wisdom and clues, feeling a sense of great satisfaction when I deciphered some cryptic function example.  This sort of satisfaction defined a culture, in fact.  You wore mastery of a difficult API as a badge of honor.  And, on the flip side, failure to master an API represented a failure on your part.

Death of “Manual Culture”

What a difference a decade makes.  No longer do we have battleship-gray Windows applications with dozens of menus and sub-menus, with hundreds of settings and thousands of "advanced settings."  No longer do we consider reading a gigantic blue documentation binder to be a good use of time.  And, more generally, no longer do we put the onus of navigating a learning curve on the user.  Instead, we look to lure users by making things as easy as possible.

Ten years ago, a coarse expression described people’s take on this responsibility.  I’ll offer the safe-for-work version: “RTM” or “Read the Manual.”  Ten years later, we have seen the death of RTM culture.  This applies to APIs and to user experiences in general.

Read More


A Look at the History of RDBMS

Editorial Note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, take a look at Monitis’s offering for all things related to website and networking monitoring.

If you had to pick a unifying technology to bring all developers together, you could do worse than selecting the relational database.  Of course, no topic can truly unify all developers.  But most of us who have written code for any length of time have at least dealt with a database in some capacity or another.

And, why not?  We could boil software down to two core components: data and behavior.  So, just as we all learn programming languages to express behavior, we also learn some means of recording and persisting our precious data.

When we put enough of this data together in some organized format, we have a database.  When we organize that database in a manner known as “relational,” we have a relational database.  And then, when we add functionality for managing and optimizing access to that relational data, we have a relational database management system (RDBMS).
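If you want to see "relational" in miniature, consider the following illustrative sketch, which uses the SQLite engine that ships with Python's standard library.  The table and column names are made up for the example.

```python
import sqlite3  # SQLite ships with Python and behaves like a tiny RDBMS

# In-memory database purely for illustration; table and column names are invented.
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
connection.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, "
    "author_id INTEGER REFERENCES authors(id), title TEXT)"
)
connection.execute("INSERT INTO authors (id, name) VALUES (1, 'Ada')")
connection.execute("INSERT INTO posts (id, author_id, title) VALUES (1, 1, 'Hello, Relations')")

# The 'relational' part: rows in one table refer to rows in another, and the
# management system answers questions that span the relationship.
rows = connection.execute(
    "SELECT authors.name, posts.title "
    "FROM posts JOIN authors ON posts.author_id = authors.id"
).fetchall()
print(rows)  # [('Ada', 'Hello, Relations')]
```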

No doubt you have some familiarity with these products.  They include such industry mainstays as Oracle, Microsoft’s SQL Server, PostgreSQL, and MySQL, among others.

In fact, they blend so seamlessly into the scenery that you can easily take them for granted.  But where did they come from and why?  And, how have they evolved over the years?  Today, let’s take a look back at the history of the RDBMS.

Read More


How to Analyze a Static Analyzer

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, take a look around at some of the other posts, and sign up for the RSS feed, if you’re so inclined.

First things first.  I really wanted to call this post, "who will analyze the analyzer," because I fancy myself clever.  This title would have mirrored the relatively famous Latin question from Juvenal's Satires: "who will guard the guards themselves?"  But I suspect that the confusion I'd cause with that title would outweigh any appreciation of my cleverness.

So, without any literary references whatsoever, I’ll talk about static analyzers.  More specifically, I’ll talk about how you should analyze them to determine fitness for your purpose.

Before I dive into that, however, let's do a quick refresher on the definition of a static analyzer.  This Stack Overflow question nails it pretty well, right at the beginning of the accepted answer.

Analyzing code without executing it. Generally used to find bugs or ensure conformance to coding guidelines.

Succinctly put, Aaron, and just so.  Most of what we do with code tends to be dynamic analysis.  Whether through automated tests or manual running of the program, we fire it up and see what happens.  Static analyzers, on the other hand, look at the code and use it to make deductions.  These include deductions both about runtime behavior and about the codebase itself.
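To make the "without executing it" part concrete, here's a toy sketch in Python.  It parses some source code with the standard library's ast module and flags a bare except clause, without ever running the code it inspects.  Both the sample source and the rule are contrived for illustration.

```python
import ast  # standard-library parser: we inspect source code without running it

# Contrived sample source and a toy "rule": flag bare except clauses.
SOURCE = """
def risky():
    try:
        do_something()
    except:
        pass
"""

tree = ast.parse(SOURCE)  # parsing only; do_something() never executes
for node in ast.walk(tree):
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except' swallows every error")
```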

What’s Your Goal?

Why rehash the definition?  Well, because I want to underscore the point that you can do many different things with static analyzers.  Even if you just think of them as “that thing that complains at me about the Microsoft guidelines,” they cover a whole lot more ground.

As such, your first step in sizing up the field involves setting your own goals.  What do you want out of the tool?  Some of them focus exclusively on code quality.  Others target specific concerns, such as behavioral correctness or security.  Still others simply offer so-called “linting.”  Some do a mix of many things.

Lay out your goals and expectations.  Once you've done that, you will find that you've narrowed the field considerably.  From there, you can proceed with a more apples-to-apples comparison.

Read More