DaedTech

Stories about Software

Fundamentals of Web Application Performance Testing

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their offering that can help you with your own performance testing.

Software development, as a profession, has evolved in fits and starts over the years.  When I think back a couple of decades, I find myself a little amazed.  During the infancy of the web, hand-coding PHP (or Perl) live on a production machine seemed perfectly fine.

At first blush, that might just seem like sloppiness.  But don’t forget that the stakes were much lower at the time.  Messing up a site that displayed song lyrics for a few minutes didn’t matter very much.  Web developers of the time had much less incentive to institute pre-production verification processes.  Just make the changes and see if anything breaks.  Anything you don’t catch, your users will.

The Evolution of Web Application Testing

Of course, that attitude couldn’t survive much beyond the early days of dynamic web content.  As soon as e-commerce gained steam in the web development world, the stakes went up.  Amateurs walked the tightrope of production edits while professional shops started to create and test in development or sandbox environments.

As I said initially, this didn’t happen in some kind of uniform move.  Instead, it happened in fits and starts.  Some lagged behind the curve, continuing to rely on their users for testing.  Others moved testing into sandbox environments and then pushed the envelope further: they began to automate.

Web development then took another step forward as automation worked its way into the testing strategy.  Sophisticated shops had their QA environments as a check on production releases.  But their developers also began to build automated test suites.  They then used these to guard against regressions and to ensure proper application behavior.

Eventually, testing matured to a point where it spread out beyond straightforward unit test suites and record-playback-style integration tests.  Organizations got to know the so-called test pyramid.  They built increasingly sophisticated, nuanced test suites.

Web Application Testing Today

Building upon all of this backstory, we’ve seen the rise of the DevOps movement in recent years.  This movement emphasizes automating the entire delivery pipeline, from writing the code to running it in production.  So the stakes for automated testing are higher than ever.  The only way to automate the whole thing is to have bulletproof verification.

This new dynamic shines a light on an oft-ignored element of the testing strategy.  I’m talking specifically about performance testing for your web application.  Automated unit and acceptance testing has long since become a de facto standard.  But now automated performance testing is getting to that point.

Think about it.  We got burned by hand-editing code on the production server.  So we set up sandboxes and tested manually.  Our applications grew too complex for manual testing to handle.  So we built test suites and automated these checks.  We needed production releases more frequently.  So we automated the deployment process.  Now, we push code efficiently through build, test, and deployment.  But we don’t know how it will behave in the wild.

Web application performance testing fixes that.  If you don’t yet have such a strategy, you need one.  So let’s take a look at the fundamentals for adding this to your testing approach.  And I’ll keep this general enough to apply to your tech stack, whatever it may be.
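To make that concrete before we dive in, here’s a minimal sketch of what an automated performance check might look like.  I’m using Python purely for illustration; the endpoint URL, sample count, and latency budget are all hypothetical stand-ins for values that make sense in your world.

```python
# A minimal sketch of an automated performance check, not a full load test.
# The endpoint, sample count, and latency budget below are hypothetical;
# substitute values that fit your application.
import statistics
import time

import requests

ENDPOINT = "https://your-app.example.com/api/products"  # hypothetical URL
SAMPLES = 50
P95_BUDGET_SECONDS = 0.5


def measure_latency(url: str) -> float:
    """Time a single GET request, end to end, in seconds."""
    start = time.perf_counter()
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return time.perf_counter() - start


def main() -> None:
    latencies = [measure_latency(ENDPOINT) for _ in range(SAMPLES)]
    # quantiles(n=20) returns 19 cut points; index 18 is the 95th percentile.
    p95 = statistics.quantiles(latencies, n=20)[18]
    print(f"p95 latency: {p95:.3f}s over {SAMPLES} requests")
    assert p95 <= P95_BUDGET_SECONDS, f"p95 of {p95:.3f}s exceeds the budget"


if __name__ == "__main__":
    main()
```

A real strategy would involve dedicated load testing tooling and realistic traffic patterns.  But the core principle holds: measure, then assert against a budget.  That assertion is what lets the check run unattended in your automated pipeline.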

The Different Pair Programming Styles

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, check out their products that help you wrangle production issues in a hurry.

The world of professional programming produces some pretty intense debates.  For example, take a look at discussions about whether and how to comment code.  We have a hard time settling such debates because studying professional programming scientifically is hard.  We can’t really ask major companies to build the same software twice, using one control group and one experimental group.  So we muddle through with lots of anecdotes and opinions and relatively scant empirical data.  Because of this conundrum, I want to talk today about pair programming styles rather than taking a stance on whether you should pair program or not.

I’ve talked previously about the benefits of pair programming from the business’s perspective.  But I concluded that post the same way that I’m introducing this one.  You can realize benefits, but you have to evaluate whether it makes sense for you or not.  To make a good evaluation, you should understand the different pair programming styles and how they work.

That’s right.  Pair programming involves more than just throwing two people together and telling them to go nuts.  Over the years, practitioners have developed techniques to employ in different situations.  Through practice and experimentation, they have improved upon and refined these techniques.

The Effect of Proficiency on Pair Programming Styles

Before looking at the actual protocols, let’s take a brief detour through the idea of varied developer skill levels.  Although we have a seemingly unique penchant for expressing our skill granularly, I’ll offer just two developer skill levels: novice and expert.  I know, I know.  But those two will keep complexity to a minimum and serve well for explaining the different pairing models.

With our two skill levels in mind, consider the three possible pairing combinations:

  • Expert-Expert
  • Expert-Novice
  • Novice-Novice

Now when I talk about expertise here, bear in mind that this accounts for context and not just general industry experience.  Tech stack, codebase familiarity, and even domain knowledge matter here.  I have two CS degrees and years of experience in several OOP languages.  But if I onboarded with your GoLang team tomorrow, you could put me safely in the novice camp until I got my bearings.

Each of these pairing models has its advantages and disadvantages.  Sometimes, however, fate may force your hand, depending on who is available.  Understanding the different pairing models will help you be effective when it does.  It also bears mentioning that novice-novice pairings offer a great deal of learning for both novices, but they come with risk.  The suitability of such a pairing therefore depends more on your appetite for risk than on anything intrinsic to the model.

What Is Performance Testing? An Explanation for Business People

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their comprehensive solution for gaining insight into your application’s performance.

The world of enterprise IT neatly divides concerns between two camps: IT and the business.  Technical?  Then you belong in the IT camp.  Otherwise, you belong in the business camp.

Because of this division, an entire series of positions exists to help these groups communicate.  But since I don’t have any business analysts at my disposal to interview, I’ll just bridge the gap myself today.  From the perspective of technical folks, I’ll explain performance testing in language that matters to the business.  And, don’t worry.  I’ve spent enough years doing IT management consulting to have mostly overcome the curse of knowledge.  I can empathize with anyone who understands performance testing only in the vaguest of terms.

A Tale of Vague Woe

To prove it, let me conjure up a maddening hypothetical.  You’ve coordinated with the software organization for months.  This has included capturing the voice of the customer and working with architects and developers to create a vision for the product.  You’ve seen the project through conception, beta testing, and eventual production release.  And you’re pretty proud of the way this has gone.

But now you find yourself more than a little exasperated.  A few scant months after the production launch, weird and embarrassing things continue to go wrong.  It started out as a trickle and then grew into a disturbing, steady stream.  Some users report agonizingly slow experiences while others don’t complain.  Some report seeing weird error messages and screens, while others have a normal experience.  And, of course, when you try to corroborate these accounts, you don’t see them.

You inquire with the software developers and teams, but you can tell they don’t quite take this at face value.  “Hmmm, that shouldn’t happen,” they tell you.  Then they concede that maybe it could, but they shrug and say there’s not much they can do unless you help them reproduce the issue.  Besides, you know users, amirite?  Always reporting false issues because they have unrealistic expectations.

Sometimes you wonder if the developers don’t have the right of it.  But you know you’re not imagining the exasperated phone calls and negative social media interactions.  Worse, paying users are leaving, and fewer new ones sign up.  Whether perception or reality, that hits you in the pocketbook.

Pair Programming Benefits: The Business Rationale

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their Retrace product that consolidates all of your production monitoring needs into one tool.

During the course of my work as a consultant, I wind up working with many companies adopting agile practices, most commonly following Scrum.  Some of these practices they embrace easily, such as continuous integration.  Others cause some consternation.  But perhaps no practice furrows more brows in management than pair programming.  Whatever pair programming benefits they can imagine, they always harbor a predictable objection.

Why would I pay two people to do one job?

Of course, they may not state it quite this bluntly (though many do).  They may talk more generally in terms of waste and inefficiency.  Or perhaps they offer tepid objections related to logistical concerns.  Doesn’t each requirement need one and only one owner?  But in almost all cases, it amounts to the same essential source of discomfort.

I believe this has its roots in early management theories, such as scientific management.  These gave rise to the notion of workplaces as complex systems, wherein managers deployed workers as resources intended to perform tasks repetitively and efficiently.  Classic management theory wants individual workers at full utilization.  Give them a task, have them specialize in it, and let them realize efficiency through that specialty.

Knowledge Work as a Wrinkle

Historically, this made sense.  And it made particular sense for manufacturing operations with a global focus.  These organizations took advantage of hyper-specialization to realize economies of scale, which they parlayed into a competitive advantage.

But fast forward to 2017 and think of workers writing software instead of assembling cars.  Software developers do something called knowledge work, which has a much different efficiency profile than manual labor.  While you wouldn’t reasonably pay two people to pair up operating one shovel to dig a ditch, you might pay them to pair up and solve a mental puzzle.

So while the atavistic aversion to pairing makes sense given our history, we should move past that in modern software development.

To convince reticent managers to at least hear me out, I ask them to engage in a thought exercise.  Do they hire software developers based on how many words per minute they can type?  What about how many lines of code per hour they can crank out?  Neither of these things?

These questions have obvious answers.  After I hear those answers, I ask them to concede that software development involves more thinking than typing.  Once they concede that point, the entrenched idea of attacking a problem with two people as wasteful becomes a little less entrenched.  And that’s a start.

How to Evaluate Software Quality from Source Code

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their Retrace offering that gives you everything you need to track down production issues.

I’ll understand if you read the title of this post and smirked.  I probably would have done so, opening it up only to see what profound wisdom awaited me.  Review the code, Captain Obvious.  

So yes, rest assured, I understand the easy assumption that one can ascertain a codebase’s quality by opening it up and starting to review it.  But what does this really tell you?  What comes out of this activity?  Simply put, your opinion of the codebase’s quality comes out of this activity.

I actually have a consulting practice doing custom static analysis on client codebases.  I help managers and executives make strategic decisions about their applications.  And I do this largely by treating their code as data and building numerically based cases.

Initially, the idea for this practice arose out of some observations I’d made a while back.  I watched consultants tasked with critical evaluations of codebases, and I found that they did exactly what I mentioned in the first paragraph.  They reviewed it, operating from the premise, I’m an expert, so I’ll record my anecdotal impressions and offer them as evidence.  That put a metaphorical pebble in my shoe and bothered me.  So I decided to chase a more empirical concept of code quality with my practice.

Don’t get me wrong.  The proprietary nature of source code and outcome data in the industry makes truly scientific experiments difficult.  But I can still automate the inquiries and use actual, relative data to compare properties.  So from that perspective, I’ll offer you more data-driven ways to evaluate software quality from source code.
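To give a flavor of what treating code as data looks like, here’s a toy sketch.  It assumes a Python codebase for simplicity, and it computes only function length statistics, which is far shallower than real static analysis.  But it illustrates the shift from recording impressions to deriving numbers.

```python
# A toy illustration of "code as data": derive numbers from source files
# instead of anecdotal impressions.  Real analysis goes much deeper; this
# just gathers function length statistics across a Python codebase.
import ast
import statistics
from pathlib import Path


def function_lengths(root: Path) -> list[int]:
    """Return the length, in lines, of every function defined under root."""
    lengths = []
    for path in root.rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that don't parse cleanly
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                lengths.append(node.end_lineno - node.lineno + 1)
    return lengths


if __name__ == "__main__":
    lengths = function_lengths(Path("."))  # point this at the codebase
    if lengths:
        print(f"functions analyzed: {len(lengths)}")
        print(f"mean length: {statistics.mean(lengths):.1f} lines")
        print(f"longest function: {max(lengths)} lines")
```

Numbers like these don’t settle quality debates on their own.  But they give you relative, comparable properties across codebases, and that beats dueling anecdotes.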
