DaedTech

Stories about Software


The First DaedTech Digest

In a post I wrote the other day, I mentioned the idea of a digest style of post.  So today, I'd like to give it a try.

You see this sort of thing all over the place.  So-called planet sites have been around for a long time, aggregating community-related articles into a single feed.  Examples include one of my personal favorites, the Morning Brew.

There’s just one difference in what I’m proposing.  Instead of gathering stuff that others have written, I’m going to digest the stuff that I’ve written.  In the last year, we’ve turned my paid blogging for other sites into a tech content business, taking blogging from a side hustle and hobby to a professional gig.  So, I write a lot of blog posts.

Historically, I’ve simply cross-posted these with canonical linking, leading in with “editorial note: I originally wrote this post for…”  But I’m thinking of taking DaedTech in a bit of a different direction than just generalized software-oriented blog posts.  More on that later.

The point here is that, instead of pushing one of these cross posts out per day, I’m going to do a single digest post per week containing posts that I’ve made.  I have about 90 backlogged drafts in my folder, so at first it’s going to be posts I made some months back.  But sooner or later, I’ll catch up, and the digest will cover the posts I’ve published in the preceding week.

But anyway, without further ado, here’s the digest.

Some Posts to Check Out

  • This is a piece that I wrote for the Monitis blog.  It’s about threat modeling and the woes of being an e-retailer and guarding yourself against criminals and ne’er-do-wells.
  • I wrote a post for TechTown that was a primer about unit testing in C#.  It gives you a back to basics explanation, the value proposition, and the simplest imaginable examples of writing unit tests.
  • This is another post that I wrote for Monitis. It’s about the C# IEnumerable construct and how, if you misunderstand it, you can kill your site’s performance.  This has to do with how IEnumerable can encapsulate deferred execution, and that it only promises a strategy for obtaining items, rather than giving you those items.
  • I wrote this post for SubMain.  It’s about how something seemingly inconsequential, spell checking your code (specifically, C#), is more important than you might think.  There are subtle considerations you might not have thought of.
  • This post is actually going to become part of Microsoft’s official documentation!  Seriously, no kidding.  Bill Wagner wrote to Patrick and me about this post, and it’s now in their documentation build on GitHub.  Anyway, I wrote it for NDepend, and it’s a walk back through past major versions of C#, reflecting on nearly two decades of the language.  It was a fun journey down memory lane.
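The deferred-execution trap mentioned above is easy to see in a few lines.  As a minimal, hypothetical sketch (not taken from the original post), notice how every fresh enumeration of a LINQ query re-runs the underlying work, while materializing it once does not:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // Where() builds a deferred query; nothing executes until enumeration.
        IEnumerable<int> expensive = Enumerable.Range(1, 5)
            .Where(n =>
            {
                Console.WriteLine($"Evaluating {n}"); // stands in for a slow call
                return n % 2 == 0;
            });

        // Each of these enumerations re-runs the filter from scratch --
        // the kind of hidden repeated work that can kill performance.
        Console.WriteLine(expensive.Count());
        Console.WriteLine(expensive.Count());

        // Materializing once with ToList() pays the cost a single time.
        List<int> materialized = expensive.ToList();
        Console.WriteLine(materialized.Count);
        Console.WriteLine(materialized.Count);
    }
}
```

The key point: the `IEnumerable` promises a strategy for obtaining items, so anything holding one should think about how many times it will actually be enumerated.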

And, that’s it.   Happy reading, and happy Friday!


Fundamentals of Web Application Performance Testing

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their offering that can help you with your own performance testing.

Software development, as a profession, has evolved in fits and starts over the years.  When I think back a couple of decades, I find myself a little amazed.  During the infancy of the web, hand-coding PHP (or Perl) live on a production machine seemed perfectly fine.

At first blush, that might just seem like sloppiness.  But don’t forget that stakes were much lower at the time.  Messing up a site that displayed song lyrics for a few minutes didn’t matter very much.  Web developers of the time had much less incentive to install pre-production verification processes.  Just make the changes and see if anything breaks.  Anything you don’t catch, your users will.

The Evolution of Web Application Testing

Of course, that attitude couldn’t survive much beyond the early days of dynamic web content.  As soon as e-commerce gained steam in the web development world, the stakes went up.  Amateurs walked the tightrope of production edits while professional shops started to create and test in development or sandbox environments.

As I said initially, this didn’t happen in some kind of uniform move.  Instead, it happened in fits and starts.  Some lagged behind the curve, continuing to rely on their users for testing.  Others moved testing into sandbox environments and pushed the envelope further.  They began to automate.

Web development then took another step forward as automation worked its way into the testing strategy.  Sophisticated shops had their QA environments as a check on production releases.  But their developers also began to build automated test suites.  They then used these to guard against regressions and to ensure proper application behavior.

Eventually, testing matured to a point where it spread out beyond straightforward unit test suites and record-playback-style integration tests.  Organizations got to know the so-called test pyramid.  They built increasingly sophisticated, nuanced test suites.

Web Application Testing Today

Building upon all of this backstory, we’ve seen the rise of the DevOps movement in recent years.  This movement emphasizes automating the entire delivery pipeline, from writing the code to its functioning in production.  So the stakes for automated testing are higher than ever.  The only way to automate the whole thing is to have bulletproof verification.

This new dynamic shines a light on an oft-ignored element of the testing strategy.  I’m talking specifically about performance testing for your web application.  Automated unit and acceptance testing has long since become a de facto standard.  But now automated performance testing is getting to that point.

Think about it.  We got burned by hand-editing code on the production server.  So we set up sandboxes and tested manually.  Our applications grew too complex for manual testing to handle.  So we built test suites and automated these checks.  We needed production rolls more frequently.  So we automated the deployment process.  Now, we push code efficiently through build, test, and deployment.  But we don’t know how it will behave in the wild.

Web application performance testing fixes that.  If you don’t yet have such a strategy, you need one.  So let’s take a look at the fundamentals for adding this to your testing approach.  And I’ll keep this general enough to apply to your tech stack, whatever it may be.
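To give a concrete taste of the kind of check a performance-testing strategy automates, here is a minimal, stack-agnostic sketch in Python.  The timings below are simulated stand-ins for real HTTP request measurements, and the 95th-percentile budget is a hypothetical example, not a recommendation:

```python
import random
import statistics

def percentile(samples, pct):
    """Return the pct-th percentile (0-100) of samples, nearest-rank method."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, int(round(pct / 100.0 * len(ordered))) - 1))
    return ordered[k]

def check_latency_budget(latencies_ms, p95_budget_ms):
    """Pass/fail check: did the 95th-percentile latency stay under budget?"""
    p95 = percentile(latencies_ms, 95)
    return p95 <= p95_budget_ms, p95

# Simulated request timings; in a real suite these would come from timing
# actual requests against a staging environment under generated load.
random.seed(42)
samples = [random.gauss(120, 15) for _ in range(1000)]

ok, p95 = check_latency_budget(samples, p95_budget_ms=160)
print("mean:", round(statistics.mean(samples), 1), "p95:", round(p95, 1), "ok:", ok)
```

The point isn’t the arithmetic; it’s that a check like this can run in the pipeline on every build, turning “how will it behave in the wild?” into a pass/fail gate.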

Read More


What Is Performance Testing? An Explanation for Business People

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their comprehensive solution for gaining insight into your application’s performance.

The world of enterprise IT neatly divides concerns between two camps: IT and the business.  Technical?  Then you belong in the IT camp.  Otherwise, you belong in the business camp.

Because of this division, an entire series of positions exists to help these groups communicate.  But since I don’t have any business analysts at my disposal to interview, I’ll just bridge the gap myself today.  From the perspective of technical folks, I’ll explain performance testing in language that matters to the business.  And, don’t worry.  I’ve spent enough years doing IT management consulting to have mostly overcome the curse of knowledge.  I can empathize with anyone understanding performance testing only in the vaguest of terms.

A Tale of Vague Woe

To prove it, let me conjure up a maddening hypothetical.  You’ve coordinated with the software organization for months.  This has included capturing the voice of the customer and working with architects and developers to create a vision for the product.  You’ve seen the project through conception, beta testing, and eventual production release.  And you’re pretty proud of the way this has gone.

But now you find yourself more than a little exasperated.  A few scant months after the production launch, weird and embarrassing things continue to go wrong.  It started out as a trickle and then grew into a disturbing, steady stream.  Some users report agonizingly slow experiences while others don’t complain.  Some report seeing weird error messages and screens, while others have a normal experience.  And, of course, when you try to corroborate these accounts, you don’t see them.

You inquire with the software developers and teams, but you can tell they don’t quite take this at face value.  “Hmmm, that shouldn’t happen,” they tell you.  Then they concede that maybe it could, but they shrug and say there’s not much they can do unless you help them reproduce the issue.  Besides, you know users, amirite?  Always reporting false issues because they have unrealistic expectations.

Sometimes you wonder if the developers don’t have the right of it.  But you know you’re not imagining the exasperated phone calls and negative social media interactions.  Worse, paying users are leaving, and fewer new ones sign up.  Whether perception or reality, that hits you in the pocketbook.

Read More


Integrating APM into Your Testing Strategy

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their tooling to help you with your APM needs.

Does your team have a testing strategy?  In 2017, I have a hard time imagining that you wouldn’t at least have some kind of strategy, however rudimentary.  Unlike a couple of decades ago, you hear less and less about people just changing code on the production server and hoping for the best.

At the very least, you probably have a QA group, or at least someone who serves in that role prior to shipping your software.  You write the code, do something to test it, and then ship it once the testers bless it (or at least notate “known issues”).

From there, things probably run the gamut among those of you reading.  Some of you probably do what I’ve described and little more.  Some of you probably have multiple pre-production environments to which a continuous integration setup automatically deploys builds.  Of course, it only deploys those builds assuming all automated unit, integration, and smoke tests pass and assuming that your static analysis doesn’t flag any show-stopper issues.  Once deployed, a team of highly skilled testers performs exploratory testing.  Or, maybe, you do something somewhere in between.

But, whatever you do, you can always do more.  In fact, I encourage you always to look for new ways to test.  And today I’d like to talk about an idea for just such a thing.  Specifically, I think you can leverage application performance management (APM) software to help your testing efforts.  I say this in spite of the fact that most shops have traditionally taken advantage of these tools only in production.
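To make the idea a bit more concrete, here is a minimal, hypothetical sketch (in Python, not tied to any particular APM product) of what capturing APM-style timing spans during a test run and asserting on them might look like:

```python
import functools
import time

# Toy stand-in for an APM agent: a registry of (name, duration) spans
# recorded for every instrumented call during the test run.
recorded_spans = []

def traced(fn):
    """Record (name, duration_in_seconds) for each call, like an APM span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            recorded_spans.append((fn.__name__, time.perf_counter() - start))
    return wrapper

@traced
def lookup_user(user_id):
    # Placeholder for a real repository or database call.
    return {"id": user_id, "name": "example"}

# In a test: exercise the code path, then assert on the captured spans --
# for instance, that no instrumented call blew past a latency budget.
lookup_user(42)
slow = [(name, d) for name, d in recorded_spans if d > 0.1]
print("spans:", len(recorded_spans), "over budget:", slow)
```

A real APM tool captures this kind of data for you in production; the suggestion here is simply that the same telemetry, pointed at a test environment, gives your test suite something new to assert on.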

Read More


What is Real User Monitoring?

Editorial note: I originally wrote this post for the Monitis blog.  You can check out the original here, at their site.  While you’re there, have a look around at their assortment of monitoring solutions.

Perhaps you’ve heard the term “real user monitoring” in passing.  Our industry generates no shortage of buzzwords, many of them vague and context dependent.  So you could be forgiven for scratching your head at this term.

Let’s go through it in some detail, in order to provide clarity.  But to do that, I’m going to walk you through the evolution of circumstance that created a demand for real user monitoring.  You can most easily understand a solution by first understanding the problem that it solves.

A Budding Entrepreneur’s Website

Let’s say that the entrepreneurial bug bites you.  You decide to build some kind of software as a service (SaaS) product.  Obviously, you need some time to build it and make it production ready, so you pick a target go-live date months from now.

But you know enough about marketing to know that you should start building hype now.  So, you put together a WordPress site and start a blog, looking to build a following.  Then, excited to get going, you make a series of posts.

And then, nothing.  I mean, you didn’t expect to hit the top of Hacker News, but you expected… something.  No one comments on social media or emails you to congratulate you or anything at all.

Frustrated, you decide to add a commenting plugin and some social media share buttons.  This, you reason, will provide a lower-friction means of offering feedback.  And still, absolutely nothing.  Now you begin to wonder if your hosting provider hasn’t played a cruel trick on you in which it only serves the site when you visit.

The Deafening Lack of Feedback

If perhaps it sounds like I empathize, that’s because I sincerely do.  Years and years ago when I started my first blog, I posted into the ether.  I had no idea if anyone read those early posts.  Of course, I was just having a good time putting my opinions out there and not trying to make a living, so I didn’t worry.  But nevertheless, I eventually felt frustrated.

The frustration arises from a lack of feedback.  You take some actions and then have no ability to see what effect they have.  Sure, you can see the post go live in your browser, but are you reaching anyone?  Has a single person read the post?  Or have thousands read the post and found it boring?  It’s like writing some code, but you’re required to hand it off to someone else to compile, run, and observe.  You feel blind.

Read More