DaedTech

Stories about Software


How to Deal with an Insufferable Code Reviewer

Time for another installment of reader question Monday.  Today, I’m going to answer a short and sweet question about how to deal with an “irksome” code reviewer.  It’s both simple and fairly open-ended, which makes it a good candidate for a blog post.

How do you deal with peer code review and irksome coworkers? Any trick?

Okay, first things first.  This could cover a lot of situations.

Theoretically, someone could be writing extremely problematic code and finding coworkers “irksome” simply for pointing that out.  And, at the complete other end of the spectrum, the code may be fine but a toxic culture creates endless nitpicking.

In environments like this, the code review becomes an adversarial activity, as much about political games as about code quality.

So you must introspect and also think in terms of your goals.  What is it that’s irking you, exactly, and what is the desired outcome?

I’ll frame the post roughly in terms of goals, and my advice for achieving them.  Really, that’s the only way to do it, since code review processes vary so widely from team to team.

A Bit of Perspective on Code Reviews and Code Reviewers

Before going into any advice, let’s consider the subject of code review itself.  There are two conceptual levels at play.

Obvious Level: A Practice with Demonstrable Benefit

A well-conducted code review process improves code quality and helps developers learn.  In his book Code Complete, Steve McConnell cited a study that found “formal code inspection” to be the most effective defect prevention method.

This put it ahead of even automated unit tests in the study.

But, beyond that, code review has important human ramifications as well.  People learn and share knowledge through this process.

And they use it to prevent the kind of defects that result in frustration for the team: rework, embarrassment and even a need to come in on weekends.  People with a stake in the codebase get emotionally invested.

Subtle Level: Small Kings in Small Kingdoms

There’s also another, more subtle element at play, however.

I talk extensively about these terms in my book, Developer Hegemony, but you can read a brief primer here.  Software developers (to my endless irritation) are pragmatists (line-level laborers) with almost no actual influence in the corporate world.

When you participate in the so-called “technical track,” you typically never have people reporting to you, earning only honorific titles like “senior,” “tech lead,” and “architect.”

So often, the only simulacrum developers get of actual organizational power comes during interviews and code reviews.  Decent human beings handle this well, looking to mentor and teach.

But there are a lot of less-than-decent human beings out there, who relish these opportunities to wield what little power they have.  The phrase “a little man with a little power” comes to mind.


Read More


Fundamentals of Web Application Performance Testing

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, take a look at their offering that can help you with your own performance testing.

Software development, as a profession, has evolved in fits and starts over the years.  When I think back a couple of decades, I find myself a little amazed.  During the infancy of the web, hand-coding PHP (or Perl) live on a production machine seemed perfectly fine.

At first blush, that might just seem like sloppiness.  But don’t forget that the stakes were much lower at the time.  Messing up a site that displayed song lyrics for a few minutes didn’t matter very much.  Web developers of the time had much less incentive to put pre-production verification processes in place.  Just make the changes and see if anything breaks.  Anything you don’t catch, your users will.

The Evolution of Web Application Testing

Of course, that attitude couldn’t survive much beyond the early days of dynamic web content.  As soon as e-commerce gained steam in the web development world, the stakes went up.  Amateurs walked the tightrope of production edits while professional shops started to create and test in development or sandbox environments.

As I said initially, this didn’t happen in some kind of uniform move.  Instead, it happened in fits and starts.  Some lagged behind the curve, continuing to rely on their users for testing.  Others moved testing into sandbox environments and pushed the envelope besides.  They began to automate.

Web development then took another step forward as automation worked its way into the testing strategy.  Sophisticated shops had their QA environments as a check on production releases.  But their developers also began to build automated test suites.  They then used these to guard against regressions and to ensure proper application behavior.

Eventually, testing matured to a point where it spread out beyond straightforward unit test suites and record-playback-style integration tests.  Organizations got to know the so-called test pyramid.  They built increasingly sophisticated, nuanced test suites.

Web Application Testing Today

Building upon all of this backstory, we’ve seen the rise of the DevOps movement in recent years.  This movement emphasizes automating the entire delivery pipeline, from writing the code to running it in production.  So the stakes for automated testing are higher than ever.  The only way to automate the whole thing is to have bulletproof verification.

This new dynamic shines a light on an oft-ignored element of the testing strategy.  I’m talking specifically about performance testing for your web application.  Automated unit and acceptance testing has long since become a de facto standard.  But now automated performance testing is getting to that point.

Think about it.  We got burned by hand-editing code on the production server.  So we set up sandboxes and tested manually.  Our applications grew too complex for manual testing to handle.  So we built test suites and automated these checks.  We needed production rolls more frequently.  So we automated the deployment process.  Now, we push code efficiently through build, test, and deployment.  But we don’t know how it will behave in the wild.

Web application performance testing fixes that.  If you don’t yet have such a strategy, you need one.  So let’s take a look at the fundamentals for adding this to your testing approach.  And I’ll keep this general enough to apply to your tech stack, whatever it may be.
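
To make the core idea concrete before we go any further, here’s a minimal sketch in C#.  It simply fires a batch of concurrent requests at an endpoint (the URL here is a hypothetical stand-in) and reports the response-time distribution.  Real performance testing tools add ramp-up, think time, and proper reporting, but the fundamental measurement looks something like this:

using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class CrudeLoadTest
{
    // Hypothetical endpoint; point this at a page you actually want to measure.
    const string Url = "https://example.com/api/products";

    static async Task Main()
    {
        using var client = new HttpClient();

        // Simulate 50 concurrent users, each timing a single request.
        long[] timings = await Task.WhenAll(
            Enumerable.Range(0, 50).Select(async _ =>
            {
                var watch = Stopwatch.StartNew();
                using var response = await client.GetAsync(Url);
                watch.Stop();
                return watch.ElapsedMilliseconds;
            }));

        Array.Sort(timings);
        Console.WriteLine($"median: {timings[timings.Length / 2]} ms");
        Console.WriteLine($"95th percentile: {timings[(int)(timings.Length * 0.95)]} ms");
        Console.WriteLine($"max: {timings[^1]} ms");
    }
}

Even something this crude tells you more about behavior in the wild than your unit test suite ever will.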

Read More


How to Use NDepend’s Trend Charts

Editorial note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give the trend chart functionality a try.

Imagine a scene for a moment.  A year earlier, a corporate VP spun up a major software project for his organization.  He brought a slew of his organization’s software developers into the project.  But he also needed to add more staff in the form of contractors.

This strained the budget, so he cut a few corners in terms of team member experience.  The VP reasoned that he could make up for this with strategic use of experienced architects up front.  Those architects would prototype good patterns and make it so the less seasoned contractors could just kind of paint by numbers.  The architects spent a few months doing just that and handed the work off to the contractors.

Fast forward to the present.  Now a consultant sits in a nice office, explaining to a beleaguered VP how they got so far behind schedule.  I can picture this scene quite easily because organizations hire me to be this consultant.  I live this scene over and over again.

NDepend Trend Charts

Concepts like technical debt help quite a bit.  I also enlist various other metaphors to help them understand the issues that they face.  But nothing hits home like a visual.  I’ve described this before.  Generate an actual dependency map of their codebase and show it next to the ones the architects created in Visio, and you invariably see a disconnect.

Today, I’d like to take a look at another visual feature of NDepend: trend charts.  These allow you to see a graph-style representation of your codebase’s properties as a function of time.  And you can customize them a great deal.


In the scene I painted for you a moment ago, the VP—and the people in his program—feel pain for a specific reason.  They go far too long without reconciling the plan with reality.  I come along a year in and generate a diagram that they should have looked at all along.

Trend charts, by design, help combat that problem.  They allow you to get a feel for strategic properties of a codebase, and, more to the point, to see how those properties vary with time.  You can take advantage of that in some powerful ways.
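
To give you a flavor of the customization: each trend chart plots one or more trend metrics, and a trend metric amounts to a CQLinq query that returns a single value per analysis.  Roughly, from memory (check NDepend’s documentation for the exact syntax), one of the built-in ones looks like this:

// <TrendMetric Name="# Lines of Code" Unit="loc" />
codeBase.NbLinesOfCode

Swap in any scalar-valued query you like, average method complexity, say, and the chart will track it from build to build.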

Read More


Software Grows Too Quickly for Manual Review Only

Editorial note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, have a look at CodeIt.Right, which can help you with automated code reviews.

How many development shops do you know that complain about having too much time on their hands?  Man, if only we had more to do.  Then we wouldn’t feel bored between completing the perfect design and shipping to production … said no software shop, ever.  Software proliferates far too quickly for that attitude ever to take root.

This happens in all sorts of ways.  Commonly, the business or the market exerts pressure to ship.  When you fall behind, your competitors step in.  Other times, you have the careers and reputations of managers, directors, and executives on the line.  They’ve promised something to someone and they rely on the team to deliver.  Or perhaps the software developers apply this drive and pressure themselves.  They get into a rhythm and want to deliver new features and capabilities at a frantic pace.

Whatever the exact mechanism, software tends to balloon outward at a breakneck pace.  And then quality scrambles to keep up.

Software Grows via Predictable Mechanisms

While the motivation for growth may remain nebulous, the mechanisms for that growth do not.  Let’s take a look at how a codebase accumulates change.  I’ll order these by pace, if you will.

  • Pure maintenance mode, in SDLC parlance.
  • Feature addition to existing products.
  • Major development initiatives going as planned.
  • Crunches (death marches).
  • Copy/paste programming.
  • Code generation.

Of course, you could offer variants on these themes, and they are not mutually exclusive.  But nevertheless, the idea remains.  Loosely speaking, you add code sparingly to legacy codebases in support mode.  And then the pace increases until you get so fast that you literally write programs to write your programs.

The Quality Conundrum

Now, think of this in another way.  As you go through the list above, consider what quality control measures tend to look like.  Specifically, they tend to vary inversely with the speed.

Even in a legacy codebase, fixes tend to involve a good bit of testing for fear of breaking production customers.  We treat things in production carefully.  But during major or greenfield projects, we might let that slip a little, in the throes of productivity.  Don’t worry — we’ll totally do it later.

But during a death march?  Pff.  Forget it.  When you slog along like that, tons of defects in production qualify as a good problem to have.  Hey, you’re in production!

And it gets even worse with the last two items on my bulleted list.  I’ve observed that the sorts of shops and devs that value copy/paste programming don’t tend to worry a lot about verification and quality.  Does it compile?  Ship it.  And by the time you get to code generation, the problem becomes simply daunting.  You’ll assume that the tool knows what it’s doing and move on to other things.

As we go faster, we tend to spare fewer thoughts for quality.  Usually this happens because of time pressure.  So ironically, when software grows the fastest, we tend to check it the least.

Read More


A Look at Some Unit Test Framework Options for .NET

Editorial note: I originally wrote this post for the Stackify blog.  You can check out the original here, at their site.  While you’re there, have a look at their offerings, Prefix and Retrace.

If you enjoy the subject of human cognitive biases, you should check out the curse of knowledge.  When dealing with others, we tend to assume they know what we know.  And we do this when no justification for the assumption exists.

Do you fancy a more concrete example?  Take a new job and count how many people bombard you with company jargon and acronyms, knowing full well you just started a few hours ago.  This happens because these folks cannot imagine not knowing these things, at least not without expending considerable mental effort.

Why do I lead with this in a post about unit test frameworks?  Well, it seems entirely appropriate to me.  I earn my living as an IT management and strategy consultant, which means I spend time at many companies, helping them improve their software development practice.  Because of this, I have occasion to see an awful lot of introductions to unit testing.  And these introductions usually subconsciously assume knowledge of unit testing.

“It’s easy!  Just pick a unit test runner and a coverage tool, and get those set up.  Oh, you’ll also probably want to pick a mocking framework, and here are some helpful NuGet packages.  Anyway, now just write a test.  We’ll start with a calculator class…”

Today, I will do my best to spare you that.  I have some practice with this, since I write a lot, publish courses, and train developers.  So let’s take a look at test frameworks.

What Are Unit Tests?

Thought you’d caught me there, didn’t you?  Don’t worry.  I won’t just assume you know these things.

Let’s start with unit testing in its most basic form, leaving all other subjects aside.  You want to focus on a piece of functionality in your code and test it in isolation.  For example, let’s say that we had the aforementioned Calculator class and that it contained an Add(int, int) method.  Let’s say that you want to write some code to test that method.

using System;

public class CalculatorTester
{
    public void TestAdd()
    {
        var calculator = new Calculator();

        // Exercise the method under test and report the result by hand.
        if (calculator.Add(2, 2) == 4)
            Console.WriteLine("Success");
        else
            Console.WriteLine("Failure");
    }
}

No magic there.  I just create a class called “CalculatorTester” and then write a method that instantiates and exercises Calculator.Add().  You could write this knowing nothing about unit testing practice at all.  And, if someone had told you to automate the testing of Calculator.Add(), you may have done this exact thing.

Congratulations.  You have written a unit test.   I say this because it focuses on a method and tests it in isolation.
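
As a preview of where we’re headed, here’s roughly the same test rewritten against an actual test framework.  I’ve used xUnit for this sketch (it assumes the same hypothetical Calculator class), but the NUnit and MSTest versions read almost identically.  The framework discovers the test, supplies the assertion, and reports the result, so the console bookkeeping disappears:

using Xunit;

public class CalculatorTests
{
    [Fact]
    public void Add_Returns_Sum_Of_Operands()
    {
        // Same Calculator as in the hand-rolled version above.
        var calculator = new Calculator();

        Assert.Equal(4, calculator.Add(2, 2));
    }
}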

Read More