DaedTech

Stories about Software


How Do You Know When to Touch Legacy Code?

Editorial Note: I originally wrote this post for the SmartBear blog.  Head over there and check out the original if you like it.  There’s a lot of good stuff over there worth a look.

Many situations in life seem to create no-win scenarios for you as you innocently go about your business. Here’s one that’s probably familiar to developers for whom pairing or code review is a standard part of the workflow.

You’re tasked with adding a bit of functionality to a long-lived, well-established code base. It’s your hope that new functionality means that you’re going to be writing purely new code, but, as you dig in, you realize you’re not that lucky. You need to open some Death Star of a method and make some edits.

[Image: Death Star]

The first thing that occurs to you while you’re in there is that the method is as messy as it is massive. So, you do a bit of cleanup, compacting some code, extracting some methods and generally abiding by the “Boy Scout Rule.” But when you submit for code review, a scandalized and outraged senior developer rushes over to your desk and demands to know if you’re insane. “Do you know how critical that method is!? One wrong move in there and you’ll take down the whole system!”

Yikes! Lesson learned. The next time you find yourself confronted by an ugly juggernaut, you’re careful to be downright surgical, only touching what is absolutely necessary. Of course, this time during code review, the senior developer takes you to task for not going the extra mile. “You know, that’s some pretty nasty code in there. Why didn’t you clean up a little as long as you were already in there?”

Frustrating, huh?

It’s hard to know when to touch existing code. This is especially true when it’s legacy code. Legacy is a term for which you might see any number of definitions. As a TDD practitioner, I personally like Michael Feathers’ definition of legacy code (from his book, Working Effectively with Legacy Code), which is “code without unit tests.”

But for the purposes of this post, let’s go with a more broadly relatable definition: legacy code is code that you’re afraid to touch. I’m sure you can relate to this, even if you’ve never had a senior developer yell at you about how dangerous touching it is (or, better yet, leave an all-caps comment in the code threatening anyone who touches it). You size up a giant method with its dozens of locals, loops nested 5 deep, and global variable access galore, and think to yourself, “I have no idea what might happen if I change something here.”
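To make that concrete, here’s a deliberately contrived sketch of the kind of method in question, along with the sort of modest “Boy Scout Rule” extraction described above. It’s a hypothetical example with invented names, not code from any real system.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical supporting types, invented purely for illustration.
public class OrderLine { public string Sku; public int Quantity; public decimal UnitPrice; }
public class Order { public List<OrderLine> Lines = new List<OrderLine>(); }
public static class GlobalConfig
{
    public static bool IsSkuActive(string sku) => !string.IsNullOrEmpty(sku);
}

public class OrderProcessor
{
    // "Before": the kind of logic that accumulates in a long-lived method,
    // with nested loops, global state, and arithmetic tangled together.
    public decimal ProcessOrders(List<Order> orders)
    {
        decimal total = 0;
        foreach (var order in orders)
        {
            foreach (var line in order.Lines)
            {
                if (line.Quantity > 0 && GlobalConfig.IsSkuActive(line.Sku))
                {
                    total += line.Quantity * line.UnitPrice;
                }
            }
        }
        return total;
    }

    // "After": a Boy Scout pass that extracts the inner loop into a named
    // helper.  The behavior is the same; the intent just reads at a glance.
    public decimal ProcessOrdersCleanedUp(List<Order> orders)
    {
        return orders.Sum(TotalFor);
    }

    private static decimal TotalFor(Order order)
    {
        return order.Lines
            .Where(line => line.Quantity > 0 && GlobalConfig.IsSkuActive(line.Sku))
            .Sum(line => line.Quantity * line.UnitPrice);
    }
}
```

Whether even that small a change is welcome depends entirely on who reviews it, which is exactly the dilemma.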

So, what do you do? Should you touch it?

Read More


That Code’s Not Dead — It Went To a Farm Upstate… And You’re Paying For It

Editorial Note: I originally wrote this post for the NDepend blog.  Head over there and check out the original, if you’re so inclined.  I encourage you to go give the NDepend blog a read, in general.

When it comes to pets, there’s a heartbreaking lie that parents often tell little children when they believe that those children are not yet ready to wrap their heads around the concept of death.  “Rex went to a nice farm in the countryside where he can run and play with all of the other animals all day!”  In this fantasy, Rex the dog isn’t dead — he lives on in perpetuity.

[Image: Farm upstate]

Memoirs of a Dead Method

In the source code of an application, you can witness a similar lie, but in the other direction.  Code lives on indefinitely, actively participating in the fate of an application, and yet we call it “dead.”  I know this because I’ve lived it.  Let me explain.

You see, I’m a method in a codebase — probably one that would be familiar to you.  My name is GetCustomerById(int id) and I hail from a class called CustomerDaoMySqlImpl that implements the interface ICustomerDao.

I was born into this world during a time of both promise and tumult — a time when the application architects were not sure whether the application would be using SQL Server or MySQL.  To hedge their bets, they mandated data access interfaces and had developers do a bit of prototyping with both tools.  And so I came into this world, my destiny to take a single integer and use MySQL to turn that integer into a customer.

I was well suited to this task.  My code was small, focused, and compact, and I performed ably, even to the point of gracefully handling exceptions in the unlikely event that such would occur.  In the early days, life was good.  I fetched customers on development machines from unit tests and from application code, and I starred for a time on the staging server.
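If it helps to picture me, here’s a minimal sketch of roughly what I might have looked like.  The interface, class, and method names come from my story; the Customer type, the query, and the connection handling are assumptions added for illustration, written against the standard MySQL Connector/NET classes.

```csharp
using System;
using MySql.Data.MySqlClient;

// The Customer type and connection string are hypothetical details.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerDao
{
    Customer GetCustomerById(int id);
}

public class CustomerDaoMySqlImpl : ICustomerDao
{
    private readonly string _connectionString;

    public CustomerDaoMySqlImpl(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Small and focused: turn an integer into a customer via MySQL.
    public Customer GetCustomerById(int id)
    {
        try
        {
            using (var connection = new MySqlConnection(_connectionString))
            using (var command = new MySqlCommand(
                "SELECT Id, Name FROM Customers WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", id);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    if (!reader.Read())
                        return null;

                    return new Customer
                    {
                        Id = Convert.ToInt32(reader["Id"]),
                        Name = reader["Name"].ToString()
                    };
                }
            }
        }
        catch (MySqlException)
        {
            // The "graceful handling" I was so proud of.
            return null;
        }
    }
}
```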

But then my life was cut tragically short — I was ‘killed.’  The application architects proclaimed that, from this day forward, SQL Server was the database of choice for the team.  Of course, neither my parent class nor any of the methods in it were actually removed from the codebase.  We were left hanging around, “just in case,” but still, we were dead.  CustomerDaoMySqlImpl was instantiated only in the unit test suite and never in the application source code.  We would never shine in staging again, let alone production.  My days of gamely turning integers into customers with the help of a MySQL driver were over.

Read More


Writing Tests Doesn’t Have to Be Extra Work

Editorial Note: I originally wrote this post for the Infragistics blog.  Check out the original here, at their site.  If you like this post, please give them some love over there and go check out the original and some of the other posts.

Writing automated tests is sort of like the kale of the software development community.  With the exception of a few outlying “get off my lawn” types, there’s near-universal agreement that it’s “the right thing.”  And that leaves a few different camps of people.

The equivalents of fast-food-eating carnivores say, “yeah, that’s something that we ought to do, but it’s not for me.”  The equivalents of health-conscious folks plagued by cravings say, “we do our best to write tests, but sometimes we just get too busy and we can’t.”  And then, finally, the equivalent of the health nut says, “I write tests all the time and can’t imagine life any other way.”  A lot of people in the first two camps probably don’t believe this last statement.  I can understand that, myself, because it’s hard to imagine passing up fried chicken for a kale salad in a universe where calories and cholesterol don’t count.

[Image: Kale eater]

And yet, I am that ‘health nut’ when it comes to automated testing.  I practice test driven development (TDD) and find that it saves me time and makes me more productive on the projects I work on.
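To give a sense of what that looks like in practice, here’s a hedged sketch of the kind of test I mean, written xUnit-style with made-up names rather than taken from any particular project.  In TDD, a test like this gets written first, fails, and then drives the production code into existence.

```csharp
using Xunit;

// Hypothetical class under test, shown alongside the test that drove it.
public class InvoiceCalculator
{
    public decimal Total(decimal subtotal, decimal taxRate)
    {
        return subtotal + subtotal * taxRate;
    }
}

public class InvoiceCalculatorTests
{
    // Written before Total() existed; it fails until the behavior is right,
    // then sticks around as a cheap regression check forever after.
    [Fact]
    public void Total_AddsTaxToSubtotal()
    {
        var calculator = new InvoiceCalculator();

        var total = calculator.Total(subtotal: 100m, taxRate: 0.05m);

        Assert.Equal(105m, total);
    }
}
```

Writing that took less time than describing it, which is rather the point.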

It is because of that tendency that I’d like to address a common sentiment that I hear in the industry.  I suspect you’ve heard variants of these statements before.

  • “We started out writing some tests, but we got behind schedule and had to hurry, so we stopped.”
  • “TDD doesn’t make sense in the early phase of this project because we’re just prototyping.”
  • “We didn’t want to write any tests because there’s a chance that this code might get thrown out and we’d just be writing twice as much code.”

The common thread here is the idea that writing the automated tests is, when the rubber meets the road, a bonus.  In the world of software development, the core activity is writing the executable source code and deploying it, and anything else is, strictly speaking, expendable.  You don’t truly need to write documentation, generate automated tests, go through QA, update the project milestones in a Gantt chart, etc.  All of these things are kale in the world of food — you just need to eat any kind of food, but you’ll eat kale if you’re in the mood and have time to go to the grocery store, and… etc.

Read More


The Most Important Code Metrics You’ve Never Heard Of

Editorial Note: I originally wrote this post for the NDepend blog.  Head on over and check out the original.  If software architecture interests you or you aspire to that title, there’s a pretty focused set of topics that will interest you.

Oh, how I hope you don’t measure developer productivity by lines of code.  As Bill Gates once ably put it, “measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.”  No doubt you have other, better-reasoned metrics that you capture as visible progress and quality barometers.  Automated test coverage is popular (though be careful with that one).  Defect counts and trends in defect reduction are other common choices.  And of course, in our modern, agile world, sprint velocity is ubiquitous.

[Image: Fighter jet]

But today, I’d like to venture off the beaten path a bit and take you through some metrics that might be unfamiliar to you, particularly if you’re no longer technical (or weren’t ever).  But don’t leave if that describes you — I’ll help you understand the significance of these metrics, even if you won’t necessarily understand all of the nitty-gritty details.

Perhaps the most significant factor here is that the metrics I’ll go through can be tied, relatively easily, to stakeholder value in projects.  In other words, I won’t just tell you the significance of the metrics in terms of what they say about the code.  I’ll also describe what they mean for people invested in the project’s outcome.

Read More


Be Careful with Software Metaphors

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original post here, at their site.  Head on over and check out that post and others as well.

Over the years, there have been any number of popular metaphors that help people radically misunderstand the realities of software development.  Probably the most famous and persistent one is the idea that making software is similar to building a skyscraper (or to building construction in general).

This led us, as an industry, to approach software by starting with a knowledge worker “architect” who would draw grand schematics to plot every last detail of the software construction.  This was done so that the manual laborers (junior developers) tasked with actual construction could just do repetitive tasks by rote, deferring to a foreman (team lead) should the need for serious thinking arise.  It was important to lay a good foundation with database and framework selection, because once you started there could be no turning back.  Ever.  Should even minor plan changes arise during the course of the project, that would mean a change request, delaying delivery by months.

Software is just like construction, provided you’re terrible at building software.

This metaphor is so prevalent that it transcended conscious thought and crept its way into our subconscious, as evidenced by the “architect” title.  Given the prevalence of agile (or at least iterative) software development, I think you’d be hard-pressed to find people who still think building construction is a great model for building software.  I don’t think you see a lot of thinly sliced buildings, starting with an operational kitchen only and building out from there.

But there are other, more subtle, parallels that pervade the industry and lead to misunderstandings between “the business,” managers, and software developers.

[Image: TimeLearning]

Read More