DaedTech

Stories about Software

By

The Value of Failure

Over the course of time leading people and teams, I’ve learned various lessons. I’ve learned that leading by example is more powerful than leading by other attempts at motivation. I’ve learned that trust is important and that deferring to the expertise of others goes a lot further than pretending that you’re some kind of all-knowing guru. I’ve learned that listening to people and valuing their contributions is vital to keeping morale up, which, in turn, is vital to success. But probably the most important thing that I’ve learned is that you have to let people fail.

My reasoning here isn’t the standard “you learn a lot by failing” notion that you probably hear a lot. In fact, I’m not really sure that I buy this. I think you tend to learn better by doing things correctly and having them “click” than by learning what not to do. After all, there is an infinite number of ways to screw something up, whereas precious few paths lead to success. The real benefit of failure is that you often discover that your misguided attempt to solve one problem solves another problem or that your digression into a blind alley exposes you to new things you wouldn’t otherwise have seen.

If you run a team and penalize failure, the team will optimize for caution. They’ll learn to double and triple check their work, not because the situation calls for it but because you, as a leader, make them paranoid. If you’re performing a high risk deployment of some kind, then double and triple checking is certainly necessary, but in most situations, this level of paranoia is counter-productive in the same way it is to indulge an OCD tendency to check three times to see if you locked your front door. You don’t want your team paralyzed this way.

A paranoid team is a team with low morale and often a stifled sense of enjoying what it does. Programming ceases to be an opportunity to explore ideas and solve brain teasers and becomes a high-pressure gauntlet instead. Productivity decreases predictably because of second-guessing and pointless double checking of work, but it’s also adversely affected by the lack of cross-pollination of ideas resulting from the aforementioned blind alleys and misses. Developers in a high pressure shop don’t tend to be the ones happily solving problems in the shower, stumbling across interesting new techniques and having unexpected eureka moments. And those types of things are invaluable in a profession underpinned by creativity.

So let your team fail. Let them flail at things and miss and figure them out. Let them check in some bad code and live with the consequences during a sprint. Heck, let it go to production for a while, as long as it’s just technical debt and not a detriment to the customer. Set up walled gardens in which they can fail and be relatively shielded from adverse consequences but are forced to live with their decisions and be the ones to correct them. It’s easy to harp on about the evils of code duplication, but learning how enormously tedious it is to track down a bug pasted into 45 different places in your code base teaches, through experience, that code reuse reduces pain. Out of the blind alley of writing WET code, developers discover the value of DRY.

The walled garden aspect is important. If you just let them do anything at all, that’s called chaos, and you’re setting them up to fail. You have to provide some ground rules that stave off disaster and then within those boundaries you have to discipline yourself to keep your hands off the petri dish in order to see what grows. It may involve some short term ickiness and it might be difficult to do, but the rewards in the end are worth it — a happy, productive, and self-sufficient team.


Kill Tech Patents with Fire And Do It Now

I’ve actually had a few spare hours lately to get ahead on blogging, so I was just planning to push a post for tomorrow, read a little and go to sleep. But then I saw an article that made me get a fresh cup of water, turn my office lamp on, and start writing this post that I’m going to push out instead. There probably won’t be any editing or illustration by the time you read this, and it might be a little rant-ish, so be forewarned.

Tonight, I read through this article on Ars Technica with the headline “Patent War Goes Nuclear.” I think the worst part about reading this for me was that my reaction wasn’t outrage, worry, disgust or really much of anything except, “yep, that makes sense.” But I’ll get back to my reaction in a bit. Let me digress here for a moment to talk about irony.

Irony is a subject about which there is so much debate that the definition has been fractured and categorized into more buckets of meaning than I can even count off the top of my head. There is literary irony, dramatic irony, verbal irony and probably more. There are various categories of era-related irony, such as Classical (Greek) irony, Romantic irony, and, most recently, whatever hipsters are and whatever they do. With all of these different kinds of ironies, the only thing that the world can seem to agree on is that things in the Alanis Morissette song about “ray-e-ay-ain on your wedding day” are not actually ironic.

The problem for poor Alanis, now the object of absurd degrees of international nitpicking derision, is that there is no ultimate reversal of expectation in all of the various ‘ironic’ things that happen in her song. Things are generally considered to be ironic when there is a gap between stated expectations or purpose and outcome. When it rains on your wedding day, that just sucks — it’s not ironic. It rains a good number of days of the year, so no reasonable person would expect that it couldn’t rain on a given day. What would most likely be considered ironic is if you opted to have your wedding inside to avoid the possibility of getting wet, and a large supply line pipe burst in the floor above you during the wedding, drenching everyone in attendance.

Another pretty clear-cut example of irony is the US Patent System as it exists today when compared with common perception as to the original and ongoing purpose of such an institution. There’s a rather fascinating and somewhat compelling argument that claims the concept of intellectual property (and specifically patents) was instrumental in creating the Industrial Revolution. In other words, there was historically little motivation for serf and merchant classes to innovate and optimize their work since the upper classes with the means of production would simply have stolen the ideas and leveraged better economies of scale and resources to reap the benefits for themselves. But along came patents and the “democratization of invention” to put a stop to all that and to enable the Horatio Algers (or perhaps Thomas Edisons) of the world to have a good idea, march on down to the patent office, and make sure that they would be treated fairly when it came to reaping the material benefits of their own ideas.

On the other side of the coin, I’ve read arguments that offer refutations of this working hypothesis, and I’m not endorsing one side or the other, because it really doesn’t matter for my purposes here. Whether the “democratization of invention” was truly the catalyst for our modern technological age or not, the perception remains that the patent system exists to ensure that the little guy is protected and that barriers to entry are removed to create truly free markets that reward innovation. If you have the next great idea, you go find a lawyer to help you draft a patent and that’s how you make sure you’re protected from unfair treatment at the hands of evil corporate profiteers.

So where’s the irony? I’ll get to that in a minute, but first another brief digression. I want to talk now about the concept of a “defensive patent,” at least as I’ve experienced the concept. Many moons ago, I maintained a database application to manage intellectual property for a company that made manufacturing equipment. At this company, there was a fairly standard approach to patenting, which was “mention everything you’re working on to the Intellectual Property team who will see if perhaps there’s anything we can claim patents on — and we mean everything.” The next logical question was “what if it’s already obvious or unrelated to what we’re trying to do,” to which the response was “what part of everything wasn’t clear?” The reason for this was that the goal wasn’t to patent things so that the company could make sure that nobody took its ideas but rather to build up a war-chest of stockpiled patents. A patent on something not intended for use was perfectly fine because you could trade with a competitor that was trying to use a patent to extort you. Perhaps you could buy and sell these things like securities packages in a portfolio. And, to be perfectly honest, my company was pretty reputable and honest. They were just trying to avoid getting burned — don’t hate the player, hate the game. “Defensive” patents had nothing to do with protecting innovation and everything to do with leverage for an endless series of lawyer-enriching, negative-sum games played out in court.

As I said, that was some years ago, and in the time that’s elapsed since, this paradigm seems to have progressed to the logical conclusion that I pictured back then (or perhaps I just wasn’t aware of it as much back then). Patents started as legal protection, evolved into commodities, and have now reached the point of being corporate currency, devoid of any intrinsic meaning or value. In the article that I cited, a major tech company (Nortel) went bankrupt and its competitors swooped in like buzzards to loot its corpse. For those of you who played the Diablo series of games, this reminds me of when a dead player would “pop” and everyone else in the game would scramble to pillage his equipment. Or perhaps a better metaphor would be that a nuclear power had fallen into civil war and revolution and neighboring countries quietly stepped in to spirit away its massive arms stockpile, each trying to grab up as much as possible for fear that their neighbors were doing the same and getting ready to use it against them.

Microsoft, Apple, and some other players stepped in to form a shell company and bid against Google for this cache of patents, and Google wound up losing all the marbles to this cartel. Now, fast forward a few years and the cartel has begun shelling Google. How does all of this work exactly? It works because of the evolution of the patent that I mentioned. The patents are protecting nothing because that isn’t what they do, and they have no value as commodities because they’re packaged up into patent “mutual funds” (arsenals) that only matter in large quantities. You don’t get patents in our world to protect something you did, and you don’t get them because they have some kind of natural value the way an ear of corn does — you get them for the sole purpose of amassing them as a means to an end. And, as with any currency, the entities that have the easiest time acquiring more are the ones that already have the most.

So, there is the fundamental irony of the patent system. It’s a system that we conceive of existing to protect the quirky genius in his or her workshop at home from some big, soulless corporation, but it’s a system that in practice makes it easier for the big, soulless corporation to smash the quirky geniuses like bugs or, at best, buy them out and use them as cannon fodder against competitors. The irony lies in the fact that a system we take to be protecting our most valuable asset — our ability to innovate — is actually killing it. The patent system erects massive barriers to entry, rewards unethical behavior, creates a huge drain on the economy and makes bureaucratic process and influence peddling table stakes for success at delivering technological products and services. This is why I had little reaction to a shell company suing Google in a looming patent Armageddon — it just seems like the inevitable outcome of this broken system.

I doubt you’ll find many people that would dispute the notion that our intellectual property system needs serious overhaul. If you google “patent troll” and flip over to news, you’ll find plenty of articles and op-eds in the last month or even the last week. The fact that abuse of the system is so rampant that there’s an endless news cycle about it tells you that there are serious problems. But I think many would prefer to solve these problems by modifying the system we have now until it works. I’m not one of them. I think we’d be better served to completely toss out the system we have now and start over, at least for tech patents (I can see a reasonable case for patents in the field of medicine, for instance). I don’t think it can be salvaged, and I think that I’d answer the question “are you crazy — wouldn’t that result in chaos and anarchy?” with the simple opinion, “it can’t possibly be worse than what we have now.”

In the end, I may be proved wrong, particularly since I doubt torching the tech IP system is what’s going to happen. I hope that I am and I hope that efforts to shut down the trolls and eliminate situations where only IP lawyers win are successful, but until I see it, I’ll remain very skeptical.

/end rant

Back to regularly scheduled techie posts next week. 🙂


Professional Code

About a year ago, I read this post in my feed reader and created a draft with a link to it and a little note to myself that said, “interesting subject.” Over the past weekend, I was going through old drafts that I’d never gotten around to finishing and looking to remedy the situation when I came across this one and decided to address it.

To be perfectly honest, I had no idea what I was going to write about a year ago. I can’t really even speculate. But I can talk a bit about what I think of now as professional code. Like Ayende and Trystan, I don’t think it’s a matter of following certain specific and abiding principles like SOLID as much as it is something else. They talk about professional code in terms of how quickly the code can be understood by maintainers since a professional should be able to understand what’s going on with the code and respond to the need to change. I like this assessment but generally categorize professionalism in code slightly differently. I think of it as the degree to which things that are rational for users to want or expect can be done easily.

To illustrate, I’ll start with a counter-example, lifted from my past and obfuscated a bit. A handful of people had written an application that centered around modifications to an XML file. The XML file and the business rules governing its contents were fairly complex, so it wasn’t a trivial application. The authors of this app had opted to prevent concurrent edits and race conditions by implementing an abstraction wherein the file was represented by a singleton class. Predictably, the design heavily depended on XmlFile.Instance.CallSomeMethod() style invocations.
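To make the shape of that design concrete, here’s a rough Python sketch (the names and details are my invention; the real application wasn’t Python):

```python
# Hypothetical sketch of the design described above: the XML file is
# modeled as a process-wide singleton, so every caller shares one instance.
class XmlFile:
    _instance = None

    @classmethod
    def instance(cls):
        # Lazily create the single shared instance on first access.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.contents = "<root/>"

    def call_some_method(self):
        return self.contents

# Every call site in the application ends up looking like this:
result = XmlFile.instance().call_some_method()
```

Notice that the singleton accessor is the only way to get at the file. That assumption is baked into every call site, which is exactly what made the request below so painful.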

One day, someone in the company expressed that it’d be a nice value-add to allow this application to show differences between incarnations of this XML file — a diff of changes, if you will. When this idea was presented to the lead/architect of this code base, he scoffed and actually became sort of angry. Evidently, this was a crazy request. Why would anyone ever want to do that? Inconceivable! And naturally, this was completely unfeasible without a rewrite of the application, and good luck getting that through.

If you’re looking for a nice ending to this story, you’re barking up the wrong tree. The person asking for this was humbled when it should have been the person with the inflexible design that was humbled. As a neutral observer, I was amazed at this exchange — but then again, I knew what the code looked like. The requester went away feeling dumb because the scoffer had a lot of organizational clout, so it was assumed that scoffing was appropriate. But I knew better.

What had really happened was that a questionable design decision (representing an XML file as a singleton instance) became calcified as a cornerstone assumption of the application. Then along came a user with a perfectly reasonable request, and the request was rebuffed because the system, as designed, simply couldn’t handle it. I think of this as equivalent to you calling up the contractor that built your house and asking him if he’d be able to paint your living room, and having him respond, “not the way I built your house.”

And that, to me, is unprofessional code. And, I don’t mean it in the sense that you often hear it when people are talking about childish or inappropriate behavior — I mean that it actually seems like amateur hour. The more frequently you tell your users that things that seem easy are actually really difficult, the less professional your code is going to seem. The reasoning is the same as with the example of a contractor that somehow built your house so that the walls couldn’t be painted. It represents a failure to understand and anticipate the way the systems you design tend to evolve and change in the wild, which is indicative of a lack of relevant professional experience. Would a seasoned, professional contractor fail to realize that most people want to paint the rooms in their houses sooner or later? Would a seasoned, professional software developer fail to realize that someone might want multiple instances of a file type?

Don’t get me wrong. I’m not saying that you’re a hack if there’s something that a user dreams up and thinks will be easy that you can’t do. There are certainly cases where something that seems easy won’t be easy, and it doesn’t mean that your design is bad or unprofessional. I’m talking about what I perceive to be a general, overarching trend. If changes to the software seem like they should be easy, then they probably should be easy. If you’ve added 20 different customer types to your system, it’d be weird if adding a 21st was extremely hard. If you currently support storing data in a database or to a file, it’d be weird if there was a particular record type that you couldn’t put in a file. If you have some concept of security and roles in your system, it’d be weird if adding a user required a re-deployment of your software.

According to the Clean Code videos by Bob Martin, a defining characteristic of good architecture is that it allows decisions to be deferred as long as possible. If the architecture is well designed, for instance, you should be able to write a lot of the code without knowing if it’s going to be a web app or desktop app or without knowing whether you’d use MySQL or PostgreSQL or MongoDB. I’d carry this a bit further and say that being able to anticipate what users might want and what they might change their minds about and then designing accordingly is the calling card of a writer of professional code.
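As a rough illustration of deferring that decision (the repository interface here is my own hypothetical, not something from the videos), the application code can depend on an abstraction while the storage choice stays open:

```python
# Hypothetical sketch: business logic depends on an abstract "repository,"
# so the choice of MySQL vs. PostgreSQL vs. MongoDB can be deferred.
from abc import ABC, abstractmethod


class CustomerRepository(ABC):
    @abstractmethod
    def save(self, customer_id, name): ...

    @abstractmethod
    def find(self, customer_id): ...


class InMemoryCustomerRepository(CustomerRepository):
    """A stand-in used until the real storage decision is made."""

    def __init__(self):
        self._rows = {}

    def save(self, customer_id, name):
        self._rows[customer_id] = name

    def find(self, customer_id):
        return self._rows.get(customer_id)


def register_customer(repo: CustomerRepository, customer_id, name):
    # Business logic knows nothing about where the data actually lives.
    repo.save(customer_id, name)


repo = InMemoryCustomerRepository()
register_customer(repo, 42, "Ada")
```

Swapping in a real database later means writing one new repository class, not touching the business logic — the decision was deferred until it had to be made.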


Intro to Unit Testing 8: Test Suite Management and Build Integration

It’s been over a month now since my last post in this series, and for that I sort of apologize. I think I’ve been channelling all of my instructive energy into my now-finished Pluralsight course, leaving the blog largely for opinions, screeds, and a random hiring announcement. So, let’s get back on track and wrap this thing up. I have this post and another one slated and then we can call it a day.

So far, I’ve talked quite a lot about how and when (and when not) to write unit tests. I’ve offered up some techniques for helping you isolate the classes that you want to test, including the use of test doubles. And finally, I offered some advice on how to get people to leave you alone and let you write tests. So now I’d like to turn and offer some advice beyond just writing the things. You need to live with them, manage them and leverage them over the course of time.

Managing the Suite

You’ve built them. So, now what? That question probably won’t occur to you right away. For the first few or even few dozen classes you test, you’ll alternate between some exasperation at spending extra time doing something new and satisfaction at, well, doing something new. But then, at some point, you’ll be sitting around and notice that your test suite has like 400 tests and think, “wow, that’s a lot of code… do I really want all this?”

That feeling will hit you even harder when you go to change something under a tight deadline and your real quick change makes a test go red. You’re pretty sure the test is broken because it was testing the old way of doing things, so you really just want to comment out the test and you wonder why it’s such a pain to change the code. Why do you have to waste so much time to change one line of code?

The answer to these questions lies in practice but also effective test suite management. If you let the unit test suite become a boat anchor, it will drag you down. Your frustration will be real and reasonable, rather than just a temporary product of you being in a hurry and unfamiliar with working in a code base under test. You need to take care to prevent this from happening, and I’m going to tell you how in this section.

Name Your Tests Clearly and Be Wordy

When you’re writing a unit test, you’re looking at code. But when you’re running your test suite, you aren’t most of the time, and when you’re trying to understand why a run or a build failed, you’re never looking at code. When the test suite is failing, you don’t want to waste time figuring out why. And having to open the IDE, navigate to the test, read the code and figure out the problem is a waste of time.

Don’t give your test methods names like “Test24” or “CustomerTest” or something. Instead, give them names like “Customer_IsValid_Returns_False_When_Customer_SocialSecurityNumber_Is_Empty”. That method name may seem ridiculous, especially if you’re used to giving methods short names, but trust me, you’ll be thankful for it. When your build is failing, which of these method names would you rather see an X next to? Would you rather be saying “looks like test 24 is failing,” or would you rather be saying, “oh, I wonder why someone made it so that an empty SSN is now considered valid?” If you say the first one, you’re lying.
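To make this concrete, here’s what the two naming styles look like in Python’s unittest (the Customer class is a hypothetical stand-in for whatever you’re testing):

```python
import unittest


# Hypothetical class under test: a customer with trivial SSN validation.
class Customer:
    def __init__(self, social_security_number):
        self.social_security_number = social_security_number

    def is_valid(self):
        return bool(self.social_security_number)


class CustomerTest(unittest.TestCase):
    # Opaque name: tells you nothing when it shows up red in a build report.
    def test24(self):
        self.assertFalse(Customer("").is_valid())

    # Wordy name: the failure report alone tells you which rule broke,
    # without opening the IDE.
    def test_is_valid_returns_false_when_social_security_number_is_empty(self):
        self.assertFalse(Customer("").is_valid())
```

Both tests exercise identical behavior; only one of them is useful when all you can see is a list of red Xs in a build report.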

This may seem unimportant in the scheme of things, but it’s the difference between associating frustration and confusion with your test suite and viewing it as a warning system for potentially undesirable changes. The test suite needs to be communicating clearly to you what’s wrong. Descriptive test names help do that and they help you identify whether it’s your code or the test itself that needs to be changed in the face of changing requirements.

Make Your Test Suite Fast

Ruthlessly delete and cull out slow tests. I can’t say it more plainly than that. A good test suite runs in seconds, max. If yours starts to take minutes, or God forbid, hours, then it’s rotting and becoming useless to you. Think of it this way — if it takes several minutes to run the test suite, how often are you going to do it? Every time you make a change, or just when you check in? If it takes hours, will you ever run it voluntarily?

If your test suite takes a long time to run, nobody will run it. Short feedback loops are of paramount importance to developers, and we optimize for efficiency. If the unit test suite is inefficient, we’ll find other ways to get feedback. As such, it is incredibly important to ensure that your test suite always runs quickly. Treat it as if the rest of your team were waiting for any legitimate excuse not to use the test suite, and don’t let inefficiency be that excuse.

Test Code is First Class Code

A common mistake that I see among those relatively new to testing is test code that’s something of a mess. The code will be brittle, heavily duplicated, weird, and hard to read. In short, your tests and test classes will contain code that you wouldn’t be caught dead putting into production.

Don’t do that. Treat your test code as if it were any other code. Eliminate duplication. Factor common functionality out into methods. Be descriptive with naming and with the flow of the method. Keep that code clean. I get that there’s a desire when it comes to testing to make as much of a mess as possible in the “bug bash” sense of throwing chaos at the situation and proving that your code can handle it, but the chaos needs to be controlled, and you can control it by keeping your test code clean and maintainable. If the tests are clean and easy to maintain, people won’t mind going in periodically to make an adjustment. If they’re unruly, people will get annoyed and comment them out or stop running them.
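One of the simplest ways to eliminate duplication in tests is to factor shared setup into a builder method. A quick sketch (the Order class is hypothetical):

```python
import unittest


# Hypothetical class under test.
class Order:
    def __init__(self, customer_name, items):
        self.customer_name = customer_name
        self.items = items

    def total_quantity(self):
        return sum(qty for _, qty in self.items)


class OrderTest(unittest.TestCase):
    # One factored-out builder instead of the same several lines of setup
    # copy-pasted into every test method. When Order's constructor changes,
    # you fix one place, not twenty.
    def build_order(self, items=None):
        return Order("Test Customer", items if items is not None else [("widget", 2)])

    def test_total_quantity_sums_item_quantities(self):
        order = self.build_order([("widget", 2), ("gadget", 3)])
        self.assertEqual(order.total_quantity(), 5)

    def test_total_quantity_is_zero_for_no_items(self):
        self.assertEqual(self.build_order([]).total_quantity(), 0)
```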

Have a Single Assertion per Test

This is a subtle one, but it also goes toward maintainability. If you start writing tests that have 20 asserts in them, you may feel good that you’re exercising a whole section of the code, but really you’re making things hard for yourself later. If all 20 asserts pass, then all of them will be executed. But as soon as one fails, none of the asserts after it get executed at all. This means that in test methods with lots of asserts, it’s not always clear where they’re failing, which means it’s not always clear what’s going wrong.

In order for your test suite to be an asset, it has to be a clear indicator of what’s going wrong. Which would you find more useful in your car: a series of many different lights with helpful diagrams that lit up to indicate a problem, or one unlabeled red light that came on whenever anything at all was wrong? If you had that latter light and it could mean anything from your gas being low to you being out of wiper fluid to imminent destruction of your transmission, I bet you’d just start ignoring it after a while.
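Here’s a sketch of the difference, using a hypothetical parsing function and Python’s unittest:

```python
import unittest


# Hypothetical function under test.
def parse_fraction(text):
    numerator, denominator = text.split("/")
    return int(numerator), int(denominator)


class ParseFractionManyAsserts(unittest.TestCase):
    def test_parse(self):
        # If the first assert fails, the rest never run, and you learn
        # nothing about the other behaviors -- the one red light.
        n, d = parse_fraction("3/4")
        self.assertEqual(n, 3)
        self.assertEqual(d, 4)
        n, d = parse_fraction("10/2")
        self.assertEqual(n, 10)
        self.assertEqual(d, 2)


class ParseFractionFocused(unittest.TestCase):
    # Each test fails independently, so a red X pinpoints the broken
    # behavior -- the labeled dashboard lights.
    def test_numerator_is_parsed(self):
        self.assertEqual(parse_fraction("3/4")[0], 3)

    def test_denominator_is_parsed(self):
        self.assertEqual(parse_fraction("3/4")[1], 4)
```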

Don’t Share State Between Your Tests

There is no more surefire way to drive yourself insane at some future date than by storing some kind of application state among unit tests being executed. What I mean is, if you have some test A that sets a global counter variable to 1, and then you have another test B that depends on the global counter being set to 1 in order for it to succeed, you are in for a world of hurt.

The problem is that there is no guarantee that the unit test runner will execute the tests in any particular order. What’s likely to happen is that your tests get executed in a particular order whenever you run them on your machine, so everything goes fine. But when the build machine runs them they fail. Weird. So you check them on your friend Bob’s machine, and they pass there. But on Alice’s machine, they fail. If you didn’t already know why this was happening because I just told you, can you imagine how much of your hair you’d pull out? You’d probably be checking the IDE version on those machines, compiler information, OS settings, and God only knows what else. It’d be a wild goose chase.

And imagine if it worked on everyone’s machine initially and then six months later started failing occasionally on the build machine. Machine isn’t the only failing dimension — there’s also time. So please, whatever you do, do not have your unit tests depend on the execution of a previous test. This practice, more than any other, is likely to lead to a rage-quitting of unit testing as a practice where you simply take all of them out of the build.
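The counter scenario plays out concretely like this (hypothetical tests, shown with Python’s unittest):

```python
import unittest

# Shared module-level state: the trap described above.
counter = 0


class SharedStateTests(unittest.TestCase):
    def test_a_increments_counter(self):
        global counter
        counter = 1
        self.assertEqual(counter, 1)

    def test_b_depends_on_counter(self):
        # Passes only if test_a already ran -- but runners are free to
        # execute tests in any order, so this fails intermittently
        # depending on machine, runner version, and time.
        self.assertEqual(counter, 1)


class IndependentTests(unittest.TestCase):
    def setUp(self):
        # The fix: each test builds its own fresh state.
        self.counter = 1

    def test_b_uses_its_own_counter(self):
        self.assertEqual(self.counter, 1)
```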

Encourage Others and Keep Them Invested

This sounds like a strange one to round out the section, but it’s important. If you’re the only one fighting the good fight with unit tests, it becomes daunting and exasperating. Everyone else’s reaction to failing tests is annoyance and they’re waiting for excuses just to stop altogether. You wind up feeling that you’re in an adversarial relationship with the team (I speak from experience here). But if you get others to buy in, you’re not shouldering the burden alone and you have help keeping the suite healthy and helpful.

Build Integration

When you first start out unit testing, the tests will be sort of disorganized and haphazard. You’ll write a few to get the hang of it and then maybe discard them. After a bit of that, you’ll start checking them into your solution (unless you’re an incorrigible weirdo or a liar). You do that, and the suite grows and, ideally, everyone is running it locally to keep things clean and be notified of potential breaking changes.

But you have to take it beyond that at some point if you want to realize the full value of the unit tests. They can’t just be a thing everyone remembers to do locally on pain of nagging emails or because someone will buy the team donuts or some other peer-pressure-oriented demerit system. Failing unit tests have to have real (read: automated) consequences. And the best way to do this is to make it so that failing unit tests mean a failing build.

If you’re in a shop that’s not as formal, this may be difficult at first. One handicap may be that you’re reading this and saying “what do you mean by ‘the build?'” If what you do is write code and take some kind of executable out of your project’s output directory on your machine and push it to a server or to your users, you’ve got some work to do before you think about integrating unit tests.

You need a build. A build is an automated process by which your source code is turned into a production-ready, deployable package. And it’s automated in the sense that it doesn’t involve you hitting Ctrl-Shift-B or Ctrl-F6 or whatever you do manually in your IDE to build. The Build, with a capital B, is a process that checks your code out of source control, builds it, runs checks and whatever else is necessary, perhaps increments the versioning of the executables, etc., and then spits out the final product that will be pushed to a server or burned onto a DVD or whatever.

If you want to read more about build tools, you can google around about TeamCity, CruiseControl, TFS, FinalBuilder, Jenkins etc. And you don’t have to use a product like that — you can create your own using shell scripts or code if you choose.

Because of all the different options when it comes to programming languages, unit test technologies and build tools, I’m not going to offer a tutorial on how to integrate unit tests into your build. To be comprehensive, I’d need to give dozens of such tutorials. But what I will say is that your integration is going to take the same basic format no matter what tools you’re using. The build is a series of steps that passes if everything goes smoothly and the deliverables are ultimately generated. If a step in the build fails, then the build itself fails. What you need to do is add a step that involves running the unit tests. With this in place, you’re creating a situation where any failing unit test means that the entire build fails.

Conceptually, this is pretty straightforward. Unit test runners can be run in command line fashion and they’ll generate a return value of some kind. So the build tool needs to examine the test runner’s output for an error code. If it finds one, it puts the brakes on the whole operation.
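As a rough sketch of that mechanism (all names hypothetical, with Python’s unittest standing in for whatever test runner you actually use), a build step might shell out to the runner and inspect its exit code:

```python
"""Sketch of a build step that gates on unit tests: run the test runner
as a subprocess and abort the build on a nonzero exit code."""
import os
import subprocess
import sys
import tempfile
import textwrap

# A throwaway passing test module, so this sketch is self-contained.
test_source = textwrap.dedent("""
    import unittest

    class CustomerTest(unittest.TestCase):
        def test_is_valid_returns_false_when_ssn_is_empty(self):
            self.assertFalse(bool(""))  # stand-in for real validation logic

    if __name__ == "__main__":
        unittest.main()
""")

with tempfile.TemporaryDirectory() as build_dir:
    test_file = os.path.join(build_dir, "test_customer.py")
    with open(test_file, "w") as f:
        f.write(test_source)

    # The build tool's "run tests" step: exit code 0 means all green,
    # anything else puts the brakes on the whole operation.
    result = subprocess.run([sys.executable, test_file])
    build_passed = result.returncode == 0

print("build continues" if build_passed else "build aborted")
```

Real build tools like Jenkins or TeamCity do exactly this check for you once you configure a test step; the point is only that the test runner’s exit code is what kills or passes the build.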

It may seem extreme at first to torpedo the whole build because of a failing unit test, but when you think about it, what else should possibly happen? Why would you want a process that allowed you to ship code knowing that it was defective in a way that it didn’t used to be? That’s amateur hour. And what’s more, if your team starts understanding that failed unit tests mean a failed build, they’ll be sure to run the tests before check-in so that they don’t fail. It will become a natural part of your process, and the quality of your software will be dramatically improved for it.


Static Analysis, NDepend, and a Pluralsight Course

I absolutely love statistics. Not statistics as in the school subject — I don’t particularly love that branch of mathematics with its binomial distributions and standard deviations and whatnot. I once remarked to a friend in college that statistics-the-subject seemed like the ‘science’ of taking a guess and then rigorously figuring out how wrong you were. Flippant as that assessment may have been, statistics-the-subject has hardly the elegant smoothness of calculus or the relentlessly logical pursuit of discrete math. Not that it isn’t interesting at all — to a math geek like me, it’s all good — but it just isn’t really tops on my list.

But what is fascinating to me is tabulating outcomes and gamification. I love watching various sporting events on television and keeping track of odd things. When watching a basketball game, I always track the size of the “run” the teams are on before the announcers think to say something like “Chicago is on a 15-4 run over the last 6:33 this quarter.” I could have told you that. In football, if the quarterback is approaching a first-half passing record, I’m updating the tally mentally after every play. Heck, I regularly watch poker on television not because of the scintillating personalities at the tables but because I just like seeing what cards come out, what hands win, and whether the game is statistically normal or aberrant. This extends all the way back to my childhood, when things like my standardized test scores and my class rank improved dramatically once I learned that someone was keeping score and ranking them.

I’m not sure what it is that drives this personality quirk of mine, but you can imagine what happened some years back when I discovered static analysis and then NDepend. I was hooked. Before I understood what the Henderson Sellers Lack of Cohesion in Methods score was, I knew that I wanted mine to be lower than other people’s. For those of you not familiar, static analysis, (over)simplified, is the examination of your source code without actually executing it, making educated guesses about how the code will behave at runtime and beyond (i.e., during maintenance). NDepend is a tool that performs static analysis at a level and with an amount of detail that makes it, in my opinion, the best game in town.
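To make “examining code without executing it” concrete, here’s a toy sketch — emphatically not what NDepend does internally, just an illustration of the idea. It parses source text into a syntax tree and estimates cyclomatic complexity by counting branch points, and at no point does the analyzed code actually run:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Estimate cyclomatic complexity by counting branch points
    in the syntax tree. The code is parsed, never executed."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try,
                    ast.BoolOp, ast.ExceptHandler)
    tree = ast.parse(source)
    # Start at 1 (one straight-line path), add 1 per branch point.
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(snippet))  # if + for + if -> complexity of 4
```

Real static analysis tools compute dozens of metrics like this over an entire code base, which is precisely what makes them such fertile ground for the kind of score-keeping I’m describing.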

After overcoming an initial pointless gamification impulse, I learned to harness it instead. I read up on every metric under the sun and started to understand what high and low scores correlated with in code bases. In other words, I studied properties of good code bases and bad code bases, as described by these metrics, and started to rely on my own extreme gamification tendencies in order to drive my work toward better code. It wasn’t just a matter of getting in the habit of limiting my methods to the absolute minimum in size or really thinking through the coupling in my code base. I started to learn when optimizing to improve one metric led to a decline in another — I learned lessons about design tradeoffs.

It was this behavior of seeking to prove myself via objective metrics that got me started, but it was the ability to ask and answer lots of questions about my code base that kept me coming back. I think that this is the real difference maker when it comes to NDepend, at least for me. I can ask questions, and then I can visualize, chart, and track the answers in just about every conceivable way. I have a “Moneyball” approach to code, and NDepend is like my version of the Jonah Hill character in that movie.

Because of my high opinion of this tool and its importance in the lives of developers, I made a Pluralsight course about it. If you have a subscription and any interest in this subject at all, I invite you to check it out. If you’re not familiar with the subject but your interest in programming breaks toward architecture — if you’re an architect or an aspiring architect — it’s worth a look as well. Static analysis will give you a huge leg up on your competition for architect roles, and my course will provide an introduction for getting started. If you don’t have a Pluralsight subscription, I highly recommend trying one out and/or getting one. This isn’t just a plug for me to sell a course I’ve made, either. I was a Pluralsight subscriber and fan before I ever became an author.

If you get a chance to check it out, I hope you enjoy.