
Let’s Make Better Code Metrics

A few months back, I wrote a post about changes to my site and work. Today, I have another announcement in that same vein: I’ve recently partnered with NDepend to help start their blog and create content for it. If you go there now, you can see the maiden post, which announces the release of the newest version of NDepend (not written by me personally, if you were wondering, though posts of mine will follow).

What is NDepend?

In the broadest terms, NDepend is a static analysis tool. More specifically and colloquially, you might think of NDepend as Jiminy Cricket, if Pinocchio were a software developer or architect. It’s extremely helpful for visualizing the dependencies and properties of your code base (e.g. complexity, coupling, etc.), which will give you a leg up on your fellow developers right out of the gate. It’s also incredibly informative, furnishing you not only with detailed, quantitative metrics about your code, but also indicating where you’re deviating from what is broadly considered to be good programming technique. And, of course, you can do a great deal of customization, from integrating this feedback into your build to tracking code quality over time to building and defining your own complex, custom rules.
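To give a flavor of that last point, custom rules in NDepend are written in CQLinq, which is essentially LINQ over a model of your code. Here is a minimal sketch of what such a rule might look like (the threshold and exact property names here are illustrative, from memory):

// <Name>Avoid overly complex methods</Name>
warnif count > 0
from m in Application.Methods
where m.CyclomaticComplexity > 20
select new { m, m.CyclomaticComplexity }

Anything the query returns shows up as a rule violation, which is what makes the rules engine so customizable.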

To put it more succinctly, in a world where developers are trying to distinguish themselves in terms of knowledge and chops, NDepend will give you such a huge advantage that it’s probably unfair to everyone that doesn’t have it.  I personally learned a ton about software architecture just from installing, using, and exploring this tool over the course of 5 years or so.  If you want to learn more about NDepend and static analysis in general, check out my Pluralsight course about it that I published in conjunction with the last major version. (If you don’t have a Pluralsight subscription but want to check it out, sign up for my mailing list using the form at the right).


Fun with ILDASM: Extension vs Static Methods

I recently had a comment on a very old blog post that led to a conversation between the commenter and me. During the conversation, I made the comment that extension methods in C# are just syntactic sugar over plain ol’ static methods. He asked me to do a post exploring this in further detail, which is what I’m doing today. However, there isn’t a ton more to explain in this regard, any more than there is to explain that (2 + 2) and 4 are representations of the same concept. So, I thought I’d “show, don’t tell” and, in doing so, introduce you to the useful practice of using a disassembler to examine .NET’s IL byte code.

I won’t go into a ton of detail about virtual machines and just-in-time compilation or anything, but I will describe just enough for the purpose of this post. When you write C# code and perform a build, the compiler turns this into what’s called “IL” (intermediate language). Java works in the same way, and its intermediate product is generally referred to as byte code. When an IL executable is executed, the .NET framework is running, and this intermediate code is compiled, on the fly, into machine code. So what you have with both .NET and Java is a two-stage compilation process: source code to intermediate code, and intermediate code to machine code.

Where things get interesting for our purposes is that the disassembled intermediate code is legible, and you can work with it, reading it or even writing it directly (which is how AOP IL-weaving tools like PostSharp work their magic). What gets even more interesting is that all of the .NET languages (C#, VB, F#, etc.) compile to the same IL code and are treated the same by the framework when compiled into machine code. This is what is meant by languages “targeting the .NET framework” — there is a compiler that resolves these languages into .NET IL. If you’ve ever wondered why it’s possible to have things like IronPython in the .NET ecosystem, this is why. Someone has written code that will parse Python source code and generate .NET IL (you’ll also see this idea referred to as “Common Language Infrastructure” or CLI).

Anyway, what better way to look at the differences, or lack thereof, between static methods and extension methods than to write them and see what the IL looks like? But, in order to do that, we need to do a little prep work first. We’re going to need easy access to a tool that can read .NET exe and dll files and produce the disassembled IL in readable text form. So, here’s what we’re going to do.

  1. In Visual Studio, go to Tools->External Tools.
  2. Click “Add” and you will be prompted to fill out the text boxes below.
  3. Fill them out as shown here (you may have to search for ILDASM.exe, but it should be in the Microsoft SDKs folder under Program Files (x86)): [Screenshot: the external tool configuration for ILDASM]
  4. Click “Apply.”  ILDASM will now appear as a menu option in the Tools menu.

Now, let’s get to work.  I’m going to create a new project that’s as simple as can be. It’s a class library with one class called “Adder.” Here’s the code:

public static class Adder
{
    public static int AddTwoNumbers(int first, int second)
    {
        return first + second;
    }
}

Let no one accuse me of code bloat! That’s it. That’s the only class/method in the solution. So, let’s run ILDASM on it and see what happens. To do that, select “ILDASM” from the Tools menu, and it will launch a window with nothing in it. Go to “File->Open” (or Ctrl-O) and it will launch you in your project’s output directory. (This is why I had you add “$(TargetDir)” in the external tools window.) Click the DLL, and you’ll be treated to a hierarchical makeup of your assembly, as shown here:

[Screenshot: ILDASM showing the hierarchical makeup of the assembly]

So, let’s see what the method looks like in IL Code (just double click it):

.method public hidebysig static int32  AddTwoNumbers(int32 first, int32 second) cil managed
{
  // Code size       9 (0x9)
  .maxstack  2
  .locals init ([0] int32 CS$1$0000)
  IL_0000:  nop
  IL_0001:  ldarg.0
  IL_0002:  ldarg.1
  IL_0003:  add
  IL_0004:  stloc.0
  IL_0005:  br.s       IL_0007
  IL_0007:  ldloc.0
  IL_0008:  ret
} // end of method Adder::AddTwoNumbers

Alright… thinking back to my days using assembly language, this looks vaguely familiar. Load the arguments into registers or something, add them, you get the gist.

So, let’s see what happens when we change the source code to use an extension method. Here’s the new code:

public static class Adder
{
    public static int AddTwoNumbers(this int first, int second)
    {
        return first + second;
    }
}

Note that the only difference is the addition of “this” before “int first.” That is what turns this into an extension method and alters the calling semantics (though you can still call extension methods the same way you would normal static methods).
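To make the calling semantics concrete, here is a quick sketch of hypothetical client code (assuming Adder’s namespace is in scope):

// Extension method syntax: reads as though AddTwoNumbers were an instance method on int.
int sum = 4.AddTwoNumbers(5);

// Plain static syntax: still perfectly legal for an extension method.
int sameSum = Adder.AddTwoNumbers(4, 5);

Both lines compile to the exact same call; only the source-level syntax differs.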

So, let’s see what the IL code looks like for that:

.method public hidebysig static int32  AddTwoNumbers(int32 first, int32 second) cil managed
{
  .custom instance void [mscorlib]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 ) 
  // Code size       9 (0x9)
  .maxstack  2
  .locals init ([0] int32 CS$1$0000)
  IL_0000:  nop
  IL_0001:  ldarg.0
  IL_0002:  ldarg.1
  IL_0003:  add
  IL_0004:  stloc.0
  IL_0005:  br.s       IL_0007
  IL_0007:  ldloc.0
  IL_0008:  ret
} // end of method Adder::AddTwoNumbers

The only difference between this and the plain static version is the presence of the line:

.custom instance void [mscorlib]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )

The “this” keyword results in the generation of this attribute, and its purpose is to allow the compiler to flag it as an extension method. (For more on this, see this old post from Scott Hanselman: How do Extension Methods work and why was a new CLR not required?). The actual substance of the method is completely identical.

So, there you have it. As far as the compiler is concerned, the difference between static and extension methods is “extension methods are static methods with an extension method attribute.” Now, I could go into my opinion on which should be used, when, and how, but it would be just that: my opinion. And your mileage and opinion may vary. The fact of the matter is that, from the compiler’s perspective, they’re the same, so when and how you use one versus the other is really just a matter of your team’s preferences, comfort levels, and ideas about readability.

How To Put Your Favorite Source Code Goodies on Nuget

A while back, I made a post encouraging people to get fed up every now and then and figure out a better way of doing something. Well, tonight I take my own advice. I am sick and tired of rifling through old projects to find code that I copy and paste into literally every non-trivial .NET solution that I create. There’s a thing for this, and it’s called Nuget. I use it all the time to consume other people’s code, libraries and utilities, but not my own. Nope, for my own, I copy and paste stuff from other projects. Not anymore. This ends now.

My mission tonight is to take a simple bit of code that I add to all my unit test projects and to make it publicly available via Nuget. Below is the code. Pretty straightforward and unremarkable. For about 5 versions of MSTest, I’ve hated the “ExpectedException” attribute for testing that something throws an exception. It’s imprecise. All it tests is that somewhere, anywhere, in the course of execution, an exception of the type in question is thrown. Could be on the first line of the method, could be on the last, could happen in the middle from something nested 8 calls deep in the call stack. Who knows? Well, I want to know and be precise, so here’s what I do instead:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class ExtendedAssert
{
    /// <summary>Check that a statement throws a specific type of exception</summary>
    /// <typeparam name="TException">Exception type inheriting from Exception</typeparam>
    /// <param name="executable">Action that should throw the exception</param>
    public static void Throws<TException>(Action executable) where TException : Exception
    {
        try
        {
            executable();
        }
        catch (Exception ex)
        {
            Assert.IsTrue(ex.GetType() == typeof(TException), String.Format("Expected exception of type {0} but got {1}", typeof(TException), ex.GetType()));
            return;
        }
        Assert.Fail(String.Format("Expected exception of type {0}, but no exception was thrown.", typeof(TException)));
    }

    /// <summary>Check that a statement throws some kind of exception</summary>
    /// <param name="executable">Action that should throw the exception</param>
    /// <param name="message">Optionally specify a message</param>
    public static void Throws(Action executable, string message = null)
    {
        try
        {
            executable();
        }
        catch
        {
            Assert.IsTrue(true);
            return;
        }
        Assert.Fail(message ?? "Expected an exception but none was thrown.");
    }

    /// <summary>Check that a statement does not throw an exception</summary>
    /// <param name="executable">Action to execute</param>
    public static void DoesNotThrow(Action executable)
    {
        try
        {
            executable();
        }
        catch (Exception ex)
        {
            Assert.Fail(String.Format("Expected no exception, but exception of type {0} was thrown.", ex.GetType()));
        }
    }
}
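By way of usage, a test case might look something like this (hypothetical test code, just for illustration):

[TestMethod]
public void Constructor_Throws_On_Null_Argument()
{
    // Assert that this exact statement, and nothing else, throws the expected exception.
    ExtendedAssert.Throws<ArgumentNullException>(() => new Thing(null));
}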

Now, let’s put this on Nuget somehow. I found my way to this link, with instructions. Having no idea what I’m doing (though I did play with this once, maybe a year and a half ago), I’m going with the GUI option even though there’s also a command line option. So, I downloaded the installer and installed the Nuget package explorer.

From there, I followed the link’s instructions, more or less. I edited the package meta data to include version info, ID, author info, and a description. Then, I started to play around with the “Framework Assemblies” section, but abandoned that after a moment. Instead, I went up to Content->Add->Existing file and added ExtendedAssert. Once I saw the source code pop up, I was pretty content (sorry about the little Grindstone timer in the screenshot — didn’t notice ’til it was too late):

[Screenshot: Nuget Package Explorer with the package metadata and ExtendedAssert.cs content]

Next up, I ran Tools->Analyze Package. No issues found. Not too shabby for someone with no idea what he’s doing! Now, to go for the gusto — let’s publish this sucker. File->Publish and, drumroll please…. ruh roh. I need something called a “Publish Key” to publish it to nuget.org.

[Screenshot: the publish dialog prompting for a publish key]

But, as it turns out, getting an API key is simple. Just sign up at nuget.org and you get one. I used my Microsoft account to sign up. I uploaded my DaedTech logo for the profile picture and tweaked a few settings and got my very own API key (found by clicking on my account name under the “search packages” text box at the top). There was even a little clipboard logo next to it for handy copying, and I copied it into the window shown above, and, voila! After about 20 seconds, the publish was successful. I’d show you a screenshot, but I’m not sure if I’m supposed to keep the API key a secret. Better safe than sorry. Actually, belay that last thought — you are supposed to keep it a secret. If you click on “More Info” under your API key, it says, and I quote:

Your API key provides you with a token that identifies you to the gallery. Keep this a secret. You can always regenerate your key at any time (invalidating previous keys) if your token is accidentally revealed.

Emphasis mine — turns out my instinct was right. And, sorry for the freewheeling nature of this post, but I’m literally figuring this stuff out as I type, and I thought it might make for an interesting read to see how someone else pokes around at this kind of experimenting.

Okay, now to see if I can actually get that thing. I’m going to create a brand new test project in Visual Studio and see if I can install my beloved ExtendedAssert through Nuget, now.

[Screenshot: the ExtendedAssert package live in the Nuget gallery]

Holy crap, awesome! I’m famous! (Actually, that was so easy that I kind of feel guilty — I thought it’d be some kind of battle, like publishing a phone app or something). But, the moment of truth was a little less exciting. I installed the package, and it really didn’t do anything. My source code file didn’t appear. Hmmm…

After a bit of googling, I found this stack overflow question. Let’s give that a try, optimistically upvoting the question and accepted answer before I forget. I right clicked in the “package contents” window, added a content folder, and then dragged ExtendedAssert into that folder. In order to re-publish, I had to rev the version number, so I revved the patch decimal, since this is a hot patch to cover an embarrassing release if I’ve ever seen one. No time for testing on my machine or a staging environment — let’s slam this baby right into production!

Woohoo! It worked and compiled! Check it out:

[Screenshot: ExtendedAssert.cs successfully installed into the test project]

But, there’s still one sort of embarrassing problem — V1.0.1 has the namespace from whichever project I picked rather than the default namespace for the assembly. That’s kind of awkward. Let’s go back to google and see about tidying that up. First hit was promising. I’m going to try replacing the namespace with a “source code transformation” as shown here:

[Screenshot: the source code transformation]

Then, according to the link, I also need to change the filename to ExtendedAssert.cs.pp (this took me another publish to figure out that I won’t bore you with). Let’s rev again and go into production. Jackpot! Don’t believe me? Go grab it yourself.
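For the curious, the mechanism here is Nuget’s source code transformation: a content file whose name ends in .pp has its tokens replaced with project properties at install time. My transformed file looks roughly like this (body elided):

// ExtendedAssert.cs.pp -- Nuget swaps $rootnamespace$ for the
// installing project's default namespace at install time.
namespace $rootnamespace$
{
    public static class ExtendedAssert
    {
        // ... the methods shown earlier ...
    }
}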

The Lessons Here

A few things I’ll note at this point. First off, I recall that it’s possible to save these packages locally and for me to try them before I push to Nuget. I should definitely have done that, so there’s a meta-lesson here in that I fell into the classic newbie trap of thinking “oh, this is simple and it’ll just work, so I’ll push it to the server.” I’m three patches in and it’s finally working. Glad I don’t have tens of thousands of users for this thing.

But the biggest thing to take away from this is that Nuget is really easy. I had no idea what I was doing, and within an hour I had a package up. For the last 5 years or so, every time I started a new project, I’d shuffle around on the machine to find another ExtendedAssert.cs that I could copy into the new project. If it was a new machine, I’d email it to myself. A new job? Have a coworker at the old one email it to me. Sheesh, barbaric. And I put up with it for years, but not anymore. Given how simple this is, I’m going to start making little Nuget packages for all my miscellaneous source code goodies that I transport with me from project to project. I encourage you to do the same.

Creating a Word Document from Code with Spire

I’d like to tell you a harrowing, cautionary tale of my experience with the MS Office Interop libraries and then turn it into a story of redemption. Just to set the stage, these interop libraries are basically a way of programmatically creating and modifying MS Office files such as Word documents and Excel spreadsheets. The intended usage of these libraries is in a desktop environment, from a given user account, in the user space. The reason for this is that what they actually do is launch MS Word and start piping commands to it, telling it what to do to the current document. This legacy approach works reasonably well, albeit pretty awkwardly, from a user account. But what happens when you want to go from a legacy Winforms app to a legacy Webforms app and do this on a web server?

Microsoft has the following to say:

Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment.

Microsoft says, “yikes, don’t do that, and if you do, caveat emptor.” And, that makes sense. It’s not a great idea to allow service processes to communicate directly with Office documents anyway because of the ability to embed executable code in them.

Sometime back, I inherited an ecosystem of legacy Winforms and Webforms applications and one common thread was the use of these Interop libraries in both places. Presumably, the original author wasn’t aware of Microsoft’s stance on this topic and had gone ahead with using Interop on the web server, getting it working for the moment. I didn’t touch this legacy code since it wasn’t causing any issues, but one day a server update came down the pipeline and *poof* no more functioning Interop. This functionality was fairly important to people, so my team was left to do some scrambling to re-implement the functionality using PDF instead of MS Word. It was all good after a few weeks, but it was a stressful few weeks and I developed battle scars around not only doing things with those Interop libraries and their clunky API (see below) but with automating anything with Office at all. Use SSRS or generate a PDF or something. Anything but Word!

[Screenshot: the clunky Interop API]

But recently I was contacted by E-iceblue, who makes document management and conversion software in the .NET space. They asked if I’d take a look at their offering and write up my thoughts on it. I agreed, as I do agree to requests like this from time to time, but always with the caveat that I’ll write about my experience in earnest and not serve as a platform for a print-based commercial. Given my Interop horror story, the first thing I asked was whether the libraries could work on a server or not, and I was told that they could (presumably, although I haven’t verified, this is because they use the Open XML format rather than the legacy Interop paradigm). So, that was already a win.

I put this request in my back pocket for a bit because I’m already pretty backlogged with post requests and other assorted work, but I wound up having a great chance to try it out. I have a project up on Github that I’ve been pushing code to in order to help with my Pluralsight course authorship. Gist of it is that I create a directory and file structure for my courses as I work on them, and then another for submission, and I want to automate the busy-work. And one thing that I do for every module of every course is create a PowerPoint document and a Word document for scripting. So, serendipity — I had a reason to generate Word documents and thus to try out E-iceblue’s product, Spire.

Rather than a long post with screenshots and all of that, I did a video capture. I want to stress that what you’re seeing here is me, having no prior knowledge of the product at all, and armed only with a link to tutorials that my contact there sent to me. Take a look at the video and see what it’s like. I spend about 4-5 minutes getting set up and, at the end of it, I’m using a nice, clean API to successfully generate a Word document.
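For a sense of what that API looks like, the gist of what I do in the video amounts to something like the following (reproduced from memory, so treat the exact member names as approximate):

using Spire.Doc;
using Spire.Doc.Documents;

// Create a document with one section and one paragraph of text.
Document document = new Document();
Section section = document.AddSection();
Paragraph paragraph = section.AddParagraph();
paragraph.AppendText("Module 1 script goes here.");

// Save it out as a .docx file.
document.SaveToFile("Module1.docx", FileFormat.Docx);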

I’ll probably have some more posts in the hopper with this as I start doing more things with it (PowerPoint, more complex Word interaction, conversion, etc.). Early returns on this suggest it’s worth checking out, and, as you can see, the barriers to entry are quite low, and I’ve barely scratched the surface of just one line of offerings.

Introduction to Static Analysis (A Teaser for NDepend)

Rather than the traditional lecture approach of providing an official definition and then discussing the subject in more detail, I’m going to show you what static analysis is and then define it. Take a look at the following code and think for a second about what you see. What’s going to happen when we run this code?

private void SomeMethod()
{
	int x = 1;
	if(x == 1)
		throw new Exception();
}

Well, let’s take a look:

[Screenshot: the unhandled exception being thrown]

I bet you saw this coming. In a program that does nothing but set x to 1, and then throw an exception if x is 1, it isn’t hard to figure out that the result of running it will be an unhandled exception. What you just did there was static analysis.

Static analysis comes in many shapes and sizes. When you simply inspect your code and reason about what it will do, you are performing static analysis. When you submit your code to a peer to have her review it, she does the same thing. Like you and your peer, compilers perform static analysis, though automated rather than manual. They check the code for syntax errors or linking errors that would guarantee failures, and they will also provide warnings about potential problems such as unreachable code or assignment instead of evaluation. Products also exist that will check your source code for stylistic guideline conformance rather than worrying about what happens at runtime, and, in managed languages, products exist that will analyze your compiled IL or byte code for problematic characteristics. The common thread here is that all of these examples of static analysis involve analyzing your code without actually executing it.
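To make the compiler-warning example concrete, here is a trivial sketch of code that a compiler will flag without ever running it:

private int GetValue()
{
    return 42;
    Console.WriteLine("Starting..."); // warning CS0162: unreachable code detected
}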

Analysis vs Reactionary Inspection

People’s interactions with their code tend to gravitate away from analysis. Whether it’s unit tests and TDD, integration tests, or simply running the application, programmers tend to run experiments with their code and see what happens. This is known as a feedback loop, and programmers use the feedback to guide what they’re going to do next. While obviously some thought is given to what impact changes to the code will have, the natural tendency is to adopt an “I’ll believe it when I see it” mentality.

private void SomeMethod()
{
	var randomGenerator = new Random();
	int x = randomGenerator.Next(1, 10);
	Console.WriteLine(x);
}

We tend to ask “what happened?” and we tend to orient our code in such ways as to give ourselves answers to that question. In this code sample, if we want to know what happened, we execute the program and see what prints. This is the opposite of static analysis in that nobody is trying to reason about what will happen ahead of time, but rather the goal is to do it, see what the outcome is, and then react as needed to continue.

Reactionary inspection comes in a variety of forms, such as debugging, examining log files, observing the behavior of a GUI, etc.

Static vs Dynamic Analysis

The conclusions and decisions that arise from the reactionary inspection question of “what happened” are known as dynamic analysis. Dynamic analysis is, more formally, inspection of the behavior of a running system. This means that it is an analysis of characteristics of the program that include things like how much memory it consumes, how reliably it runs, how much data it pulls from the database, and generally whether it correctly satisfies the requirements or not.

Assuming that static analysis of a system is taking place at all, dynamic analysis takes over where static analysis is not sufficient. This includes situations where unpredictable externalities such as user inputs or hardware interrupts are involved. It also involves situations where static analysis is simply not computationally feasible, such as in any system of real complexity.

As a result, the interplay between static analysis and dynamic analysis tends to be that static analysis is a first line of defense designed to catch obvious problems early. Besides that, it also functions as a canary in the mine to detect so-called “code smells.” A code smell is a piece of code that is often, but not necessarily, indicative of a problem. Static analysis can thus be used as an early detection system for obvious or likely problems, and dynamic analysis has to be sufficient for the rest.
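As an example of a code smell, consider an empty catch block — not necessarily wrong, but frequently a sign of trouble (a hypothetical snippet; ProcessFile and path are stand-ins):

try
{
    ProcessFile(path);
}
catch (Exception)
{
    // Swallowing the exception silently: occasionally intentional, usually a smell.
}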


Source Code Parsing vs. Compile-Time Analysis

As I alluded to in the “static analysis in broad terms” section, not all static analysis is created equal. There are types of static analysis that rely on simple inspection of the source code. These include the manual source code analysis techniques such as reasoning about your own code or doing code review activities. They also include tools such as StyleCop that simply parse the source code and make simple assertions about it to provide feedback. For instance, it might read a code file containing the word “class” and see that the next word after it is not capitalized and return a warning that class names should be capitalized.
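In the example just described, the offending source might be as simple as:

// A style checker would flag this type name for not being capitalized.
public class orderProcessor
{
}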

This stands in contrast to what I’ll call compile-time analysis. The difference is that this form of analysis requires an encyclopedic understanding of how the compiler behaves or else the ability to analyze the compiled product. This set of options obviously includes the compiler itself, which will fail on show-stopper problems and generate helpful warning information as well. It also includes enhanced rules engines that understand the rules of the compiler and can use this to infer a larger set of warnings and potential problems than those that come out of the box with the compiler. Beyond that is a set of IDE plugins that perform asynchronous compilation and offer real-time feedback about possible problems. Examples of this in the .NET world include ReSharper and CodeRush. And finally, there are analysis tools that look at the compiled assembly outputs and give feedback based on them. NDepend is an example of this, though it includes other approaches mentioned here as well.

The important compare-contrast point to understand here is that source analysis is easier to understand conceptually and generally faster while compile-time analysis is more resource intensive and generally more thorough.

The Types of Static Analysis

So far I’ve compared static analysis to dynamic and ex post facto analysis and I’ve compared mechanisms for how static analysis is conducted. Let’s now take a look at some different kinds of static analysis from the perspective of their goals. This list is not necessarily exhaustive, but rather a general categorization of the different types of static analysis with which I’ve worked.

  • Style checking is examining source code to see if it conforms to cosmetic code standards
  • Best Practices checking is examining the code to see if it conforms to commonly accepted coding practices. This might include things like not using goto statements or not having empty catch blocks
  • Contract programming is the enforcement of preconditions, invariants and postconditions (see the sketch just after this list)
  • Issue/Bug alert is static analysis designed to detect likely mistakes or error conditions
  • Verification is an attempt to prove that the program is behaving according to specifications
  • Fact finding is analysis that lets you retrieve statistical information about your application’s code and architecture
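To make the contract programming bullet concrete, a hand-rolled precondition might look like this (a minimal sketch; libraries like Code Contracts formalize the same idea):

public void Withdraw(decimal amount)
{
    // Precondition: amount must be positive.
    if (amount <= 0)
        throw new ArgumentOutOfRangeException("amount", "Amount must be positive.");

    _balance -= amount; // _balance is a hypothetical field
}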

There are many tools out there that provide functionality for one or more of these, but NDepend provides perhaps the most comprehensive support across the board for different static analysis goals of any .NET tool out there. You will thus get to see in-depth examples of many of these, particularly the fact finding and issue alerting types of analysis.

A Quick Overview of Some Example Metrics

Up to this point, I’ve talked a lot in generalities, so let’s look at some actual examples of things that you might learn from static analysis about your code base. The actual questions you could ask and answer are pretty much endless, so this is intended just to give you a sample of what you can know.

  • Is every class and method in the code base in Pascal case?
  • Are there any potential null dereferences of parameters in the code?
  • Are there instances of copy and paste programming?
  • What is the average number of lines of code per class? Per method?
  • How loosely or tightly coupled is the architecture?
  • What classes would be the most risky to change?

Believe it or not, it is quite possible to answer all of these questions without executing your code or manually inspecting it in time-consuming fashion. There are plenty of tools out there that can offer answers to some questions like this that you might have, but in my experience, none can answer as many, in as much depth, and with as much customizability as NDepend.
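As an example of the fact finding variety, asking NDepend for the largest classes in a code base looks roughly like this in its query language (the property names are from memory, so consider this a sketch):

from t in Application.Types
orderby t.NbLinesOfCode descending
select new { t, t.NbLinesOfCode }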

Why Do This?

So all that being said, is this worth doing? Why should you watch the subsequent modules if you aren’t convinced that this is something that’s even worth learning? It’s a valid concern, but I assure you that it is most definitely worth doing.

  • The later you find an issue, typically, the more expensive it is to fix. Catching a mistake seconds after you make it, as with a typo, is as cheap as it gets. Having QA catch it a few weeks after the fact means that you have to remember what was going on, find it in the debugger, and then figure out how to fix it, which means more time and cost. Fixing an issue that’s blowing up in production costs time and effort, but also business and reputation. So anything that exposes issues earlier saves the business money, and static analysis is all about helping you find issues, or at least potential issues, as early as possible.
  • But beyond just allowing you to catch mistakes earlier, static analysis actually reduces the number of mistakes that happen in the first place. The reason for this is that static analysis helps developers discover mistakes right after making them, which reinforces cause and effect a lot better. The end result? They learn faster not to make the mistakes they’d been making, causing fewer errors overall.
  • Another important benefit is that maintenance of code becomes easier. By alerting you to the presence of “code smells,” static analysis tools are giving you feedback as to which areas of your code are difficult to maintain, brittle, and generally problematic. With this information laid bare and easily accessible, developers naturally learn to avoid writing code that is hard to maintain.
  • Exploratory static analysis turns out to be a pretty good way to learn about a code base as well. Instead of the typical approach of opening the code base in an IDE and poking around or stepping through it, developers can approach the code base instead by saying “show me the most heavily used classes and which classes use them.” Some tools also provide visual representations of the flow of an application and its dependencies, further reducing the learning curve developers face with a large code base.
  • And a final and important benefit is that static analysis improves developers’ skills and makes them better at their craft. Developers don’t just learn to avoid mistakes, as I mentioned in the mistake reduction bullet point, but they also learn which coding practices are generally considered good ideas by the industry at large and which practices are not. The compiler will tell you that things are illegal and warn you that others are probably errors, but static analysis tools often answer the question “is this a good idea?” Over time, developers start to understand subtle nuances of software engineering.

There are a couple of criticisms of static analysis. The main ones are that the tools can be expensive and that they can create a lot of “noise” or “false positives.” The former is a problem for obvious reasons and the latter can have the effect of counteracting the time savings by forcing developers to weed through non-issues in order to find real ones. However, good static analysis tools mitigate the false positives in various ways, an important one being to allow the shutting off of warnings and the customization of what information you receive. NDepend turns out to mitigate both: it is highly customizable and not very expensive.

Reference

The contents of this post were mostly taken from a Pluralsight course I did on static analysis with NDepend. Here is a link to that course. If you’re not a Pluralsight subscriber but are interested in taking a look at the course or at the library in general, send me an email to erik at daedtech and I can give you a 7-day trial subscription.