DaedTech

Stories about Software

Have You Seen the NDepend API?

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.

If you’re familiar with NDepend, you’re probably familiar with the Visual Studio plugin, the out-of-the-box metrics, the excellent visualization tools, and the iconic Zone of Uselessness/Zone of Pain chart.  These features feel familiar to NDepend users and have likely found their way into your normal application development process.

NDepend has other features as well, however, some of which I hear discussed less frequently.  The NDepend API has membership in that “lesser-known NDepend features club.”  Yes, that’s right — if you didn’t know this, NDepend has an API that you can use.

You may be familiar, as a user, with the NDepend power tools.  These include some pretty powerful capabilities, such as duplicate code detection, so it stands to reason that you may have played with them or even that you might routinely use them.  But what you may not realize is that the power tools’ source code accompanies the installation of NDepend, and it furnishes a great series of examples of how to use the NDepend API.

NDepend’s API is contained in the DLLs that support the executable and plugin, so you needn’t do anything special to obtain it.  The NDepend website also treats the API as a first-class citizen, providing detailed, excellent documentation.  With your NDepend installation, you can get up and running quickly with the API.

Probably the easiest way to introduce yourself to the API is to open the source code for the power tools project and to add a power tool, or generally to modify that assembly.  If you want to create your own assembly to use the power tools, you can do that as well, though it is a bit more involved.  The purpose of this post is not to do a walk-through of setting up with the power tools, however, since that can be found here.  I will, though, mention two things worth bearing in mind as you get started.

  1. If you want to use the API outside of the installed project directory, there is additional setup overhead.  Because it leverages proprietary parts of NDepend under the covers, setup is more involved than just adding a DLL by reference.
  2. Because of point (1), if you want to create your own assembly outside of the NDepend project structure, be sure to follow the setup instructions exactly.
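To give you a flavor of what working with the API feels like, here is a minimal sketch, adapted from memory of the getting-started documentation: load a project, run an analysis, and query the resulting code model with LINQ.  Treat the exact namespace, type, and method names here as assumptions to verify against the documentation for your NDepend version, and the project path as a placeholder.

```csharp
using System;
using System.Linq;
using NDepend;                  // namespaces assumed from the getting-started docs
using NDepend.Analysis;
using NDepend.CodeModel;
using NDepend.Path;
using NDepend.Project;

class ApiSketch
{
    static void Main()
    {
        // Entry point into the API (name taken from the getting-started docs).
        var servicesProvider = new NDependServicesProvider();
        var projectManager = servicesProvider.ProjectManager;

        // Load an existing NDepend project file; the path is hypothetical.
        IProject project = projectManager.LoadProject(
            @"C:\Projects\MyApp\MyApp.ndproj".ToAbsoluteFilePath());

        // Run an analysis and get back a queryable model of the codebase.
        IAnalysisResult analysisResult = project.RunAnalysis();
        ICodeBase codeBase = analysisResult.CodeBase;

        // From here, plain LINQ serves as your query language.
        foreach (var type in codeBase.Application.Types
                                     .Where(t => t.NbLinesOfCode > 100))
        {
            Console.WriteLine($"{type.FullName}: {type.NbLinesOfCode} lines");
        }
    }
}
```

The appeal lies in that last loop: once the analysis completes, your entire codebase becomes queryable with ordinary LINQ, which is exactly the mechanism the power tools themselves build on.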

Read More

Making Devs, Architects, and Managers Happy with the Static Analysis Tool

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  I use NDepend to drive my custom static analysis of .NET codebases — you should check it out.

Organizations come to possess static analysis tools in a variety of ways.  In some cases, management decides on some kind of quality initiative and buys the tool to make it so.  Or, perhaps some enterprising developers sold management on the tool and now, here it is.  Maybe it came out of the architecture group’s budget.

But however the tool arrives, it will surprise people.  In fact, it will probably surprise most people.  And surprises in the corporate context can easily go over like a lead balloon.  So the question becomes, “how do we make sure everyone feels happy with the new tool?”

A Question of Motivation

Before we can discuss what makes people happy, we need to look first at what motivates them.  Not all folks in the org will share the same motivation; it will generally vary by role.  The specific case of static analysis presents no exception to this general rule.

When it comes to static analysis, management has the simplest motivation.  Managers lack the development team’s insights into the codebase.  Instead, they perceive it only in terms of qualitative outcomes and lagging indicators.  Static analysis offers them a unique opportunity to see leading indicators and plan for issues around code quality.

Architects tend to view static analysis tools as empowering for them, once they get over any initial discomfort. Frequently, they define patterns and practices for teams, and static analysis makes these much easier to enforce.  Of course, the tool might also threaten the architects, should its guidance be at odds with their historical proclamations about developer practice.

For developers, complexity around motivation grows.  They may share the architects’ feelings of threat, should the tool disagree with historical practice.  But they may also feel empowered or vindicated, should it give them voice for a minority opinion.  The tool may also make them feel micromanaged if used to judge them, or empowered if it affords them the opportunity to learn.

Read More

Static Analysis for the Build Machine?

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re over there, download a trial of NDepend if you want to see some quantification of the tech debt in your codebase.

I remember my earliest experiences with static analysis.  Probably a decade ago, I started to read about it during grad school and to poke around with it at work.  Immediately, I knew I had discovered a powerful advantage for programmers.  These tools automated knowledge.

While I felt happy to share the knowledge with coworkers, their lack of interest didn’t disappoint me.  After all, it felt as though I had some sort of trade secret.  If those around me chose not to take advantage, I would shine by comparison.  (I have since, I’d like to think, matured a bit.)  Static analysis became my private competitive advantage — Sabermetrics for programmers.

So as you can imagine, running it on the build machine would not have occurred to me.  And that assumes a sophisticated enough setup that doing so made sense (not really the case back then).  Static analysis was my ace in the hole for writing good code — a personal choice and technique.

Fast forward a decade.  I have now grown up, worked with many more teams, and played many more roles.  And, of course, the technological landscape has changed.  All of that combined to cause a complete reversal of my opinion.  Static analysis and its advantages matter far too much not to use it on the build machine.  Today, I’d like to expand on that a bit.

What Does It Mean to Have Static Analysis on the Build Machine?

The first point of expansion should be to clarify what this means.  In the general sense, it means that the machine responsible for assembling your code into something deployable also executes static analysis on that code.  Back when I first discovered static analysis, this would have meant pulling the code from source control on that machine, running the analysis, and building it.  These days, hopefully your build involves more automation than that.

To get a bit more detailed, a commit may trigger your build machine to kick off a build.  That could just mean compiling your code and packing it into a deployable format.  But shops commonly do other things, such as execute automated unit tests and check on code coverage.  And you can add static analysis to the mix.

You can certainly get creative with how you do this.  But two common (and not mutually exclusive) approaches include having the build generate a code health report and failing the build on egregious violations.
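As a sketch of the “fail the build on egregious violations” approach, imagine a small console step that your build server runs after analysis.  Everything here is hypothetical — the report file name, its XML shape, and the severity scheme are placeholders to adapt to your tool — but the one real convention it relies on is that build servers treat a nonzero exit code as a failed build.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Hypothetical build gate: parse an analysis report and fail the build
// when critical violations appear.  Report format and names are
// placeholders; adapt them to whatever your analysis tool emits.
static class StaticAnalysisGate
{
    static int Main(string[] args)
    {
        var reportPath = args.Length > 0 ? args[0] : "analysis-report.xml";

        // Assumed shape: <Violations><Violation Severity="Critical" ... /></Violations>
        var report = XDocument.Load(reportPath);
        var criticalCount = report.Descendants("Violation")
            .Count(v => (string)v.Attribute("Severity") == "Critical");

        Console.WriteLine($"Critical violations found: {criticalCount}");

        // Build servers conventionally treat a nonzero exit as a failed build.
        return criticalCount == 0 ? 0 : 1;
    }
}
```

The report-generation approach works the same way, minus the exit code: the build publishes the analysis output as an artifact for the team to review, and nobody’s commit gets rejected.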

Read More

The Relationship between Static Analysis and Continuous Testing

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.

As an adult, I have learned that I have an introverted personality type.  I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person.  So, learning about this characterization surprised me somewhat, but only until I fully understood it.

I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding.  This describes me to a tee.  However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously.  Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.

I received just such a question the other day.  The question, more or less, was, “if we have continuous testing, do we really need static analysis?”  And, just like that, I was stumped.  This didn’t square for me, and I wanted time to think on it.  Luckily, I’ve had a bit of time.  (This is why I love blogging.)

Continuous Testing, Defined

Before we go into the relationship between the concepts, let’s first clarify them.  That way we’ll have no inadvertent misunderstandings caused by buzzwords.

My first introduction to continuous testing was through a tool called NCrunch.  It bills itself as an “automated concurrent testing tool,” which certainly offers more precision than “continuous testing.”  NCrunch is awesome.  If you practice TDD or have unit tests, give it a look.  It runs your tests continuously as you write code, providing you real-time, in-IDE, visual feedback as you make changes.  Accidentally delete a line and watch the side of your editor window go immediately red.
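To make that feedback loop concrete, consider a trivial, illustrative example (NUnit here, but any test framework works; the names are mine, not from the original post).

```csharp
using NUnit.Framework;

public static class Calculator
{
    public static int Add(int x, int y) => x + y;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_Returns_The_Sum_Of_Its_Operands()
    {
        Assert.AreEqual(5, Calculator.Add(2, 3));
    }
}
```

With a continuous testing tool running, change `x + y` to `x - y` and the failing test shows up red in the editor margin the instant you type it, with no manual test run and no waiting for a build.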

In the intervening years, I have seen this term broaden into a companion for concepts such as continuous integration (CI) and continuous deployment (CD).  In agile environments, we integrate constantly and, ideally, deploy (somewhere) constantly.  Why give testing short shrift?  With continuous testing, your environments constantly pepper your build candidates with runtime tests, providing feedback early instead of near the end of the sprint.

So we have two concepts that we can generalize.  The first concept involves tightening the unit test feedback loop, while the second involves the same for integration and acceptance tests.  In both cases, we consistently test our code’s runtime behavior with a fast-feedback loop.

Read More

Don’t Just Flag It — Fix It!

Editorial Note: I originally wrote this post for the SubMain blog.  You can check out the original here, at their site.  While you’re there, download a trial of CodeIt.Right.

More years ago than I’d care to admit, I took a software engineering course as part of my graduate CS program.  At the time, I worked a full time job during the day and did remote classes in the evening.  As a result, I disproportionately valued classes with applicability to my job.  And this class offered plenty of that.

We scratched the surface on such diverse topics as agile methodologies, automated testing, cost of code ownership, and more.  But I found myself perhaps most interested in the dive we did into refactoring.  The idea of reworking the internal structure of code while preserving inputs and outputs is a surprisingly complex one.

Historical Complexity of Refactoring

At the risk of dating myself, I took this course in the fall of 2006.  While automated refactorings in your IDE now seem commonplace, back then, they were hard.  In fact, the professor of the course considered them sufficiently difficult that he steered a group of mine away from a project implementing some.  In the world of 2006, I suspect he had the right of it.  We steered clear.

In 2016, implementing automated refactorings still presents a challenge.  But modern tool and IDE vendors can stand on the shoulders of giants, so to speak.  Back then?  Not so much.

Refactorings present a unique challenge to tool vendors because of the inherent risk.  They can really screw up users’ code.  If a mistake happens, best case scenario is that the resultant code fails to compile because then, at least, it fails fast.  Worse still is semantically and syntactically correct code that somehow behaves improperly.  In this situation, a refactoring — a safe change to code — becomes a modification to the behavior of production code instead.  Ouch.
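A classic concrete case of that failure mode: a rename refactoring applied to code that also gets invoked reflectively.  The tool produces code that compiles and looks correct while silently changing runtime behavior.  This example is illustrative, not from the original post.

```csharp
using System;

public class ReportGenerator
{
    // Imagine a rename refactoring just changed this from "Generate"
    // to "CreateReport" and updated every direct call site.
    public void CreateReport() => Console.WriteLine("report generated");
}

public static class Scheduler
{
    public static void RunNightlyJob()
    {
        // The refactoring tool cannot see into this string, so it still
        // says "Generate."  Everything compiles, but GetMethod now
        // returns null and the nightly job silently does nothing.
        var method = typeof(ReportGenerator).GetMethod("Generate");
        method?.Invoke(new ReportGenerator(), null);
    }
}
```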

On top of the risk, the implementation of refactoring anywhere beyond the trivial involves heady concepts such as abstract syntax trees.  In other words, it’s not for lightweights.  So to recap, refactoring is risky and difficult.  And this is the landscape faced by tool authors.

I Don’t Fix — I Just Flag

If you live in the US, you may have seen a commercial that features a funny quip.  If I’m not mistaken, it advertises some sort of fraud prevention service.  (Pardon any slight inaccuracies, as I recount this as best I can, from memory.)

In the ad, bank robbers hold a bank hostage in a rather clichéd, dramatic scene.  Off to the side, a woman stands near a security guard, asking him why he didn’t do anything to stop it.  “I’m not a robbery prevention service — I’m a robbery monitoring service.  Oh, by the way, there’s a robbery.”

It brings a chuckle, but it also brings an underlying point.  In many situations, monitoring alone can prove woefully ineffective, prompting frustration.  As a former manager and current consultant, I generally advise people that they should only point out problems when they have also prepared solution proposals.  It can mean the difference between complaining and solving.

So you can imagine and probably share my frustration at tools that just flag problems and leave it to you to investigate further and fix them.  We feel like the woman standing next to the “robbery monitor,” wondering how useful the service is to us.

Levels of Solution

Going back to the subject of software development, we see this dynamic in a number of places.  The compiler, the IDE, productivity add-ins, static analysis tools, and linting utilities all offer us warnings to heed.

Often, that’s all we get.  The utility says, “hey, something is wrong here, but you’re going to have to figure out what.”  I tend to think of that as the basic level of service, or level 0, if you will.

The next level, level 1, involves at least offering some form of next action.  It might be as simple as offering a help file, inline reading, or a link to more information.  Anything above “this is a problem.”

Level 2 ups the ante by offering a recommendation for what to do next.  “You have a dependency cycle.  You should fix this by looking at these three components and removing one mutual dependency.”  It goes beyond giving you a next thing to do and gives you the next thing to do.

Level 3 rounds out the field by actually performing the action for you (following a prompt, of course).  “You’ve accidentally hidden a method on the parent class.  Click here to rename or click here to make parent virtual.”  That’s just an example off the top of my head, of course, but it illustrates the interaction paradigm.  “We’ve noticed a problem, and you can click here to fix it.”
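That hidden-method example happens to map to a real compiler warning in C#, which makes it a handy way to see all the levels in one small sketch:

```csharp
public class Parent
{
    public void Save() { /* ... */ }
}

public class Child : Parent
{
    // Level 0/1: the compiler flags warning CS0108 here ("'Child.Save()'
    // hides inherited member 'Parent.Save()'") and points you at docs.
    // Level 2: a tool recommends concrete fixes: rename this method, add
    // the "new" keyword if hiding was intended, or make Parent.Save
    // virtual and override it here.
    // Level 3: the tool offers to apply one of those rewrites for you,
    // e.g. "public override void Save()" with Parent.Save made virtual.
    public void Save() { /* ... */ }
}
```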

Fixes in Your Tooling

When evaluating your own tools, look to climb as high up this hierarchy as you can.  Favor tools that not only identify problems but also offer fixes whenever possible.

There are a number of such tools out there, including CodeIt.Right.  Using tools like this is a pleasure because it removes the burden of research and implementation from you.  You can still do the research if you want, but at your own leisure; it’s much better to dig in at your leisure than when you’re trying to accomplish something else.

The other important concern here is that you find trusted tooling to help you with this sort of thing.  After all, you don’t want something messing with your source code if it might mess up your source code.  But, assuming you can trust it, this provides an invaluable boost to your effectiveness by automatically resolving your problems and by helping you learn.

In the year 2016, we have far more tooling available, with a far better track record, than we did in 2006.  Leverage it whenever possible so that you can focus on solving the pressing problems of your day-to-day work.