DaedTech

Stories about Software

Let’s Build a Metric 3: Compositeness

Last time, I talked through a little bit of housekeeping on the way to creating a metric that would be, uniquely, ours.  Never mind the fact that, under the hood, the metric is still lines of code.  It now has a promising name and is expressed in the units we want.  And I think that’s okay.  There is a lot of functionality and ground to cover in NDepend, so a steady, measurable pace makes sense.

It’s time now to start thinking about the nature of the metric to be created here, which is essentially a measure of time.  That’s pretty ambitious because it contains two components: defining a composite metric (recall that this is a mathematical transform on measured properties) and then tying it to an observed outcome via experimentation.  In this series, I’m not assuming that anyone reading has much advanced knowledge about static analysis and metrics, so let’s get you to the point where you grok a composite metric.  We’ll tackle the experimentation a little later.

A Look at a Composite Metric

I could walk you through creating a second query under the “My Metrics” group that we created, but I also want this to be an opportunity to explore NDepend’s rich feature set.  So instead, navigate to NDepend->Metric->Code Quality->Types with Poor Cohesion.

When you do that, you’re going to see a metric much more complicated than the one we defined in the “Queries and Rules Edit” window.  Here’s the code for it, comments and all.

// Types with poor cohesion
warnif count > 0 from t in JustMyCode.Types where 
  (t.LCOM > 0.8 || t.LCOMHS > 0.95) && 
  t.NbFields > 10 && 
  t.NbMethods > 10 
  orderby t.LCOM descending, t.LCOMHS descending
select new { t, t.LCOM, t.LCOMHS, 
                t.NbMethods, t.NbFields }

// Types where LCOM > 0.8 and NbFields > 10 
// and NbMethods > 10 might be problematic. 
// However, it is very hard to avoid such 
// non-cohesive types. The LCOMHS metric
// is often considered as more efficient to 
// detect non-cohesive types.
// See the definition of the LCOM metric here 
// http://www.ndepend.com/Metrics.aspx#LCOM

There’s a good bit to process here.  The CQLinq code inspects Types, meaning any class or struct in your code base (well, okay, in my code base), and issues a warning for each one that matches.  And what does matching mean?  Looking at the compound conditional statement, a type matches if it has LCOM greater than 0.8 or LCOMHS greater than 0.95, and it also has more than 10 fields and more than 10 methods.  So, to recap, poor cohesion means that there are a good number of fields, a good number of methods, and… something… for these acronyms.

LCOM stands for “Lack [of] Cohesion of Methods.”  If you look up cohesion in the dictionary, you’ll find the second definition particularly suited for this conversation: “cohering or tending to cohere; well-integrated; unified.”  We’d say that a type is “cohesive” if it is unified in purpose or, to borrow from the annals of clean code, if it conforms to the single responsibility principle.  To get a little more concrete, consider an extremely cohesive class.

public class Recipe
{
    private List<Ingredient> _ingredients = new List<Ingredient>();

    public void AddIngredient(Ingredient ingredient)
    {
        _ingredients.Add(ingredient);
    }

    public void StartOver()
    {
        _ingredients.Clear();
    }

    public IEnumerable<Ingredient> GetAllIngredients()
    {
        return _ingredients;
    }
}

This class is extremely cohesive.  It has one field and three methods, and every method in the class operates on the field.  Type cohesion might be described as “how close do you get to every method operating on every field?”

Now, here’s the crux of our challenge in defining a composite metric: how do you take that anecdotal, qualitative description, and put a number to it?  How do you get from “wow, Recipe is pretty cohesive” to 0.0?
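
To foreshadow a bit, here’s a minimal sketch of the kind of calculation involved, assuming the commonly cited formulation LCOM = 1 - sum(MF)/(M*F), where M is the method count, F is the field count, and MF is the number of methods accessing a given field.  The helper and its names are mine for illustration, not NDepend’s API.

using System.Collections.Generic;
using System.Linq;

public static class CohesionSketch
{
    // LCOM = 1 - sum(MF) / (M * F).  Each entry in methodFieldAccesses
    // maps a method name to the set of field names that method touches.
    public static double Lcom(
        IReadOnlyDictionary<string, ISet<string>> methodFieldAccesses,
        int fieldCount)
    {
        int methodCount = methodFieldAccesses.Count;
        if (methodCount == 0 || fieldCount == 0)
            return 0.0; // nothing to measure, so call it cohesive

        // sum(MF): the total number of (method, field) access pairs.
        int sumMf = methodFieldAccesses.Values.Sum(fields => fields.Count);

        // 0.0 means perfectly cohesive; values near 1.0 mean anything but.
        return 1.0 - (double)sumMf / (methodCount * fieldCount);
    }
}

Run Recipe through it and you get M = 3, F = 1, and sum(MF) = 3, since every method touches the lone field: LCOM = 1 - 3/3 = 0.0, which matches the intuition that Recipe is perfectly cohesive.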

I originally wrote this post for the NDepend blog.  Click here to read the rest and hear me talk about going from qualitative to quantitative.

Make the Complex Simple

For many years, I associated the concept of “making the complex simple” with teaching. And that’s certainly not wrong. We’re in an industry filled with complexity, both essential and accidental. Surviving in this industry requires understanding essential complexity and eliminating accidental complexity, something novices struggle with. As developers become self-sufficient, they figure out complexity reduction well enough to mentor others in the concept. Once they get to the point of teaching concepts pretty seriously — giving conference talks, creating courses, coaching, etc. — it can definitely be said they’ve become good at “making the complex simple.”

Of course, it could also be said that the term applies to communications with non-technical stakeholders and not just teaching inexperienced developers. Think fast — how would you explain to the CIO who doesn’t have a programming background why you should stop delivering features for a couple of weeks in order to retrofit an IoC container onto your codebase? If you start saying things like “inject your dependencies” and “switch your database driver without recompiling,” you’re keeping the complex complex as the CIO stares blankly at you. Making it simple isn’t easy, is it?

To take complicated concepts and communicate them simply, with minimized loss of pertinent information, is a skill you could (and should) spend a lifetime improving. It requires overcoming the curse of knowledge, understanding your subject matter extensively, knowing your target audience’s world fairly well, being adept at mapping concepts and creating analogies, communicating clearly and, oh yeah, often doing it all off the cuff. Piece of cake, right?

Hard though it may be, it’s a skill worth developing.

I originally wrote this post for John Sonmez’s site, Simple Programmer.  Click here to read the rest of my argument as to why you should develop this skill.

Get Good at Testing Your Own Software

There’s a conventional wisdom that says software developers can’t test their own code.  I think it’s really more intended to say that you can’t meaningfully test the behavior of software that you’ve written to behave a certain way.  The reasoning is simple enough.  If you write code with the happy path in mind, you’ll always navigate the happy path when testing it, being hoodwinked by a form of confirmation bias.

To put it more concretely, imagine that you write a piece of code that reads a spreadsheet, tabulates sums and averages, and reports these to a user.  As you build out this little application, one of the first things you’ll do is get it successfully reading the file so that you can write the other parts of the application that depend on this prerequisite.  Over the course of your development, you’ll be less likely to test all of the things that can go wrong with reading the spreadsheet because you’ll develop a kind of muscle memory of getting the import right as you move on to test the averages and sums on which you’re concentrating.  You won’t think, “what if the columns are switched around?” or “what if I pass in a Word document instead of a spreadsheet?”
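
Each of those “what if” questions translates directly into a sad-path test.  Here’s a minimal, self-contained sketch of the idea using xUnit; the guard class and the file names are mine for illustration, not from any real application.

using System;
using System.IO;
using Xunit;

public static class SpreadsheetImportGuard
{
    // Reject anything that isn't a supported spreadsheet format
    // before any parsing begins.
    public static void EnsureIsSpreadsheet(string path)
    {
        var extension = Path.GetExtension(path);
        if (!string.Equals(extension, ".xlsx", StringComparison.OrdinalIgnoreCase) &&
            !string.Equals(extension, ".csv", StringComparison.OrdinalIgnoreCase))
        {
            throw new InvalidDataException($"Not a supported spreadsheet: {path}");
        }
    }
}

public class SpreadsheetImportGuardTests
{
    [Fact]
    public void EnsureIsSpreadsheet_WordDocument_Throws()
    {
        // The "what if I pass in a Word document?" case, made explicit.
        Assert.Throws<InvalidDataException>(
            () => SpreadsheetImportGuard.EnsureIsSpreadsheet("report.docx"));
    }

    [Fact]
    public void EnsureIsSpreadsheet_CsvFile_DoesNotThrow()
    {
        SpreadsheetImportGuard.EnsureIsSpreadsheet("totals.csv");
    }
}

Writing the guard is easy; remembering to write that first kind of test is the part that confirmation bias sabotages.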

Because of this effect, we’re scolded not to test our own software.  That’s why QA departments exist.  They have the proper perspective and separation so as not to be blinded by their knowledge of how things are supposed to go.  But does this really mean that you can’t test your own software?  It may be that others are more naturally suited to do it, but you can certainly work to reduce the size and scope of your own blind spot so that you can be more effective in situations where circumstances press you into testing your own code.  You might be doing a passion project on the side or be the only technical member of a startup – you won’t always have a choice.

Let’s take a look at some techniques that will help you be more effective at testing software, whether written by you or someone else.  I originally wrote this post for the Infragistics blog, so click here to read about the techniques to which I’m referring.

A Test Coverage Primer for Managers

Managing Blind

Let me see if I can get in your head a little bit.  You manage a team of developers and it goes well at times.  But, at other times, not so much.  Deadlines slide past without a deliverable, weird things happen in production and you sometimes get deja vu when a bug comes in that you could swear the team fixed 3 months ago.

What do you do?  It’s a maddening problem because, even though you may once have been technical, you can’t really dive into the code and see what’s going on.  Are there systemic problems with the code base, or are you just experiencing normal growing pains?  Are the developers in your group painting an accurate picture of what’s going on?  Are they all writing good code?  Are they writing decent code?  Are any of them writing decent code?  Can you rely on them to tell you?

As I said, it’s maddening.  And it’s hard.  It’s like coaching a sports team where you’re not allowed to watch the game.  All you can do is rely on what the players tell you is going on in the game.

A Light in the Darkness

And then, you light upon a piece of salvation: automated unit tests.  They’re perfect because, as you’ll learn from modern boilerplate, they’ll help you guard against regressions, prevent field defects, keep your code clean and modular, and plenty more.  You’ve got to get your team to start writing tests and start writing them now.

But you weren’t born yesterday.  Just writing tests isn’t sufficient.  The tests have to be good.  And, so you light upon another piece of salvation: measuring code coverage.  This way, not only do you know that developers are writing tests, but you know that you’re covered.  Literally.  If 95% of your code base is covered, that probably means that you’re, like, 95% good, right?  Okay, you realize it’s maybe not quite that simple, but, still, it’s a nice, comforting figure.
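
To see why it’s not quite that simple, consider this tiny illustration of my own (not from the original post): a test that earns 100% line coverage of the method it exercises while verifying nothing at all.

using Xunit;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_Executes()
    {
        // Executes, and therefore "covers," every line of Add,
        // but asserts nothing about the result.
        Calculator.Add(2, 2);
    }
}

A coverage tool will happily count Add as covered, which is exactly why the percentage alone is comforting rather than conclusive.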

Conversely, if you’re down at, say, 20% coverage, that’s an alarming figure.  That means that 80% of your code is not covered.  It’s the Wild West, and who knows what’s going on out there?  Right?  So the answer becomes clear.  Task a team member with instrumenting your build to measure automated test coverage and then dump it to a readout somewhere that you can monitor.

Time to call the code quality issue solved (or at least addressed), and move on to other matters.  Er, right?

I originally wrote this post for the NDepend blog.  Click here to read the rest.

Salary Negotiation without Bridge Burning

Before the regular post, a bit of housekeeping.  I’m getting married tomorrow and then heading off for a honeymoon where I’ll be largely off the grid.  I’ve scheduled posts to go out during that time and social media blasts to announce them, but in case things get weird, know that the ship is on auto-pilot.  If you tweet/message/email me or submit questions, know that I won’t be back home and playing catch-up until late September.

With that out of the way, my last post before leaving will be about something people frequently ask me.  This actually isn’t a reader question submission, per se, but I’m asked about it enough that it might as well be.  If you’ll recall, a while back, I talked about how to negotiate with your employer by suggesting that you should negotiate for non-monetary perks with a lot more value than whatever pittance you’ll claw out of them.  But what if you really want or need more money?

I was reading this post on Simple Programmer by Xavier Morera the other day.  In it, he mentioned walking away from a job and having them offer to double his salary, and this struck a chord with me as a valuable lesson in how to negotiate with employers if what you really want is more money.  You’re probably thinking that a doubling of salary sounds outlandish and, while that is unusual, it’s not as crazy as you think, because Xavier’s circumstances are quite probably different from yours: he wasn’t looking for the money and walked away from it.  You’re looking for it.

So how can you set yourself up to negotiate when you really want that money and you want to stick around the company for a while?

Remedial Opportunist Raise Negotiation

Something I still see people do that just makes me cringe is to secure a competing offer from another company and use it as raise leverage.  Often this happens after someone unsuccessfully pushes for more money, but they might just opportunistically go out and do it.  This is, strictly speaking, an opportunist move.  (If you’re not familiar with my definition of the company hierarchy, you should read about it to understand what I mean by “opportunist.”)  But it’s a move by someone who is bad at being an opportunist and probably destined for flameout followed by checked-out pragmatism.  Opportunists are risk tolerant, but this type of leveraging is an extremely short-term, prospect-killing play.

Put yourself in a manager’s shoes and imagine the conversation.  Bob saunters in, sits down and says, “so, I just got a really tempting offer from Initrode and it’s actually for 10K a year more than I make here.  I’d really like to stay, but it’s hard to leave that money on the table.  If you could match it… look, I’m sorry to put you in this spot, but it just sort of happened.”  What’s your next thought?

I’ll tell you what mine is in this situation.  “Yeah, it just ‘happened,’ huh, Bob?  You tripped, fell, called in sick here a couple of times, got dressed up in a suit, scheduled multiple rounds of interviews with several people, received an offer letter, negotiated to a final offer and walked in here with it to show me?  Yeah, that’s a crazy coincidence!”  Bob is wearing a smarmy smile and insulting my intelligence.  This didn’t “just happen” — it was a calculated piece of leverage, executed at a time when Bob leaving would be an operational problem.  Bob’s strongest case for increased compensation was soft blackmail.
