DaedTech

Stories about Software

Productivity Add-Ins: Bruce Lee vs Batman

In the last few months, I’ve seen a number of tweets and posts decrying or at least cautioning against the use of productivity tools (e.g. CodeRush, ReSharper, and JustCode). The reasoning behind this is generally some variant of the notion that such tools are more akin to addictive drugs than to sustainable life improvements. Sure, the productivity tool is initially a big help, but soon you’re useless without it. If, on the other hand, you had just stuck to the simple, honest, clean living of regular development, you might not have reached the dizzying highs of the drug, but neither would you have experienced crippling dependency and eventual rock bottom. The response from those who disagree is also common: insistence that use doesn’t equal dependence and the extolling of the virtues of such tools. Predictably, this back and forth usually degenerates into Apple v. Android or Coke v. Pepsi.

Before this degeneration, though, the debate has some fascinating overtones. I’d like to explore those overtones a bit to see if there’s some kind of grudging consensus to be reached on the subject (though I recognize that this is probably futile, given the intense cognitive bias of endowment effect on display when it comes to add-ins and enhancements that developers use). At the end of the day, I think both outlooks are born out of legitimate experience and motivation and offer lessons that can at least lead to deciding how much to automate your process with eyes wide open.

Also, in the interests of full disclosure, I am an enthusiastic CodeRush fan, so my own personal preference falls heavily on the side of productivity tools. I have plugged for it in the past too, though mainly for its static analysis capabilities rather than any code generation. That being said, I don’t actually care whether people use these tools or not, nor do I take any personal affront to someone reading that linked post, deciding that I’m full of crap, and continuing to develop sans add-ins.

The Case Against the Tools

There’s an interesting phenomenon that I’ve encountered a number of times in a variety of incarnations. In shops where there is development of one core product for protracted periods of time, you meet workaday devs who have not actually clicked “File->New” (or whatever) and created a new project in months or years. Every morning they come in at 9, every evening they punch out at 5, and they know how to write code, compile it, and run the application, all with plenty of support from a heavyweight IDE like Eclipse or Visual Studio. The phenomenon that I’ve encountered in such situations is that occasionally something breaks or doesn’t work properly, and I’ve said, “oh, well, just compile it manually from the command line,” or, “just kick off the build outside of the IDE.” This elicits a blank stare that communicates quite effectively, “dude, wat–that’s not how it works.”

When I’ve encountered this, I find that I have to then have an awkward conversation with a non-entry-level developer where I explain the OS command line and basic program execution; the fact that the IDE and the compiler are not, in fact, the same thing; and other things that I would have thought were pretty fundamental. So what has gone wrong here? I’d say that the underlying problem is a classic one in this line of work–automation prior to understanding.
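
For what it’s worth, “compile it manually” doesn’t require anything exotic. Assuming the .NET Framework tools are on your PATH, something like the following does the job (the file and solution names here are placeholders):

csc /target:exe /out:MyApp.exe Program.cs

Or, to kick off the whole build outside the IDE:

msbuild MyApp.sln /p:Configuration=Release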

Let’s say that I work in a truly waterfall shop, and I get tired of having to manually destroy all code when any requirement changes so that I can start over. Watching for Word document revisions to change and then manually deleting the project is a hassle that I’ve lived with for one day too long, so I fire up the plugin project template for my IDE and write something that runs every time I start. This plugin simply checks the requirements document to see if it has been changed and, if so, it deletes the project I’m working on from source control and automatically creates a new, blank one.

Let’s then say this plugin is so successful that I slap it into everyone’s IDE, including new hires. And, as time goes by, some of those new hires drift to other departments and groups, not all of which are quite as pure as we are in their waterfall approach. It isn’t long before some angry architect somewhere storms over, demanding to know why the new guy took it upon himself to delete the entire source control structure and is flabbergasted to hear, “oh, that wasn’t me–that’s just how the IDE works.”

Another very real issue that something like a productivity tool, used unwisely, can create is greatly enhanced efficiency at generating terrible code (see “Romance Author”). A common development anti-pattern (in my opinion) that makes me wince is when I see someone say, “I’m sure generating a lot of repetitive code–I should write some code that generates this code en masse.” (This may be reasonable to do in some cases, but often it’s better to revisit the design.) Productivity tools make this much easier and thus more tempting to do.

The lesson here is that automation can lead to lack of understanding and to real problems when the person benefiting doesn’t understand how the automation works or if and why it’s better. This lack of understanding leads to a narrower view of possible approaches. I think a point of agreement between proponents and opponents of tools might be that it’s better to have felt a pain point before adopting the cure for it rather than just gulping down pain medication ‘preventatively’ and lending credence to those saying the add-ins are negatively habit-forming. You shouldn’t download and use some productivity add-in because all of the cool kids are doing it and you don’t want to be left out of the conversations with hashtag #coderush.

The Case for the Tools

The argument from the last section takes at face value the genuine concerns of those making it and gives them the benefit of the doubt that their issue with productivity tools is truly concern for bad or voodoo automation. And I think that requires a genuine leap of faith. When it comes to add-ins, I’ve noticed a common thread between opponents of them and opponents of unit tests/TDD–often the most vocal and angry opponents are the ones that have never tried the thing in question. This being the case, the waters become a little bit muddied, since we don’t know from case to case whether the opponent has consistently eschewed the tools because he really believes his arguments against them or whether he argues against them to retroactively justify not having learned to use them.

And that’s really not a trivial quibble. I can count plenty of detractors that have never used the tools, but what I can’t recall is a single instance of someone saying, “man, I used CodeRush for years and it really made me worse at my job before I kicked the habit.” I can recall (because I’ve said) that it makes it annoying for me to use less sophisticated environments and tooling, but I’d rather the tide rise and lift all of the boats than advocate that everybody use notepad or VI so that we don’t experience feature envy if we switch to something else.

The attitude that results from “my avoidance of these tools makes me stronger” is the main thing I was referring to earlier in the post when I mentioned “fascinating overtones.” It sets the stage for tools opponents to project a mix of rugged survivalist and Protestant Work Ethic. Metaphorically speaking, the VI users of the world sleep on a bed of brambles because things like beds and not being stabbed while you sleep are for weaklings. Pain is gain. You get the sense that these guys refuse to eat anything that they didn’t either grow themselves or shoot using a homemade bow and arrow fashioned out of something that they grew themselves.

But when it comes to programming (and, more broadly, life, but I’ll leave that for a post in a philosophy blog that I will never start) this affectation is prone to reductio ad absurdum. If you win by leaving productivity tools out of your Visual Studio, doesn’t the guy who uses Notepad++ over Visual Studio trump you, since he doesn’t use Intellisense? And doesn’t the person who uses plain old Notepad trump him, since he doesn’t come to rely on such decadent conveniences as syntax highlighting and auto-indentation? And isn’t that guy a noob next to the guy who codes without a monitor, the way that really, really smart people in movies play chess without looking at the board? And don’t they all pale in comparison to someone who lives in a hut at the North Pole and sends his hand-written assembly code via carrier pigeon to someone who types it into a computer and executes it (he’s so hardcore that he clearly doesn’t need the feedback of running his code)? I mean, isn’t that necessary if you really want to be a minimalist, 10th degree black-belt, Zen Master programmer–to be so productive that you fold reality back on itself and actually produce nothing?

The lesson here is that pain may be gain when it comes to self-growth and status, but it really isn’t when the pain makes you slower and people are paying for your time. Avoiding shortcuts and efficiency so that you can confidently talk about your self-reliance not only fails as a value-add, but it’s inherently doomed to failure since there’s always going to be some guy that can come along and trump you in that “disarms race.” Doing without has no intrinsic value unless you can demonstrate that you’re actively being hampered by a tool.

So What’s the Verdict?

I don’t know that I’ve covered any ground-breaking territory except to point out that both sides of this argument have solutions to real but different problems. The minimalists are solving the problem of specious application of rote procedures and lack of self-reliance while the add-in people are solving the problem of automating tedious or repetitive processes. Ironically, both groups have solutions for problems that are fundamental to the programmer condition (avoiding doing things without knowing why and avoiding doing things that could be done better by machines, respectively). It’s just a question of which problem is being solved when and why it’s being solved, and that’s going to be a matter of discretion.

Add-in people, be careful that you don’t become extremely proficient at coding up anti-patterns and doing things that you don’t understand. Minimalist people, recognize that tools that others use and you don’t aren’t necessarily crutches for them. And, above all, have enough respect for one another to realize that what works for some may not work for others. If someone isn’t interested in productivity tools or add-ins and feels more comfortable with a minimalist setup, who are any of us to judge? I’ve been using CodeRush for years, and I would request the same consideration–please don’t assume that I use it as a template for 5,000 line singletons and other means of mindlessly generating crap.

At the end of the day, whether you choose to fight bad guys using only your fists, your feet, and a pair of cut-off sweat shorts or whether you have some crazy suit with all manner of gizmos and gadgets, the only important consideration when all is said and done is the results. You can certainly leave an unconscious and dazed pile of ne’er-do-wells in your wake either way. Metaphorically speaking, that is–it’s probably actually soda cans and pretzel crumbs.

You Need CodeRush

Oops

The other day, I was chatting with some developers and one of them pulled me over to show me the following code:

private void SetLayout(Page page)
{
    if (null == page)
    {
        return;
    }
}

I did a double take and then smirked a bit, pointing out that this was probably dead code. I asked the person who was showing me this to do a “find all references” and there actually was one:

private void mainFrameNavigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    Page page = e.Content as Page;
    if (page != null)
    {
        SetLayout(page);
    }
}

So, we then backed it up a step and discovered that this was an event handler for an event declared in XAML (this whole thing was in a code-behind file). So, every time some navigation event occurred, the code would cast the event’s content as a page, call a method with that page, and then return if the page was null–and also return if the page wasn’t null.

How Did It Come To This?

Looking at the SetLayout method, I quickly developed a hypothesis as to how the code wound up this way. The SetLayout method probably checked the page for null as a precondition prior to performing some sort of action on the page. At some point, that action became undesirable and was removed from the code, but whoever removed it didn’t realize that the precondition check was now meaningless, as was the whole chain of code above it for the event handler. A quick glance through source control confirmed this.
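
To make the hypothesis concrete, here is a sketch of what the method plausibly looked like before the deletion (the ApplyLayoutTemplate call is my invention, standing in for whatever action was actually removed):

private void SetLayout(Page page)
{
    if (null == page)
    {
        return;
    }

    // Deleting this call left the guard clause above (and the whole
    // event handler chain feeding it) behind as dead code.
    ApplyLayoutTemplate(page); // hypothetical stand-in for the removed action
}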

Here’s what this looks like in my IDE:

Do you see the dashes on the right, next to the scroll bar? Those are the issues in the CodeRush “issues list”. Do you see the squiggly underline of “SetLayout” (hard to see at this resolution, but you can zoom in or open the image in a new tab if you want)? That means there’s an issue here. In the interests of full disclosure, this is actually to tell me that the method could be static (it operates on no instance member of the class) rather than to tell me that the method has no effect on anything, but the important thing is that it’s underlined. It draws my attention to the method instead of encouraging me to skip over it while looking for something else here. And, as soon as this method catches anyone’s attention, it will obviously be deleted.

The Value of Productivity Tools

In general, the CodeRush issues list is a great companion. It alerts me to dead code, undisposed disposables, redundant statements, and plenty of other things. If you pause any time it tells you about an issue, you will become better at programming, and you will start to recognize more problems on your own. It isn’t necessarily superior to using, say, StyleCop or any other particular tool, but it is very convenient in that it paints the problems right inline in the IDE. They’re quite noticeable.

CodeRush’s competitor, ReSharper, does this as well. I plug for CodeRush because I use it and I love it, but if you get ReSharper, that’s fine–just get something. These productivity tools are game changers. When I looked at this code in someone else’s IDE, there was no squiggly and no immediately obvious problem if you were casually scrolling through the class. But in my IDE, my eyes are drawn right to it, allowing and in fact demanding that I fix it. This sort of boy-scouting makes you better at writing your own code and better at fixing other people’s.

I forget what the license costs, but your employer can afford it. Heck, if they’re too cheap, you can afford it. Because what you can’t afford is to have code checked in under your name that looks like the code in this post. That’s no good, and it’s avoidable.

TDD and CodeRush

TDD as a Practice

The essence of Test Driven Development (TDD) can be summarized most succinctly as “red, green, refactor”. Following this practice will tend to make your code more reliable, cleaner, and better designed. It is no magic bullet, but the you that practices TDD will be a better programmer than the you that doesn’t. However, a significant barrier to adoption of the practice is the (justifiable) perception that you will go slower when you start doing it. This is true in the same way that sloppy musicians can get relatively proficient while staying sloppy, but if they really want to excel, they have to slow down, perfect their technique, and gradually speed things back up.
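
For anyone who hasn’t seen the rhythm in action, here is a minimal sketch of one cycle using MS Test (the class and member names are purely illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTest
{
    [TestMethod]
    public void Add_Returns_Sum_Of_Operands()
    {
        var calculator = new Calculator();

        // Red: this fails (or won't even compile) until Add exists and works.
        Assert.AreEqual(7, calculator.Add(3, 4));
    }
}

public class Calculator
{
    // Green: the simplest thing that passes. Refactoring comes next,
    // with the test as a safety net.
    public int Add(int first, int second)
    {
        return first + second;
    }
}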

My point here is not to proselytize for TDD. I’m going to make the a priori assumption that testing is better than not testing and that TDD is better than not doing TDD, and leave it at that. My point here is that making TDD faster and easier would grease the skids for its broader adoption and continued practice.

My Process with MS Test

I’ve been unit testing for a long time, testing with TDD for a while, and also using CodeRush for a while. CodeRush automates all sorts of refactorings and generally makes development in Visual Studio quicker and more keyboard driven. As much as I love it and stump for it, though, I had never gotten around to using its test runner until recently.

I saw the little test tube icons next to my tests and figured, “why bother with that when I can just hit Ctrl-R, T to execute my MS Test tests in scope?” But a couple of weeks ago, I tried it on a lark, and I decided to give it a fair shake. I’m very glad that I did. The differences are subtle, but powerful, and I find myself a more efficient TDD practitioner for it.

Here is a screenshot of what I would do with the Visual Studio test runner:

I would write a unit test, and hit Ctrl-R, T, and the Visual Studio test results window would pop up at the bottom of my screen (I auto-hide almost everything because I like a lot of real estate for looking at code). For the first run, the screenshot there would be replaced with a single failing test. Then, I would note the failure, open the class and make changes, switch back to the unit test file, and run all of the tests in the class, producing the screenshot above. When I saw that this had worked, I would go back to the class and look either to refactor or for another failing test to add for fleshing out my class further.

So, to recap, write a test, Ctrl-R, T, pop-up window, dismiss window, switch to another file, make changes, switch back, Ctrl-R, T, pop-up window, dismiss pop-up window. I’m very good at this in a way that one tends to be good at things through large amounts of practice, so it didn’t seem inefficient.

If you look at the test results window, too, you’ll find it’s often awkward to locate your tests if you run more than just a few. They appear in no particular order, so finding particular ones involves sorting by one of the columns in the window. There tends to be a lot of noise this way.

Speeding It Up With CodeRush

Now, I have a new process, made possible by the way CodeRush shows test pass/fail:

Notice the little green checks on the left. In my new process, I write a test, hit Ctrl-T, R, and note the failure. I then switch to the other class and make it pass, at which time, I hit Ctrl-T, L (which I have bound to “repeat last test run”) before going back to the test class. As that runs, I switch to the test class and see that my test now passed (I can also easily run all tests in the file if I choose). Now, it’s time to write my next test.

So, to recap here, it’s write a test, Ctrl-T, R, switch to another file, make changes, Ctrl-T, L, switch back. I’ve completely eliminated dealing with a pop-up window and created a situation where my tests are running as I’m switching between windows (multi-tasking at its finest).

In terms of observing results, this has a distinct advantage as well. Instead of the results viewer, I can see green next to the actual code. That’s pretty powerful because it says “this code is good” rather than “this entry is good, and if you double click it, you can see which code is good”. And, if you want to see a broader view than the tests, you can see green next to the class and the namespace. Or, you can launch the CodeRush test runner and see a hierarchical view that you can drill into, rather than a list that you can awkwardly sort or filter.

Does This Really Matter?

Is shaving off this small an amount of time worth it? I think so. I run tests dozens, if not hundreds, of times per day. And, as any good programmer knows, shaving time off of your most commonly executed tasks is at the heart of optimizing a process. And, if we can shave time off of a process that people, for some reason, view as a luxury rather than a mandate, perhaps we can remove a barrier to adopting what should be considered a best practice–heck, a prerequisite for being considered a professional, as Uncle Bob would tell you.

DXCore Plugin Part 3

In a previous post, I mentioned my ongoing progress with a DXCore plugin. I’d been using it successfully for some time but had to dust it off again last week. Another convention that I’m required to abide by for consistency’s sake, but don’t much care for, is explicit typing of my variables.

That is, I prefer declarations like:

var myTypeWithLongName = new TypeWithALongName();

rather than:

TypeWithALongName myTypeWithLongName = new TypeWithALongName();

I personally find the second to be needlessly syntactically noisy, and I don’t really see any benefit since the implicit (var) typing preserves strong typing and even causes the resulting type to show up in Intellisense. But, when in Rome…

Compared to my previous setup, this was relatively straightforward. Initially, I thought it would be even simpler than it turned out to be, since CodeRush itself supports the explicit/implicit conversion as an individual refactoring. I figured I’d just have to iterate through the file and call something like “Some.CodeRush.Namespace.MakeExplicit(myVariable)”.

Well, it didn’t turn out to be that easy (at least not that I found), but it did turn out to be relatively simple. Eliding the part about actually finding the variables, I was able to accomplish what I wanted with this method:

/// <summary>Convert the local variable declaration to FSG's standard of explicit typing</summary>
/// <param name="variable">Variable object to convert</param>
/// <returns>A string with the new type declaration</returns>
public string ConvertDeclaration(Variable variable)
{
    var myImplicit = variable as ImplicitVariable;
    string myString = variable.MemberType;
    if (myImplicit != null)
    {
        // Standard allocation: the explicit type is the type being constructed.
        var myObjectCreation = myImplicit.Expression as ObjectCreationExpression;
        if (myObjectCreation != null && myObjectCreation.ObjectType != null)
        {
            myString = myObjectCreation.ObjectType.ToString();
        }

        // Array creation is a special case with its own expression type.
        var myArrayCreate = myImplicit.Expression as ArrayCreateExpression;
        if (myArrayCreate != null)
        {
            myString = myArrayCreate.BaseType.ToString();
        }
    }
    return myString;
}

This isn’t actually my finished ‘production’ code, but it’s easier to display in this format. I didn’t like the size of that method, so I created an abstract ExplicitExpressionConverter and gave it inheritors ObjectCreationExpressionConverter and ArrayCreateExpressionConverter, using a factory method creation pattern to figure out which to create based on the variable. But while that abstraction makes the code cleaner, it makes the logic harder to document in a blog post.
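
In rough strokes, though, the refactored shape looks something like the following. This is a sketch rather than the production code, with member names guessed from memory rather than copied out of the plugin:

public abstract class ExplicitExpressionConverter
{
    protected readonly ImplicitVariable Variable;

    protected ExplicitExpressionConverter(ImplicitVariable variable)
    {
        Variable = variable;
    }

    // Each inheritor knows how to extract the explicit type from its
    // particular flavor of expression.
    public abstract string ConvertToExplicitType();

    // Factory method: inspect the expression and hand back the right
    // converter, or null to signal "fall back to variable.MemberType".
    public static ExplicitExpressionConverter Create(ImplicitVariable variable)
    {
        if (variable.Expression is ObjectCreationExpression)
            return new ObjectCreationExpressionConverter(variable);
        if (variable.Expression is ArrayCreateExpression)
            return new ArrayCreateExpressionConverter(variable);
        return null;
    }
}

public class ObjectCreationExpressionConverter : ExplicitExpressionConverter
{
    public ObjectCreationExpressionConverter(ImplicitVariable variable) : base(variable) { }

    public override string ConvertToExplicitType()
    {
        return ((ObjectCreationExpression)Variable.Expression).ObjectType.ToString();
    }
}

public class ArrayCreateExpressionConverter : ExplicitExpressionConverter
{
    public ArrayCreateExpressionConverter(ImplicitVariable variable) : base(variable) { }

    public override string ConvertToExplicitType()
    {
        return ((ArrayCreateExpression)Variable.Expression).BaseType.ToString();
    }
}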

So, anyway, the idea here is that you need to convert the variable you find into the appropriate type of expression. An object creation expression is standard allocation, and the array create expression is a special case. There are other special cases as well, such as null coalescing (var myFoo = someClass.GetAFoo() ?? new Foo()) and method call expression (var myFoo = someClass.GetAFoo()), but I have not yet figured out how to obtain the return type from those expressions. I’ll probably add a part 4 if/when I do.

DXCore Plugin Part 2

In the previous post on this subject, I created a basic DXCore plugin that, admittedly, had some warts. I started using it and discovered that there were subtle issues. If you’ll recall, the purpose of this was to create a plugin that would convert some simple stylistic elements between my preferences and those of a group that I work with. I noticed two things that caused me to revisit and correct my implementation: (1) if I did more than one operation in succession, things seemed to get confused and garbled in terms of variable naming; and (2) I was not getting renames in all scopes.

The main problem that I was having was the result of calling LanguageElement.Document.SetText(). Now, I don’t really understand the pertinent details of this because the API is a black box to me, but I believe the gist of it, based on experience and poking around blog posts, is that calling this explicitly causes the document to get out of sync if you chain renames together.

For instance, take the code:

void Foo()
{
    string myString = "asdf";
    int myInt = myString.Length;
}

The way that DXCore’s Document API appears to process this is with a concept called “NameRange.” That is, there are various ways you can use LanguageElements and other things in that inheritance tree to get a particular token in the source file: the string type, the Foo method signature, the “myString” variable, etc. When you actually want to change a name, you need to find all references to your token and do an operation that essentially takes the parameters (SourceRange currentRange, string newText). In this fashion, you might call YourClass.Document.SetText(myStringDeclaration.NameRange, "myNewString");

Assuming that you’ve rounded up the proper LanguageElement that corresponds to the declaration of the variable “myString”, this tells DXCore that you want to change the text from “myString” to “myNewString”. Conceptually, what you’re changing is represented by some int parameters that I believe correspond to row and column in the file, a la a 2D array. So, if you make a series of sequential changes to “myString” (first lengthen it, then shorten it, then lengthen it again), strange stuff starts to happen. I think this is the result of the actual allocated space for this token getting out of whack with what you’re setting it to. It sometimes starts to gobble up characters after the declaration, like Windows when you hit the “insert” key without realizing it. I was winding up with stuff like “string myStringingingingd;”

So, what I found to be a workable fix to this problem was to use a FileChangeCollection object to aggregate the changes I wanted to make as I went, rather than applying each one immediately. FileChangeCollection takes a series of FileChange objects, each of which wants to know the path of the class, the range of the proposed change target, and the new value. I aggregated all of my changes in this collection and then flushed them at the end with CodeRush.File.ChangeFile(_targetClass.GetSourceFile(), _collection); After doing that, I cleared the collection so that my class can reuse it.
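
In sketch form, that aggregate-then-flush approach looks something like this (the helper methods and their names are mine; the FileChange constructor arguments are the ones described above, and I’m assuming garden-variety collection semantics for Add and Clear):

private readonly FileChangeCollection _collection = new FileChangeCollection();

private void AddFileChange(string path, SourceRange range, string newValue)
{
    // Queue the rename rather than applying it immediately.
    _collection.Add(new FileChange(path, range, newValue));
}

private void FlushChanges()
{
    // Apply every queued change in one shot, then reset for reuse.
    CodeRush.File.ChangeFile(_targetClass.GetSourceFile(), _collection);
    _collection.Clear();
}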

This cleared up the issue of inconsistent weirdness in naming. Now I can convert back and forth as many times as I please, or run the same conversion over and over again and then run the other one, and the atomicity of the standards application is preserved. If I run “convert to theirs, convert to theirs, convert to theirs, convert to mine,” the code ends up retaining my standards perfectly, regardless of whose it started with. This is due both to getting the DXCore calls right (at least as far as my testing so far proves) and to my implementation being consistent and unit-tested. However, confidence that the DXCore piece is right allows me to know in the future that, if things get wacky, it’s because I’m not implementing the string manipulation and business rules correctly.

The second issue was that some variables weren’t getting renamed properly, depending on the scope. Things that were directly in methods were fine, but anything in nested LINQ queries or loops or what-have-you was failing. I had previously made a change that I thought took care of this, but it turns out that I had just pushed the problem down a level or two.

This I ultimately solved with some recursion. My conversion functions take an (element, targetScope) tuple, and they add FileChanges for all elements in the target scope. They then call the same function for all scoped children of targetScope with (element, targetScopeChild). If you think of the source as a tree with the root being your scope, intermediate nodes being scopes with children, and leaves being things containing no nested scoping language elements, this walks your source code as if it were a tree, finding and renaming everything.

Here is an example of my recursive function for adding changes to the class field “_collection”, which corresponds to a FileChangeCollection (no worries, I’m renaming that field to something more descriptive right after I finish this post 😀 ):

/// <summary>Abstraction for converting all elements in a target scope</summary>
/// <param name="local">Element whose name we want to convert</param>
/// <param name="target">Target scope in which to do the conversion</param>
private void RecursiveAggregateLocalChanges(LanguageElement local, LanguageElement target)
{
    VerifyOrThrow();

    // Queue a rename for every reference to the local within this scope.
    foreach (IElement myElement in local.FindAllReferences(target))
    {
        var myFieldInstance = CodeRush.Source.GetLanguageElement(myElement);
        if (myFieldInstance != null)
        {
            AddFileChange(_targetClass.GetFullPath(),
                myFieldInstance.NameRange,
                _converter.ConvertLocal(local.Name));
        }
    }

    // Recurse into each child scope (loops, nested queries, etc.).
    foreach (LanguageElement myElement in target.Nodes)
    {
        RecursiveAggregateLocalChanges(local, myElement);
    }
}

You want to preserve “local” as a recursion invariant, since this method is called by the method that handles cycling through all local variables in the file and invoking the recursion to change their names. That is, the root of the recursion is given a single local variable to change as well as the target scope of the method it resides in. From there, you change the local everywhere it appears within the method, then you get all of the method’s child scopes and do the same on those. You keep recursing until you run out of nested scopes, falling out of the recursion at that point, having done your adds.

It does not matter whether you recurse first or last, because CodeRush keeps track of the original element regardless, and even if it didn’t, you’re only aggregating eventual changes here rather than making them as you go, so you don’t wind up losing the root.

Hopefully this continues to be somewhat helpful. I know the DXCore API documentation is in the works, so this is truly the diary of someone who is just tinkering and reverse engineering (with a bit of help from piecing together things on other blogs) to get something done. I’m hardly an expert, but I’m more of one than I was when I started this project, and I find that the most helpful documentation is often that made by someone undertaking a process for the first time because, unlike an expert, all the weird little caveats and gotchas are on display since they don’t know the work-arounds by heart.

I will also update the source code here in a moment, so it’ll be fresh with my latest changes.