DaedTech

Stories about Software

Please Don’t Recycle Local Variables

I think there’s a lot of value to the conservation angle of the green movement. In general, it’s a matter of efficiency–if you can heat/light/whatever your house with the same quality of life, using less energy and fewer resources, that’s a win for everyone. This applies to a whole lot of things beyond just eco-concerns, however. Conserving heat when you’re cold, conserving energy when you’re running a marathon, conserving your dollars when making a budget–all good ideas. Cut down, conserve, reuse when you can.

Except please don’t do it with your local variables. For example:

public void DoSomeStuff()
{
    int count = 0;

    foreach (var customer in Customers)
    {
        DoSomeStuffToCustomer(customer);
        if (customer.DidTheRightStuffHappen)
            count++;
    }

    Console.WriteLine(count);
    count = 0;

    foreach (var machine in Machines)
    {
        DoSomeStuffToMachine(machine);
        if (machine.DidTheRightStuffHappen)
            count++;
    }

    Console.WriteLine(count);
}

Here, we initialize a local variable, count, and use it to keep track of the results of some processing of customers. When we’re done, we reset count and use it to keep track of the apparently unrelated concept of machines. What I’m saying is that there shouldn’t be just one count, but rather customerCount and machineCount.

Does this seem like nitpicking? You could certainly make that argument, but this code is not going to age well. First of all, this method should clearly be two methods, so we’re starting right off the bat with a bit of technical debt. It would be cleaner if each loop had its own method.

But an interesting thing happens when we point an automated refactoring tool at it–the tool wants return values or input parameters. Yikes, that was unexpected, so we just move on. Later, when the time comes to iterate over movies, we see that there’s a ‘design pattern’ in place, so we modify the code to look like this:

public void DoSomeStuff()
{
    int count = 0;

    foreach (var customer in Customers)
    {
        DoSomeStuffToCustomer(customer);
        if (customer.DidTheRightStuffHappen)
            count++;
    }

    Console.WriteLine(count);
    count = 0;

    foreach (var machine in Machines)
    {
        DoSomeStuffToMachine(machine);
        if (machine.DidTheRightStuffHappen)
            count++;
    }

    Console.WriteLine(count);
    count = 0;

    foreach (var movie in Movies)
    {
        DoSomeStuffToMovie(movie);
        if (movie.DidTheRightStuffHappen)
            count++;
    }

    Console.WriteLine(count);
}

Now this thing should really be split up, so we start selecting parts of it to see what we can refactor. Ew, now we’re getting ref parameters to boot. This thing is getting even more painful to try to refactor, and we’re in a hurry, so no time for that. And to make matters worse, if you add in a few other aggregator variables this way, you’ll start to have all kinds of barriers in place when you want to pull this thing apart, such as crazy sets of out parameters. I’ve posted before about how I feel about ref and out.
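
To make the pain concrete, here’s a sketch of the kind of method an extract-method refactoring will generate for one of these loops while count is shared (the method name is mine, and the exact output varies by tool; some will offer out instead of ref):

private void ProcessMachines(ref int count)
{
    count = 0;

    foreach (var machine in Machines)
    {
        DoSomeStuffToMachine(machine);
        if (machine.DidTheRightStuffHappen)
            count++;
    }

    Console.WriteLine(count);
}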

All of this mounting technical debt could easily be avoided by giving each loop its own count variable. Having the loops recycle the same one creates a compile-time dependency between what’s going on in each loop and what happened in the loop before it, even though no other such dependencies are in evidence. In other words, recycling this local variable is the only thing creating coupling in your code–there’s no logical reason for it.
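
For contrast, here’s a minimal sketch of the decoupled version (the method names are mine). With each loop owning its own count, each one extracts cleanly into a parameterless method:

public void DoSomeStuff()
{
    WriteCustomerCount();
    WriteMachineCount();
    WriteMovieCount();
}

private void WriteCustomerCount()
{
    int customerCount = 0;

    foreach (var customer in Customers)
    {
        DoSomeStuffToCustomer(customer);
        if (customer.DidTheRightStuffHappen)
            customerCount++;
    }

    Console.WriteLine(customerCount);
}

// WriteMachineCount() and WriteMovieCount() follow the same pattern.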

Recycling a local this way is the height of procedural programming and of baking in the temporal dependencies that I cautioned you to avoid here. It’s a completely useless dependency that will inhibit refactoring and dirty up your code in a hurry. It may not seem like much yet, but this will become a huge pain point later, as the lines of code in this method balloon from the dozens to the hundreds and you rely heavily on automated tools to help with cleanup. Flag variables used over and over in sequence throughout a method are like pebbles in your shoe when you’re trying to refactor.

So my advice is to avoid this practice completely. There’s really no advantage to coding this way and the potential downside is enormous.

Reverential Practice

During the 1950s, a then-unknown journalist named Hunter Thompson briefly held a job working for Time Magazine. During his tenure there, he used a typewriter to type the text, verbatim, of novels by Ernest Hemingway and F. Scott Fitzgerald in order to feel what it was like to write a novel in the literary voice of those great authors. Thompson would go on to write many articles and some books of his own, including the popular Fear and Loathing in Las Vegas, which was eventually made into a movie starring Johnny Depp and Benicio Del Toro. He is also the inventor of a wild, manic style of ‘reporting’ known as Gonzo Journalism in which the author is the story as much as he reports on the story.

If you type out A Farewell to Arms or The Great Gatsby, I think it’s pretty unlikely that you’ll be the next Hemingway, Fitzgerald, or Thompson. I mean, no offense, but the odds are against that. But to undertake the exercise at all, you must have an incredible amount of appreciation for the author’s craft and an intense desire to carefully study and learn the habits of success and ingenuity. And while you may not strike the big time, I have a hard time imagining that you’ll be worse off for doing so.

In our line of work as programmers, there’s a very important difference between writing the code for an application and writing a novel. A novel is intended to be utterly unique and creative, and it serves as its own end. A program is intended to get the job done and uniqueness is neither necessary nor particularly desirable. (It’s not undesirable–it just doesn’t matter). But in both cases, someone interested in getting inside the mind of success could do a full-on emulation of a renowned practitioner. As a programmer, you could do your best Hunter Thompson and bang out an early version of the Linux kernel, emulating Linus Torvalds and his success at crafting a product.

There are certainly barriers to doing this that don’t exist when it comes to novels. A sizable chunk of software is proprietary, so you can’t see the code to emulate it. The code, language, libraries and platform might be so hopelessly dated that finding hardware on which to run it proves interesting. And if you’re on the cusp of being fired for misappropriating company time, it probably isn’t advisable. But still, imagine what that would be like…

I call this “reverential practice,” though it isn’t practice in the true sense. It’s more of an homage. If you look up practice in the dictionary (the definition we’re using here, anyway), you’ll see that it requires repetition for skill acquisition. This clearly isn’t that, since you’d be doing it only once. I say “reverential practice” as a nod to, and a distinction from, a term I like that pops up in software blogs and talks: “deliberate practice.” I like the term “deliberate practice” because it distinguishes between doing your job and honing your skills via drilling and focusing deliberately on areas that need improvement. (The term does skew a little heavily toward a borderline-weird equation of software development with performance art, but who am I to begrudge people literary and dramatic devices?) And now I like the term “reverential practice” because it invokes the same connotation of specific attempts to improve, but via unusual amounts of respect for the approach another has taken.

I’m not suggesting that you go out and do this, but it would certainly be an interesting experiment. If you did it, what would you learn from it? Can you do it on a more micro-level in which you just ask someone prominent in the field for a little application they’ve written and then write it yourself? Would it work with just the finished product, or would it be better to see a screencast of them coding and type along with them, extracting methods and deleting things that maybe weren’t right after all? And, most importantly, would it help you improve at all, basking in the reflected successes of another, or would it be a pure act of quasi-religious Programmer Work Ethic?

I don’t know, and there’s a pretty good chance that I’ll never know. But hopefully that’s some interesting food for thought on this Friday. Cheers!

How We Get Coding Standards Wrong

The other day, I sat in on a meeting where a large-ish group was discussing “standards” for their particular area of software development. I have the word standards in quotes because, by design, there wasn’t a clear definition as to what sorts of standards they would be; it was an open-ended exercise. The standard could cover anything from variable casing to development practices and principles to holistic approaches. When asked for my input, I was sort of bemused by the process, and I said that I didn’t really have much in the way of an answer. I declined to elaborate much more on that since I wasn’t interested in derailing the meeting in any way, but it did get me to thinking about why the exercise seemed sort of futile to me.

I generally have a natural leeriness when it comes to coding and development standards and especially activities designed to flesh those out, and in this post I’d like to explore why. It isn’t that I don’t believe standards should exist or that I believe they aren’t important. It’s just that I think we frequently miss the point and create standards out of some sense that it’s The Right Thing, and thus create standards that are pointless or even detrimental.

Standards by Committee Anti-Pattern

One problem with defining standards in a group setting is that any group containing some socially savvy people is going to gravitate toward diplomacy. Contentious and arbitrary subjects (so-called “religious wars”) like camel case versus Pascal case or where the bracket after a function goes will be avoided in favor of things upon which a consensus may be reached. But think about what’s actually happening–everyone’s agreeing that the things that everyone already does should be standardized. This is a fairly vacuous exercise in bureaucracy, useful only in the hypothetical realm where a new person comes on board and happens to disagree with something upon which twenty others agree.

People doing this are solving a problem that doesn’t exist: “how do we make sure everyone does this the same way when everyone’s currently doing it the same way?” It also tends to favor documenting current process rather than thinking critically about ideal process.

Let’s capture all of the stuff that we all do and write it down. Okay, so, coding standards. When working on a .NET project, first drive to the office. Then, have your keycard ready to get in the building. Next, enter the building…

Obviously this is silly, but hopefully the point hits home. The simple fact that you do something or that everyone in the group does something doesn’t mean that it’s worth capturing as trainable knowledge and enforcing on the group. And yet this is a direction I frequently see groups take as they get into a groove of “yes, and” when discussing standards. It can just turn into “let’s make a list of everything we do.”

Pointless Homogeneity

The concept of capturing the intersection of everyone’s approach and coding style dovetails into another problem with groups hashing out standards: a group-think bias. Slightly different from the notion that everything common should be documented, this is the notion that everything should be common. For instance, I once worked in a shop where developers were all mandated to use the same diff tool. I’m not kidding. If anyone bothered with a justification for this, I don’t recall what it was, other than some nod to pointless standards.

You can take this pretty far. Imagine demands that you use the same syntax highlighting colors as your peers or that you keep your file system organized in the same way as everyone else. What does this have to do with the code you’re producing? Who knows…

It might seem like the kind of thing where you should just indulge the harmless control freak driving it or the group that dreams it up as a unit, but this runs the risk of birthing a toxic culture. With everything, however inconsequential, homogenized, there is no room for creative thinkers to innovate with new approaches.

Make-Work Tasks

Another risk you run when coming up with standards is to create so-called standards that amount to codifying and mandating busy-work. I’ve made my evolving opinion of comments in code quite clear on a few occasions, and I consider them to be an excellent example. When you make “comment every method” a standard, you’re standardizing the procedure (mindlessly adding comments) and not the goal (clarity and communication).
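
To illustrate with a made-up example (the Customer type and repository call here are hypothetical), this is the kind of comment a “comment every method” mandate reliably produces. It satisfies the standard while communicating nothing that the signature doesn’t already say:

/// <summary>
/// Gets the customer.
/// </summary>
/// <param name="customerId">The customer id.</param>
/// <returns>The customer.</returns>
public Customer GetCustomer(int customerId)
{
    return _customerRepository.FindById(customerId);
}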

There are plenty of other examples one might dream up. The silly mandate of “sort and organize usings” that I blogged about some time back comes to mind. This is another example of standardizing pointless make-work tasks that provide no substantive benefit. In all cases, the problem is that you’re not only asking developers to engage in brainless busy-work–you’re codifying it as an official mandate.

Getting Too Specific

Another source of issues that I’ve seen in the establishment of standards is a tendency to get too specific. “What sort of convention should we use when we declare a generic interface below an enumeration inside of a nested class?” Really? Does that come up often enough that it’s important for everyone to get on the same page about how to approach it?

I recognize the human desire for set closure; we don’t like it when a dresser is missing a drawer or when we haven’t collected the whole set, but sometimes you’ve just got to let it go. We’re not the IRS–it’s going to be alright if there are contingencies that we haven’t covered and oddball loopholes that we haven’t addressed.

Missing the Point

For me, this is the biggest one. Usually standards discussions are about superficial programming concerns rather than substantive ones, and that’s unfortunate. It’s the aforementioned camel-versus-Pascal-case wars, or where to put brackets and which kinds to use. To var or not to var? Should constants be all caps? If an interface is in a forest and doesn’t have an “I” in front of its name, is it still an interface?

I understand the benefit of consistency in naming, casing, and other syntactic considerations. I really do, in spite of my tendency to be dismissive and iconoclastic on this front when discussing them. But, first off, let’s not pretend that there really is a right way with these things–there’s just the way that you’re used to doing them. And, more importantly, let’s not pretend that this is really all that important in the grand scheme of things.

We use consistent casing and naming so that a reader of the code can tell at a glance whether something is a field or a local variable or whether something is a method or a property or a constant. It’s really about promoting readability, which, in turn, is about maximizing maintainability. But you know what’s much harder on maintainability than Jones’s great constant casing blunder of 2010 where he forgot to use ALL CAPS? Writing bad code.

If you’re banging out behemoth methods with control statements eight deep, all of the camel case in the world isn’t going to make your code readable. A standard mandating that all such methods be prepended with “yuck” might help, but the real thing that you need is some standards about writing clean code. Keeping methods and classes small and focused, principles like DRY and SOLID, and other good design principles are much more important standards to which to aspire, but they’re often less concrete and harder to enforce. It’s much easier and more rote for a code reviewer to look for casing issues or missing comments than to analyze code for good software practice and object-oriented design. The latter is often less cut-and-dry and more a matter of degrees, so it’s frequently glossed over in favor of more tangible, simple things. Problem is, those tangible, simple things really aren’t all that important to the health of your applications and projects over the long haul.
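
As a quick illustration (every type and name here is hypothetical), compare a method that passes any casing and naming check you like but remains painful, with the version that a “keep methods small and flat” standard would encourage:

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public bool IsValid { get; set; }
    public bool HasActiveCustomer { get; set; }
    public decimal Total { get; set; }
}

public class OrderReport
{
    // Flawless casing and naming, and still arrow-shaped and hard to scan:
    public int CountShippable(IEnumerable<Order> orders)
    {
        int shippableCount = 0;
        foreach (var order in orders)
        {
            if (order.IsValid)
            {
                if (order.HasActiveCustomer)
                {
                    if (order.Total > 0)
                    {
                        shippableCount++;
                    }
                }
            }
        }
        return shippableCount;
    }

    // The substantive fix has nothing to do with casing:
    public int CountShippableCleanly(IEnumerable<Order> orders)
    {
        return orders.Count(IsShippable);
    }

    private bool IsShippable(Order order)
    {
        return order.IsValid && order.HasActiveCustomer && order.Total > 0;
    }
}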

It’s All Just Premature Optimization

The common thread here is that all of these standards anti-patterns result from solving non-existent problems. If you have some collection of half-baked standards at your company that go on for some pages and then say, “after that, follow the Microsoft standards,” imagine how they came about. I bet a few of the group’s original engineers or most senior people had a conversation that went something like, “We should probably have some standards.” “Yeah, I guess… but why now?” “I dunno… I think it’s, like, what you’re supposed to do.”

I suspect that if you did a survey, a lot more standards documents have started with conversations like that than with conversations about hours lost to maintenance and difficulty reading code. They are born out of cargo-cult practice rather than a necessity to solve some problem. Philosophically, they start as solutions in search of a problem rather than solutions to actual problems.

The situation is complicated by the fact that adoption of certain standards may have solved real problems in the past for developers on the team, and they’re simply doing the smart thing and carrying their knowledge forward. The trouble is that not all projects face the same problems. When discussing approaches, start with abstract and general abiding principles like SOLID and DRY and take it from there. If half of your team uses camel case and the other half Pascal and it’s causing communication and maintenance difficulties, flip a coin and set a standard. Repeat as necessary with other standards to keep the project moving and humming. But don’t make them up just for the sake of doing so. You wouldn’t start writing random code that may never solve any actual problem, so why create a standard that way?

Optimizing Proto-Geeks for Business

In a recent post, I talked about the importance of having Proto-Geeks in your software group rather than Loafers and the toxic impact of too many Loafers in the group. If you’ll recall, Proto-Geeks are automaters (in other words, developers) who are enthusiastic about new technologies, while Loafers don’t much care for them and have a purely utilitarian desire to automate: they automate only as much as it takes to maximize their benefit-to-effort ratio.

One thing that I mentioned only briefly in the last post was the idea that Loafers, in spite of having no love for new technologies or ideas, would have the best business sense. Where Proto-Geeks might go bounding off on unprofitable digressions simply for the sake of solving technological puzzles, Loafers will keep their eyes on the prize. I dismissed this as an issue by saying that Loafers were locally maximizing and self-interested rather than being concerned with the best interests of the group as a whole. In retrospect, I think that this is a bit of an oversimplification of both the motivations of the Loafer and the best way to navigate the balance between interest in new technology and conservative business sense.

Let me be clear: the best approach is to have as few Loafers as possible. So the real question is how to rein in Proto-Geeks and get them to have business sense. And I think the answer is a two-pronged one: use gamification strategies, and do away with the tired notion that programmers don’t “know business”–the one where you stick them in a room, slide pizza under the door, and get software in return at some point.

No More Project Managers

Typically, organizations create the role of “Project Manager” and “Software Developer” and, along with these roles, create the false dichotomy that the former “does business” and the latter “does technology.” Or, in the parlance from these posts, the former is a Fanboy and the latter is a Proto-Geek or a Loafer. There’s an interesting relationship between the project manager Fanboys and the Loafers, as compared with the PMs and the Proto-Geeks.

Specifically, PMs tend to like and identify with Loafers and be annoyed by Proto-Geeks. Why would Fanboy identify with Loafer? That doesn’t seem to make a lot of sense, given that they’re in opposite quadrants. Fanboys root for technology while not being particularly adept at or inclined toward automation on their own, while Loafers are leery of technology but reasonably good at automating things. So, what common ground do these archetypes find? Well, they meet in two ways:

  1. Because of the division of labor, PMs have to root for automation without doing it themselves, which means rooting for and depending on automaters.
  2. PMs that got their start in programming used to be or would have been Loafers.

If you consider the canonical question posed to Padawan developers fresh out of school at an annual review, “do you want to be on the technical track to architect or the project management track,” you’re basically asking “are you a Proto-Geek or a Loafer?” If that seems like a harsh parallel to draw, think about what each response means: “I love technology and want to rise to the top of my craft,” versus, “programming is really just a means to a different end for me.” These are cut-and-dried responses for their respective quadrants.

So while you have one breed of Loafer that loosely corresponds to Lifer and just wants to bang out code and collect a paycheck, you have another breed that wants to bang out code just long enough to get a promotion and trade in Visual Studio and Eclipse for MS Project and a 120% booked Outlook calendar. Once that happens, it’s easy to trade in the reluctant automater card for enthusiastic tech Fanboy.

But a tension emerges from this dynamic. On the one hand, you have the people developing along the technical track, getting ahead because of their expertise in the craft for which everyone in the group is hired. On the other hand, you have a group that tends to underperform relatively in the same role, looking opportunistically for a less technical field of expertise and authority. The (incredibly cynical) logical conclusion of this dynamic is the “Dilbert Principle,” which holds that the least competent programmers will be promoted to project management where they can’t do as much damage to the software, consumed as they are with Gantt Charts and Six Sigma certifications and whatnot.

However cynical the Dilbert Principle might be and however harsh the “PM as disinterested programmer” characterization might be, there’s no altering the fact that a very real tension is born between the “tech track” and the “project management” track. This is exacerbated by the fact that “project manager” has “manager” in the title whereas “senior software engineer” or “architect” does not. Seem silly? Ask yourself what life might be like if Project Manager was renamed to the (more accurate) title “Status Reporter/Project Planner,” as observed on the Half Sigma blog:

It is often suggested that the most natural next move “up” is into project management. But the first problem with this situation is that project management sucks too. It doesn’t even deserve to have the word “management” in the title, because project management is akin to management as Naugahyde leather is to leather. Project planner and status reporter is the more correct title for this job. Once you take the word “manager” out of title, it loses a lot of its luster, doesn’t it? Everyone wants to be a manager, but few would want to be a project planner and I daresay no one would want to be a status reporter. Status reporting is generally the most hated activity of anyone who endeavors to do real work.

There’s little doubt that project managers are often de facto developer managers–generally not in the org chart, but certainly in terms of who is allowed by the organization to boss whom around. And so there tends to be a very real tension between the top technical talent and the top business-savvy talent. Ace Proto-Geeks resent being ordered around by their perceived inferiors who couldn’t hack it doing “real work” and ace Project Managers hate having to deal with prima donna programmers too stupid in the ways of office politics and business sense to put themselves on track toward real organizational power.

I submit that this is an entirely unnecessary wall to build. You can eliminate it by eliminating the “project manager” role from software groups. To take care of the “project planner” role responsible for perpetually inaccurate Gantt charts and other quixotic artifacts, just go agile. Involve the customer stakeholders directly with the development group in planning and prioritizing features. Really. Your devs are knowledge workers and bright people–you don’t need an entire role to run interference between them and customers. As for the “status reporter” role, come on. It’s 2013. If you don’t have an ALM tool that allows a C-level executive to pull up a snazzy progress-reporting chart automatically, you’re doin’ it wrong.

So the first step is to stop hiring Loafers with the specific intent that they’ll conflict with and rein in the Proto-Geeks. Running a business like the US Congress isn’t a good idea. Split the meta-project and housekeeping tasks among the developers rather than creating a clashing, non-technical position on the team for this purpose and pumping up its title in terms of office politics.

Gamify

So you’ve eliminated the Loafers, but now you need to get your Proto-Geeks to think about the bottom line once in a while. You can round them up and lecture them all you like, and if you take a hardline kind of approach, it might work after a fashion. But I really don’t recommend this because (1) Proto-Geeks are knowledge workers that are very good at gaming metrics, and (2) real talent is likely to leave. You need their buy-in and this requires you to partner with them instead of treating them like assembly line workers.

But the thing that runs a Proto-Geek’s motor is automating things using cool tools and technologies. At their core, they are puzzle solvers and tinkerers and they enjoy collecting a paycheck for figuring out how to get their geeky toys and playthings to solve problems in your organization’s problem domain. You need some way to collect and analyze the performance of machines and workers on your manufacturing floor? Dude, they can totally do that with the latest version of ASP MVC, and they can even use this new open-source site profiler to analyze the performance in realtime, and… what’s that? You only need the site and you don’t want them to get carried away with the profiler and whatever else they were about to say?

Well, just present this need to them as another problem within your domain and give them another puzzle to solve. Challenge them to build a site with as few dependencies as possible or tell them that they get to pick two and only two new toys for the next project or tell them that they can have as many toys as they want, but if they don’t make their deadlines, they don’t get any for the next project.

Another great trend from a business geek perspective is the way that cloud solutions like Azure and Salesforce work, where extra CPU cycles, memory usage, and disk space actually cost small amounts of money in real time. Nothing drives a Proto-Geek to do great things like this level of gamification where he knows–just knows–that after an all-nighter and some Mountain Dew, he can shave $40 per day here and $120 per day there.

These examples are really just off the cuff, so take them with a grain of salt, but the underlying message is important. You don’t need to hire people that are skeptical of any release of VBA after MS Access 2000 or who want to coast through programming at the entry level until they have enough seniority to be project managers in order to have team members focused on making the business profitable. You just need to have innovative and appropriate incentives in place. It’s the Proto-Geek’s job to get excited about technologies and problems and using technologies to solve problems. It’s management’s job to give them incentives that make them want to solve the right problems to ensure profitability.

Subtle Naming and Declaration Violations of DRY

It’s pretty likely that, as a software developer that reads blogs about software, you’ve heard of the DRY Principle. In case you haven’t, DRY stands for “Don’t Repeat Yourself,” and the gist of it is that there should only be one source of a given piece of information in a system. You’re most likely to hear about this principle in the context of copy-and-paste programming or other, similar rookie programming mistakes, but it also crops up in more subtle ways. I’d like to address a few of those today, and, more specifically, I’m going to talk about violations of DRY in naming things while coding.

Type Name in the Instance Name

Since the toppling of the evil Hungarian regime from the world of code notation, people rarely do things like naming arrays of integers “intptr_myArray,” but more subdued forms of this practice still exist. You often see them appear in server-templated markup. For instance, how many codebases contain text box tags with names/ids like “CustomerTextBox?” In my experience, tons.

What’s wrong with this? Well, the same thing that’s wrong with declaring an integer by saying “int customerCountInteger = 6.” In static typing schemes, the compiler can do a fine job of keeping track of data types, and the IDE can do a fine job, at any point, of showing you what type something is. Neither of these things nor anyone maintaining your code needs any help in identifying what the type of the thing in question is. So, you’ve included redundant information to no benefit.

If it comes time to change the data type, things get even worse. The best case scenario is that maintainers do twice the work, diligently changing the type and name of the variable. The worst case scenario is that they change the type only and the name of the variable now actively lies about what it points to. Save your maintenance programmers headaches and try to avoid this sort of thing. If you’re having trouble telling at a glance what the datatype of something is, download a plugin or a productivity tool for your IDE or even write one yourself. There are plenty of options out there without baking redundant and eventually misleading information into your code.
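
Here’s the trap in miniature (the names are hypothetical):

public void DemonstrateNaming()
{
    // The name redundantly encodes the type. If maintenance later changes
    // this to a long, either the name gets updated too (double work) or it
    // lies about what it points to.
    int customerCountInteger = 6;

    // Naming the role instead survives any change to the storage type.
    long customerCount = 6;

    Console.WriteLine(customerCountInteger + customerCount);
}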

Inheritance Structure Baked Into Type Names

In situations where object inheritance is used, it can be tempting to name types according to where they appear in the hierarchy. For instance, you might define a base class named BaseFoo, then a child of that named SpecificFoo, and a child of that named EvenMoreSpecificFoo. So EvenMoreSpecificFoo : SpecificFoo : BaseFoo. But what happens if, during a refactor cycle, you decide to break the inheritance hierarchy or rework things a bit? Well, there’s a good chance you’re in the same awkward position as with the variable renaming in the last section.

Generally you’ll want inheritance schemes to express “is a” relationships. For instance, you might have Sedan : Car : Vehicle as your grandchild : child : parent relationship. Notice that what you don’t have is SedanCarVehicle : CarVehicle : Vehicle. Why would you? Everyone understands these objects and how they relate to one another. If you find yourself needing to remind yourself and maintainers of that relationship, there’s a good chance that you’d be better off using interfaces and composition rather than inheritance.
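
A minimal sketch of the difference (the types are hypothetical):

// Names that express "is a" relationships without restating the hierarchy:
public class Vehicle { }
public class Car : Vehicle { }
public class Sedan : Car { }

// Names that bake the hierarchy in and will lie after any rework:
public class CarVehicle : Vehicle { }
public class SedanCarVehicle : CarVehicle { }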

Obviously there are some exceptions to this concept. A SixCylinderEngine class might reasonably inherit from Engine and you might have a LoggingFooRetrievalService that does nothing but wrap the FooRetrievalService methods with calls to a logger. But it’s definitely worth maintaining an awareness as to whether you’re giving these things the names that you are because those are the best names and/or the extra coupling is appropriate or whether you’re codifying the relationship into the names to demystify your design.

Explicit Typing in C#

This one may spark a bit of outrage, but there’s no denying that the availability of the var keyword creates a situation where having the code “Foo foo = new Foo()” isn’t DRY. If you practice TDD and find yourself doing a lot of directed or exploratory refactoring, explicit typing becomes a real drag. If I want to generalize some type reference to an interface reference, I have to do it and then track down the compiler errors for its declarations. With implicit typing, I can just generalize and keep going.

I do recognize that this is a matter of opinion when it comes to readability, and that some developers are haunted by the Variant type in VB6 or reminded of dynamic typing in JavaScript, but there’s really no arguing that this is technically a needless redundancy. For the readability concern, my advice would be to focus on writing code where you don’t need the crutch of specific type reminders inline. For the bad memories of other languages, I’d suggest trying to be more idiomatic with the languages that you use.
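
Here’s the redundancy in miniature (Foo, IFoo, and GetFoo are placeholders):

public interface IFoo { }
public class Foo : IFoo { }

public class Declarations
{
    private IFoo GetFoo()
    {
        return new Foo();
    }

    public void Demonstrate()
    {
        // "Foo" appears twice; changing the type means touching both sides
        // of the assignment.
        Foo explicitFoo = new Foo();

        // The type is stated once, on the right. Generalize what GetFoo()
        // returns, and this declaration compiles unchanged.
        var implicitFoo = GetFoo();
    }
}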

Including Namespace in Declarations

A thing I’ve seen done from time to time is fully qualifying types as return values, parameters, or locals. This usually seems to occur when some automating framework or IDE goody does it for speculative disambiguation in scoping. (In other words, it doesn’t know what namespaces you’ll have so it fully qualifies the type during code generation to minimize the chance of potential namespace collisions.) What’s wrong with that? You’re preemptively avoiding naming problems and making your dependencies very obvious (one might say beating readers over the head with them).

Well, one (such as me) might argue that you could avoid namespace collisions just as easily with good naming conventions and organization and without a DRY violation in your code. If you’re fully scoping all of your types every time you use them, you’re repeating that information everywhere in a file that you use the type, when just once with an include/using/import statement at the top would suffice. What happens if you have some very oft-used type in your codebase and you decide to move it up a level of namespace? A lot more pain if you’ve needlessly repeated that information all over the place. Perhaps enough extra pain to make you live with a bad organization rather than trying to fix it.
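
The difference in miniature, using a framework type for illustration:

using System.Collections.Generic;

public class NameProvider
{
    // With the using directive above, the namespace is stated exactly once
    // per file, so a namespace move is a one-line fix here:
    public List<string> GetNames()
    {
        return new List<string>();
    }

    // Fully qualified, the same information is repeated at every return
    // type, parameter, and local, and a namespace move means edits
    // scattered throughout the file:
    public System.Collections.Generic.List<string> GetNamesVerbose()
    {
        return new System.Collections.Generic.List<string>();
    }
}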

Does It Matter?

These all probably seem fairly nit-picky, and I wouldn’t really dispute it for any given instance of one or even for the practices themselves across a codebase. But practices like these are death by a thousand cuts to the maintainability of a code base. The more you work on fast feedback loops, tight development cycles, and getting yourself in the flow of programming, the more that you notice little things like these serving as the record skip in the music of your programming.

When NCrunch has just gone green, I’m entering the refactor portion of red-green-refactor, and I decide to change the type of a variable or the relationship between two types, you know what I don’t want to do? Stop my thought process related to reasoning about code, start wondering if the names of things are in sync with their types, and then do awkward find-alls in the solution to check and make sure I’m keeping duplicate information consistent. I don’t want to do that because it’s an unwelcome and needless context shift. It wastes my time and makes me less efficient.

You don’t go fast by typing fast. You don’t go fast, as Uncle Bob often points out, by making a mess (i.e. deciding not to write tests and clean code). You really don’t go fast by duplicating things. You go fast by eliminating all of the noise in all forms that stands between you and managing the domain concepts, business logic, and dependencies in your application. Redundant variable name changes, type swapping, and namespace declaring are all voices that contribute to that noise that you want to eliminate.