DaedTech

Stories about Software

Test Driven Development

All In?

It seems to me that many, if not most, treatises on software engineering best practices these days advocate for Test Driven Development (TDD). I have read about the methodology, both on blogs and in a book that I ordered from Amazon (Test Driven Development By Example). I have put it into practice in varying situations, and I enjoy the clarity that it brings to the development process once you manage to wrap your head around the idea of writing non-compiling, non-functioning tests prior to writing actual code.

That said, it isn’t a process I prefer to follow to the letter. I have tried, and it doesn’t suit the way I do my best development. I have read accounts of people saying that it simply takes some getting used to, that people object to it because they want to “just code,” and that they wind up less productive downstream in the development process because of this, but I don’t think that’s the case for me. I don’t delude myself into thinking that I’m more efficient in the long run by jumping in and not thinking things through. In fact, quite the opposite is true. I prototype with throw-away code before I start actual development (architecture and broad design notwithstanding–I don’t architect applications by prototyping; I’m referring to adding or reworking a module or feature in an existing, architected application or to creating a new small application).

Prototyping and TDD

As far as I can tell, the actual process of TDD is agnostic as to whether or not it is being used in the context of prototyping. One can develop a prototype, a one-off, a feature, or a full-blown production application using TDD. My point in this post is that I don’t find TDD to be helpful or desirable during an exploratory prototyping phase. I am not entirely clear as to whether this precludes me from being a TDD adherent or not, but I’m not concerned about that. I’ve developed and refined a process that seems to work for me and tends to result in a high degree of code coverage with the manual tests that I write.

The problem with testing first while doing exploratory prototyping is that the design tends to shift drastically and unpredictably as it goes along. At the start of the phase, I may not know all or even most of the requirements, and I also don’t know exactly how I want to accomplish my task. What I generally do in this situation is to rig up the simplest possible thing that meets one or a few of the requirements, get feedback on those, and progress from there. As I do this, additional requirements tend to be unearthed, shifting and shaping the direction of the design.

During this prototyping phase, manual and automated refactoring techniques are the most frequently used tools in my toolbox. I routinely move methods to a new class, extract an interface from an existing class, delete a class, etc. I’ve gotten quite good at keeping version control up to date with wildly shifting folder and file structures in the project. Now, if I were to test first in this mode of development, I would spend a lot of time fixing broken tests. I don’t mean that in the sense of the standard anti-unit-testing “it takes too long” complaint, but in the sense that a disproportionate number of the tests I would write early on would be discarded. And while the point of prototyping is to write throw-away code, I don’t see any reason to create more of it than is necessary to arrive at the right design.
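
To make the “extract an interface” step concrete, here’s a minimal sketch in C#. The ReportGenerator and IReportGenerator names are hypothetical, but this is the flavor of change I make over and over during a prototyping session: callers end up depending on the abstraction, which keeps the cost of rewriting or deleting the concrete class low.

// Before the refactoring, callers depended on ReportGenerator directly.
// Extracting an interface lets them depend on the abstraction instead, so the
// concrete class can keep shifting (or get deleted) during prototyping.
public interface IReportGenerator
{
    string Generate(string title);
}

public class ReportGenerator : IReportGenerator
{
    public string Generate(string title)
    {
        return "Report: " + title;
    }
}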

TDD and Prototyping in Harmony

So, here, in a nutshell, is the process that I’ve become comfortable with for new, moderately sized development efforts:

  1. Take whatever requirements I have and start banging out prototype code.
  2. Do the simplest possible thing to satisfy the requirement(s) I’ve prioritized as highest (based on a combination of difficulty, prerequisites and stakeholder preference).
  3. Eliminate duplication, refine, refactor.
  4. Satisfy the next requirement(s) using the same criteria, generalizing specific code, and continuing to refine the design.
  5. Repeat until a rough design is in place.
  6. Resist the urge to go any further with this and start thinking of it as production code. (I usually accomplish this by doing the preceding steps completely outside of the group’s version control.)

At this point, I have a semi-functional prototype that can be used for a demo and for conveying the general idea. Knowing when to shift from prototyping to actual development is sort of an intuitive rather than mechanical process, but usually for me, it’s the point at which you could theoretically throw the thing into production and at least argue with eventual users that it does what it’s supposed to. At this point, there is no guarantee that it will be elegant or maintainable, but it more or less works.

From there, I start my actual development in version control. This is when I start to test first. I don’t open up the target solution and dump my prototype files in wholesale or even piecemeal. By this point, I know my design well, and I know its shortcomings and how I’d like to address them. I also generally realize that I’ve given half of my classes names that no longer make sense and that I don’t like the namespace arrangements I’ve set up. It’s like looking at your production code and having a refactoring wish-list, except that you can (and in fact have to) actually do it.

So from here, I follow this basic procedure:

  1. Pick out a class from the prototype (generally starting with the most abstract and moving toward the ones that depend most on the others).
  2. For each public member (method or property), identify how it should behave with valid and invalid inputs and with the object in different states.
  3. Identify behaviors and defaults of the object on instantiation and destruction.
  4. Create a test class in version control and write empty test methods with names that reflect the behaviors identified in the previous steps.
  5. Create a new class in version control and stub in the public properties and methods from the previous steps.
  6. Write the tests, run them, and watch them fail.
  7. Go through each test and do the simplest thing to make it pass.
  8. While making the test pass, eliminate duplication/redundancy and refine the design.

This is then repeated for all classes. As a nice ancillary benefit, doing this one class at a time can help you catch dependency cycles (i.e. if it’s really the case that class A should have a reference to class B and vice-versa, so be it, but you’ll be confronted with that and have to make an explicit decision to leave it that way).
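
As a concrete sketch of steps 2 through 6, suppose the prototype had contained some kind of ExpenseReport class (a hypothetical name, and I’m using NUnit here purely for illustration). The test class gets behavior-describing method names first, and the production class gets just enough of a stub to compile before the tests are run:

using System;
using NUnit.Framework;

[TestFixture]
public class ExpenseReportTest
{
    // Behavior and defaults on instantiation (step 3).
    [Test]
    public void Total_Is_Zero_For_A_New_Report()
    {
        var report = new ExpenseReport();
        Assert.AreEqual(0m, report.Total);
    }

    // Behavior with valid input (step 2).
    [Test]
    public void Total_Reflects_An_Added_Expense()
    {
        var report = new ExpenseReport();
        report.Add(42.50m);
        Assert.AreEqual(42.50m, report.Total);
    }

    // Behavior with invalid input (step 2).
    [Test]
    public void Add_Throws_On_A_Negative_Amount()
    {
        var report = new ExpenseReport();
        Assert.Throws<ArgumentException>(() => report.Add(-1m));
    }
}

// Step 5: stub in just enough to compile. Most of these tests will fail
// when run (step 6) until the behavior is actually filled in.
public class ExpenseReport
{
    public decimal Total { get; private set; }

    public void Add(decimal amount)
    {
        throw new NotImplementedException();
    }
}

From there, making the negative-amount test pass is a one-line guard clause (step 7), and the cycle continues until the class from the prototype has been recreated, test-first, in the official code base.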

Advantages to the Approach

I sort of view this as having my cake and eating it too. That is, I get to code “unencumbered” by testing concerns in the beginning, but later get the benefits of TDD since nothing is actually added to the official code base without a test to accompany it. Here is a quick list of advantages that I see to this process:

  • I get to start coding right away and bang out exploratory code.
  • I’m not writing tests for requirements that are not yet well understood or perhaps not even correct.
  • I’m not testing a class until I’m sure what I want its behavior to be.
  • I separate designing my object interactions from reasoning about individual objects.
  • I get to code already having learned from my initial design mistakes or inefficiencies.
  • No code is put into the official codebase without a test to verify it.
  • There is a relatively low risk of tests being obsolete or incorrect.

Disadvantages

Naturally, there are some drawbacks. Here they are as I see them:

  • Prototyping can wind up taking more time than the alternatives.
  • It’s not always easy to know when to transition from prototype to actual work.
  • Without tests, regressions can occur during prototyping when a violent refactoring sweeps out some of the good with the bad.
  • Without tests, some mistakes in your prototype design might carry over into the recreated official version.

I think that all of these can be mitigated, and I firmly believe that the advantages outweigh the disadvantages.

When not to do this

Of course, this method of development is not always appropriate. During a defect fixing cycle, I think tried and true TDD works the best. Something is wrong, so write a test that asserts the correct behavior and modify the code until it passes — there’s no need to prototype. This process is also inappropriate if you have all of your requirements and a well-crafted design in place, or if you’re only making small changes. Generally, I do this when I’m tasked with implementing a new feature or project that already has a high-level design and is going to take weeks or months.

Technical Presentations and Understanding the Little Things

An Observation

Today I attended a technical presentation on a domain-specific implementation of some software and a deployment process. The subject matter was relevant to my work, and I watched with interest. While the presentation was, to some degree, dominated by discussion from other attendees rather than pure explanation, I followed along as best I could, even when the discussion was not relevant to me or covered something I already understood.

During the portions of the presentation that were explanation, however, I noticed an interesting trend, and when things went off on tangents, I began to philosophically ponder it. What I noticed was that most of the explanation was procedurally oriented and that a lot of other presentations tend to be like this as well.

This is what we do…

When I say “procedural,” I mean that people often give presentations and explain in this context: “First we do X, then we do Y. Here are A, B, and C, the files that we use, and here are D, E, and F, the files that we output.” This goes on for a while as someone explains what, in essence, amounts to their daily or periodic routine. Imagine someone explaining how to program: “First, I turn on my computer, then I log in, then I start Eclipse, but first, I have to wait a little while for the IT startup scripts to run. Next…”

Generally speaking, this is boring, but not always. That depends on the charisma of the speaker and the audience’s level of interest in the subject. However, boring or not, these sorts of checklist explanations and presentations are not engaging. They explain the solution to a problem without context, rather than engaging the audience’s brain to come up with solutions along with the presenter and eventually to conclude that the presenter’s approach makes the most sense in context. The result of such a procedural presentation is that you brainlessly come to understand some process without understanding why it’s done, if it’s a good idea, or if it’s even coherent in any larger scheme.

Persuasive Instead of Expository

Remember speech class? I seem to remember that there were three main kinds of speeches: persuasive, expository, and narrative. I also seem to remember that expository speeches tended to suck. I think this holds especially true for technical presentations–the procedural presentation I mentioned earlier basically boils down to the format “A is true, B is true, C is true, D is true.” Perhaps that seems like an oversimplification, but it’s not, really. We’re programmers, right? What is the logical conclusion of any presentation that says “We do X, then we do Y, then we do Z”?

So, I think that we ought to steer for anything but expository when presenting. Narrative is probably the most engaging, and persuasive is probably the most effective. The two can also be mixed in a presentation. A while back, I watched some of Misko Hevery’s presentations on clean code. One such presentation that struck me as particularly effective had to do with singletons (he calls them pathological liars). In this talk (and in the linked post), he told a story about setting up a test for instantiating a credit card object and exercising its “charge()” method, only to find out that his credit card had been billed. This improbable story is interesting, and it creates a “Really? I gotta hear this!” kind of feeling. Audience members become engaged by the narrative and want to hear what happens next. (And, consequently, there are probably fewer tangents, since it’s the unengaged audience members who are most interested in quibbling over superior knowledge of procedural minutiae.)

Narrative is effective, but it’s limited in the end when it comes to conveying information. At some point, you’re going to need to be expository in that you’re relating facts and knowledge. I think the best approach is a persuasive one, which can involve narration and exposition as well. That is, I like (and try to give) presentations of the following format: tell a story that conveys a problem in need of solution, explain the “why” of the story (i.e. a little more technical detail about what created the problem and the nuts and bolts of it), explain early/misguided attempts at a solution and why they didn’t work, explain the details of my end solution, and persuade the audience that my solution was the way to go.

This is more or less the format that Misko followed, and I’ve seen it go well in plenty of other presentations besides. It’s particularly effective in that you engage the audience, demonstrate that you understand there are multiple possible approaches, and finally persuade the audience that your approach makes sense and that they would eventually have figured this out on their own as well because, hey, they’re intelligent people.

Why This Resonates

As the presentation I was watching drifted off topic again via discussion, I started to ponder why I believe this approach is more effective with an audience like developers and engineers. It occurred to me that an explanation of the history of a problem and various approaches to solving it has a strong parallel with how good developers and engineers conduct their business.

Often, as a programmer, you run across someone who doesn’t understand anything outside of the scope of their normal procedure. They have their operating system, IDE, build script, etc., and they follow a procedure of banging out code and seeing what happens. If something changes (OS upgrade, IDE temporarily not working, etc.), they file a support ticket. To me, this is the antithesis of what being a programmer is about. Instead of this sort of extreme reactive behavior, I prefer an approach where I don’t automate anything until I understand what it’s doing. First, I build my C++ application from the command line manually, then I construct a makefile, then I move on to Visual C++ with all its options, for instance. By building up in this fashion, I am well prepared for things going wrong. If, instead, all of the building is a black box to someone, they are not prepared. And if you’re using tools on your machine while blissfully ignorant of what they’re doing, can you really say that you know what you’re doing?

However, I would venture a guess that most of the people who are content with the black box approach are not really all that content. It might be that they have other things going on in their lives, that they lack the time to get theoretical, or any number of other reasons. Perhaps it’s just that they learn better by explanation as opposed to experimentation, and they would feel stupid asking how building the application they’ve been working on for two years works. Whatever the case may be, I’d imagine that, all things being equal, they would prefer to know how it works under the hood.

So, I think this mode of speech-giving appeals to the widest cross section of programmers. It appeals to the inveterate tinkerers because they always want, nay, demand, to know the “why” and “how” instead of just the “what.” But it also appeals to the go-along-to-get-along type that is reasonably content with black boxes because, if they’re attending some form of training or talk, they’d rather know the “why” and “how” for once.

So What?

I’d encourage anyone giving a technical talk to keep this in mind. I think that you’ll feel better about the talk and that your audience will be more engaged and benefit more from it. Tell a story. Solve a problem together with the audience. Demonstrate that you’ve seen other ways to approach the issue and that you came to the right decision. Generally speaking, make it a collaborative problem-solving exercise during the course of which you guide the audience to the correct solution.

Software Craftsmanship and the Art of Software

Context

I’m a member of the LinkedIn group “Software Craftsmanship.” I’m not an active member, but I do like perusing the discussion topics. Recently, I read through this discussion on whether software is more of an “art” or a “craft.” This set me to musing a bit, so I thought I would post about it here.

Is Software a Craft?

A lot of discussion in the group seemed to involve meta-discussion about semantics. “What is a craft?” “What is art?” While this may seem like an academic exercise in splitting hairs, I feel that metaphors for process, software design, and pretty much everything that you might want to do are quite important–so much so that I made an earlier blog post about metaphors for software design in which I discussed this subject at length.

On the idea of whether or not software design and development is a craft, I think there are two operating definitions that potentially cloud the issue. One seems to be the dictionary definition that one might expect, where “craft” is something along the lines of a trade or some occupation that requires skill at something. Using this definition, software design is trivially a craft. There’s not a lot of room for argument there. I think the notion of software as a craft becomes more interesting in the context in which it’s adopted by the Software Craftsmanship movement–the idea of medieval craftsmanship in which there was some notion of apprenticeship being required before the aspiring tradesman could venture out into the world on his own. Perhaps naming the movement “Software Guildsmanship” would be more clear, if a little weird.

Software Guilds

At any rate, the guild metaphor is very good for expressing the feeling that there ought to be some standard met by anyone who wishes to develop software. At first blush, we might think that an undergraduate degree in Computer Science is the standard, but there are still plenty of self-taught programmers and people who attend trade school or else somehow transition into software development. And, furthermore, a computer science degree often gives more of a theoretical background as opposed to teaching would-be programmers the ropes of actual software development. In some very real sense, the CS degree is akin to taking courses in E&M physics to become an electrician. When you get done, you can give an excellent treatise on magnetic flux, but you may have no idea whether it’s really necessary to use conduit when connecting a switch to a junction box.

So, at present, there really is no modern-guild equivalent for software development. They do exist in other professions, though. The aforementioned electricians, lawyers, doctors, and certain types of engineers all belong to the modern equivalent of a guild, whether it’s a union with its political clout or some kind of licensing body (generally the functional equivalent of a union, particularly in the case of lawyers). The principles persist–some required standard for admittance (medical board exams), some notion of standardized pricing of services (union labor), repercussions for not cooperating with the explicit or implicit bylaws of the group (being “disbarred”), etc. The question is whether or not this should apply to software development, and I think that most members of the Software Craftsmanship school of thinking believe that it should.

Me, I’m not so sure. I think the intentions are sound, but the ramifications are potentially odd. Anyone who has looked at a particularly poorly designed piece of software probably, in that moment, would love some standardization of what is acceptable and to have the author of the offending code stripped of some kind of software development credentials. But that sort of overlooks all of the modes in which software is developed.

Lawyers, doctors, electricians, etc. are all sanctioned–“guilded,” if you will–because they provide some kind of external service — they act as agents for individuals or organizations. Software development has a wider range of use. Often we write software and sell it as a product, but sometimes we write software for fun, for friends, as a quick and dirty way to automate something, for posterity (FOSS), or internally for an organization who is both “purchaser” and “vendor” in one swoop. Contrast this with the guilded positions that I’ve been mentioning. People don’t practice law at home to make reading spreadsheets easier. People don’t practice medicine for fun. And some of these guilders could do what they do for free or internally for a company, but even if they did, they would act as an agent for someone who can be legitimately harmed by malpractice. If a pro-bono lawyer messes up, his client can still land in jail. If an incompetent electrician wires up his brother’s apartment, the apartment building can burn down.

With software, it’s a lot murkier. There are many scenarios in which software cannot have any negative ramifications for anyone involved. There is no implied responsibility simply because someone has created some piece of software, whereas there is implied responsibility in the guilded professions.

So Is the Metaphor Valid?

All that said, I think the idea of Software Craftsmanship is a valid one. But I think it’s important to remember that this is an imperfect metaphor. I like the idea of an opt-in software guild (i.e. potentially the Software Craftsmanship movement) in which members agree to practice due diligence with regard to producing good quality software. I even like the idea of apprenticeship, and I think that members of the movement ought to take on ‘apprentices’ (in a strictly voluntary interaction). This group of people ought to be software professionals and aspiring software professionals with the common goal of standardizing good practices and raising the bar of software design in general. But I think a slightly different metaphor is in order. After all, what I’m describing is less like a guild and more like a professional association.

Is Software Development an Art?

To this question, I have to give an opinion of a definite “no.” People like to have their work appreciated. If you spend a long enough time pouring your heart and soul into something, you will want to view it in an artistic context because you’ll want others to appreciate what you’ve done and see beauty in your work. But most forms of art, in the commonly accepted sense, are created for pure aesthetic value. Your software is only art if you never compile it, opting instead to print out the source code, frame it, and hang it over your mantle.

Make no mistake–at its core, software is a tool and a means to an end. You may be particularly efficient in creating these tools, but that doesn’t mean that you’re creating art. You’re automating processes and doing a good job of it. But nobody buys software for aesthetics. The only exception I can think of is software whose purpose is to look pretty or aesthetically appealing, such as video games or fractals. But the art there is in the rendering and not the software plumbing.

To refer to software development as an art, in my opinion, undercuts the value of the activity by implying that the goal is aesthetics rather than utility. We’re not selling people software because they want to own something that has the loosest component coupling they’ve ever seen or because it has no global state–we’re selling people software because they want to do their taxes, check their email, or look for reruns of sitcoms on the internet.

Software development is an art only inasmuch as everything is an art. If I hear developers talking about being good at their art, this to me is akin to mechanics talking about the ‘art’ of fixing cars or engineers talking about the ‘art’ of building planes. It’s only art in the vernacular sense of “This person is so much better at this activity than other people that I’m going to use the term ‘art’ to convey my appreciation.” I would argue that there is a certain danger in taking it any further than that. Specifically, the software designer who believes that his work is art is likely to lose sight of the fact that, without a need, there is no use for his work, and the aesthetics of filling that need do not trump the actual filling of the need. Netscape might have been destined to be the most beautifully crafted piece of software in the history of mankind, but Internet Explorer still commands the market because its developers were interested in making money, not selling tickets to the grand unveiling.

Tribal Knowledge

In my last post, I alluded briefly to the concept of “tribal knowledge” when developing software. I’ve heard this term defined in various contexts, but for the sake of discussion here, I’m going to define this as knowledge about how to accomplish a task that is not self-evident or necessarily intuitive. So, for instance, let’s say that you have a front door with a deadbolt lock. If you hand someone else the key and they are able to unlock your door, the process of unlocking the door would be general knowledge. If, however, your door is “tricky” and they have to jiggle the key twice and then turn it to the left before turning it back right, these extra steps are “tribal knowledge.” There is no reasonable way to know to do this except by someone telling you to do it.

Today, I’m going to argue that eliminating as much tribal knowledge as possible from a software development process is not just desirable, but critically important.

Background

Oftentimes, when you join a new team, you’ll quickly become aware of who the “heroes” or “go-to people” are. They might well have interviewed you, and, if not, you’ll quickly be referred to or trained by them. This creates a kind of professor/pupil feeling, conferring a certain status on our department hero. As such, in the natural realpolitik of group dynamics, being the keeper of the tribal knowledge is a good thing.

As a new team member, the opposite tends to be true and your lack of tribal knowledge is a bad thing. Perhaps you go to check in your first bit of code and it reads something like this:

class SomeClass
{
    Foo _foo;
    Bar _bar;
    Baz _baz;
    ReBar _reBar;
    public SomeClass()
    {
        _foo = GodClass.Instance.MakeAFoo();
        _bar = SecondGodClass.Instance.MakeABar(_foo);
        _baz = SecondGodClass.Instance.MakeABaz(_foo);
        _reBar = GodClass.Instance.ReDoBar(_foo);
    }
}

You ask for a code review, and a department hero takes one look at your code and freaks out at you. “You can never call MakeABar before MakeABaz!!! If you do that, the application will crash, the computer will probably turn off, and you might just flood the Eastern Seaboard!”

Duly alarmed, you make a note of this and underline it, vowing never to create Bars before Bazes. You’ve been humbled, and you don’t know why. Thank goodness the Keeper of the Tribal Knowledge was there to prevent a disaster. Maybe someday you’ll be the Keeper of the Tribal Knowledge.

The Problem

Forgetting any notions of politics, seniority, or the tendency for newer or entry level people to make mistakes, the thing that should jump out at you is, “How was the new guy supposed to know that?” With the information available, the answer is that he wasn’t. Of course, it’s possible that all of those methods are heavily commented or that some special documentation exists, but nevertheless, there is nothing intuitive about this interface or the temporal coupling that apparently comes with it.
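
One way to take that kind of rule out of the tribal realm entirely is to encode it in the API itself. Here’s a minimal, hypothetical sketch (with placeholder Foo/Bar/Baz types standing in for the real domain objects): if making a Bar genuinely requires that a Baz already exist, have the method demand one, and the compiler enforces the ordering that the department hero used to.

// Minimal placeholder types, standing in for whatever the real domain objects are.
public class Foo { }
public class Baz { public Baz(Foo foo) { } }
public class Bar { public Bar(Foo foo, Baz baz) { } }

public class BarFactory
{
    public Baz MakeABaz(Foo foo)
    {
        return new Baz(foo);
    }

    // The Baz parameter makes the prerequisite explicit: you literally cannot
    // call this without having made a Baz first, so "never call MakeABar
    // before MakeABaz" stops being something a new hire has to be told.
    public Bar MakeABar(Foo foo, Baz baz)
    {
        return new Bar(foo, baz);
    }
}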

In situations like the one I’ve described here, the learning curve is high. And, what’s more, the pride of having the tribal knowledge reinforces the steep learning curve as a good thing. After all, that tribal knowledge is what separates the wheat from the chaff, and it forces tenure and seniority to matter as much as technical acumen and problem-solving ability. What’s more, it ties knowledge of the problem domain to knowledge of the specific solution, meaning that knowing all of the weird quirks of the software is often conflated with understanding the business domain on which the software operates.

This is a problem for several reasons, by my way of thinking:

  1. A steep learning curve is not a good thing. Adding people to the project becomes difficult and the additions are more likely to break things.
  2. The fact that only a few chosen Keepers of the Tribal Knowledge understand how things work means that their absence would be devastating, should they opt to leave or be out of the office.
  3. The need to know an endless slew of tips and tricks in order to work on a code base means that the code base is unintuitive and difficult to maintain.  Things will degenerate as time goes on, even with heroes and tribal knowledge.
  4. When any question or idea raised by somebody newer to the project can be countered with “you just don’t know about all the factors,” new ideas tend to get short shrift, and the cycle of special tribal knowledge is reinforced that much further.
  5. Essentially, people are being rewarded for creating code that is difficult to understand and maintain.
  6. More philosophically, this tends to create a group dynamic where knowledge hoarding is encouraged and cooperation is minimized.

What’s the Alternative?

What I’m proposing instead of the tribal knowledge system and its keepers is a software development group dynamic where you add a customer or stakeholder. Naturally, you have your user base and any assorted marketing or PM types, but the user you add is the “newbie.” You develop classes, controls, methods, routines, etc, all the while telling yourself that someone with limited knowledge of the technologies involved should be able to piece together how to use it. If someone like that wouldn’t be able to use it, you’re doing something wrong.

After all, for the most part, we don’t demand that our end users go through a convoluted sequence of steps to do something. It is our job and our life’s work to automate and simplify for our user base. So, why should your fellow programmers–also your users in a very real sense–have to deal with convoluted processes? Design your APIs and class interfaces with the same critical eye for ease of use that you do the end product for your end user.

One good way to do this is to use people new to the group as ‘testers.’ They haven’t had time to learn all of the quirks and warts of the software that you’ve lived with for months or years. So ask them to code up a simple addition to it and see where they get snagged and/or ask for help. Then, treat this like QC. When you identify something unintuitive to them, refactor until it is intuitive. Mind you, I’m not suggesting that you necessarily take design suggestions from them at that point any more than you would take them from a customer who has encountered sluggishness in the software and has ‘ideas’ for improvement. But you should view their struggles as very real feedback and seek to improve.

Other helpful ways to combat the Keepers of the Tribal Knowledge Culture are as follows:

  1. Have a specific part of code reviews or even separate code reviews that focus on the usability of the public interfaces of code that is turned in.
  2. Avoid global state. Global state is probably the leading cause of weird work-arounds, temporal coupling, and general situations where you have to memorize a series of unintuitive rules to get things done (see the sketch after this list).
  3. Unit tests! Forcing developers to use their own API/interface is a great way to prevent convoluted APIs and interfaces.
  4. Have a consistent design philosophy and some (unobtrusive) programming conventions.
  5. Enforce a pattern of using declarative and descriptive names for things, even if they are long or verbose. Glancing at a method, GetCustomerInvoicesFromDatabase() is a lot more informative than GtCInvDb(). Saving a few bytes with short member names hasn’t been helpful for twenty years.
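
To sketch what items 2 and 3 might look like in practice for the earlier SomeClass example: instead of reaching out to GodClass.Instance and SecondGodClass.Instance, the class can ask for its collaborators through its constructor. The IFooFactory and IBarFactory abstractions below are hypothetical, but the effect is that the dependencies become visible at the call site and the class becomes easy to exercise from a unit test with simple fakes.

// Minimal placeholders for the domain types from the original example.
public class Foo { }
public class Bar { }
public class Baz { }
public class ReBar { }

// Hypothetical abstractions extracted from GodClass and SecondGodClass.
public interface IFooFactory
{
    Foo MakeAFoo();
    ReBar ReDoBar(Foo foo);
}

public interface IBarFactory
{
    Bar MakeABar(Foo foo);
    Baz MakeABaz(Foo foo);
}

public class SomeClass
{
    private readonly Foo _foo;
    private readonly Bar _bar;
    private readonly Baz _baz;
    private readonly ReBar _reBar;

    // Everything SomeClass needs arrives through the constructor: no hidden
    // global state, no singletons, and a unit test can hand in trivial fakes
    // instead of fighting whatever GodClass.Instance drags along with it.
    public SomeClass(IFooFactory fooFactory, IBarFactory barFactory)
    {
        _foo = fooFactory.MakeAFoo();
        _bar = barFactory.MakeABar(_foo);
        _baz = barFactory.MakeABaz(_foo);
        _reBar = fooFactory.ReDoBar(_foo);
    }
}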

Divide And Conquer

What Programmers Want

In my career, I’ve participated in projects that have run the gamut of degrees of collaboration. That is to say, I’ve written plenty of software on which I served as architect, designer, implementor, tester, and maintainer and I’ve also worked on projects where I was a cog in a much larger effort. I have been a lead, and I have been a junior developer. But, throughout all of these experiences, I have been an observer, thinking through what worked and what didn’t, what made people happy and what made them frustrated, and what could be done to improve matters.

I have found that I tend to be at my happiest when working by myself or as a lead, and I tend to be least happy when I’m working as a cog. When I first came to this realization, I chalked it up to me not being a “team player” and endeavored to fix this about myself, seeking out opportunities to put myself in the situation and find enjoyment in it. However, I came to realize that I had been subtly incorrect in my self-assessment. It wasn’t that I had an issue with not calling the shots on a project, but rather that I had an issue when no decision, however small, was left up to my discretion–that is to say, when I was working in an environment where a manager, technical lead, or someone else wanted to sign off on everything I did, whether it was as large as submitting a rewrite proposal or as small as what I named the local variables inside my methods or what format I used for code comments.

This was subsequently borne out by the experience of working in a collaborative environment where I was not in charge of major decisions, but I was in charge of and responsible for my particular module. I didn’t get to decide what my inputs or outputs would be, but I did get to decide internally how things would work and be designed. I was happy with this arrangement.

Speaking philosophically, I believe that this is important for anyone with a creative spirit and a sense of pride in the work that they do. It doesn’t matter whether the person doing the work is a seasoned professional or an intern. Being able to make decisions, if only small ones, creates a sense of ownership and pride and promotes creative expression. Being denied those decisions creates a sense of apathy about one’s work. In spite of their best intentions, people in the latter situation are going to be inclined to think “what do I care if this works — it wasn’t my idea.” I have experienced firsthand being asked to do something in ‘my’ code and thinking “this is going to fail, but hey, you’re in charge.” This sentiment only begins to occur when you’ve learned by experience that taking the initiative to fix or improve things will result in getting chewed out or being told not to do anything without asking. Someone in this situation will be motivated only by a desire not to get scolded by the micromanagers of the world.

Man Of The People: How to Satisfy Your Programmers

Over time, thinking on this subject has led me to some conclusions about optimal strategies for structuring teams or, at least, dividing up work. I firmly believe in giving each team member a sphere of responsibility or an area of ownership and letting them retain creative control over it until such time as it is proved detrimental. Within that person’s sphere of control, decisions are up to him. It’s certainly reasonable to make critical assessments of what he’s doing or request that he demonstrate the suitability of his approach, but I believe the best format here would be for reviewers or peers to present a perceived better way of doing things and allow the decision to rest with him.

To combat situations where failures may occur, it’s important to create a process where failures are exposed early on. I think the key here is a good division of labor on the development project which, not coincidentally, coincides with good software design practice. Break the effort into modules and appoint each person in charge of his or her own module. One of the first tasks is to define and prioritize the interfaces between the modules, the latter being done based on what is a prerequisite for what. So, if the group is building an expense reporting system, the person in charge of the data access layer should initially provide a small but functional interface so that the person writing the service layer can stub out mock objects and use them for development.
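
To make the expense-reporting example a bit more concrete, the data access owner’s day-one deliverable might be nothing more than a small contract like the one below (the names are hypothetical), plus perhaps a throwaway in-memory stand-in that the service-layer owner can develop against until the real implementation lands:

using System.Collections.Generic;
using System.Linq;

// A deliberately small first cut of the data access module's public interface.
// The service layer codes against this, not against any concrete database code.
public interface IExpenseRecordDao
{
    void Save(ExpenseRecord record);
    IList<ExpenseRecord> GetByEmployee(int employeeId);
}

// A simple data transfer object shared between the two modules.
public class ExpenseRecord
{
    public int EmployeeId { get; set; }
    public decimal Amount { get; set; }
    public string Description { get; set; }
}

// A throwaway in-memory stand-in the service-layer owner can use for
// development and testing until the real data access implementation exists.
public class InMemoryExpenseRecordDao : IExpenseRecordDao
{
    private readonly List<ExpenseRecord> _records = new List<ExpenseRecord>();

    public void Save(ExpenseRecord record)
    {
        _records.Add(record);
    }

    public IList<ExpenseRecord> GetByEmployee(int employeeId)
    {
        return _records.Where(r => r.EmployeeId == employeeId).ToList();
    }
}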

With interfaces defined up front, the project can adopt a practice of frequent integration, and thus failures can be detected early. If it is known from the beginning that failure to live up to one’s commitments to others is the only vehicle by which creative control might be stripped via intervention, people will be motivated early by a sense of pride. If they aren’t, then they probably don’t take much pride in their work and would likely not be bothered by the cession of creative control that will follow. Either that, or they aren’t yet ready and will have to wait until the next project to try again. But, in any event, failure to live up to deadlines surfaces early and will not adversely affect the project. The point is that people are trusted to make good decisions in their spheres of control until they demonstrate that they can’t be trusted. They are free to be creative and prove the validity of their ideas.

If the project is sufficiently partitioned and decoupled with a good architecture, there won’t be those late-in-the-game integration moments where it is realized that module A is completely unworkable with module B. And, as such, there is no need for the role of “cynical veteran that rejects everything proposed by others as unworkable or potentially disastrous.” I think everyone reading can probably recall working with this individual, as there is no shortage of them in the commercial programming world.

I believe there are excellent long-term benefits to be had with this strategy for departments or groups. Implemented successfully, it lends itself well to good software design process, though I wouldn’t say it causes it, since a good architecture and integration process is actually a requirement for this to work. Over the long haul, it will allow people hired for their credentials to provide the organization with the benefits of their creativity and ingenuity. And, part and parcel with this, it is likely to create productive and happy team members who feel a sense of responsibility for the work that they’ve created.

When All Doesn’t Go According to Plan

Of course, there is a potential down side to all of this — it allows for another role with which we’re all probably familiar: “anti-social guy who thinks he’s valuable because he’s a genius but in reality writes such incomprehensible code that no one will so much as glance at it for fear of getting a headache.” Or, perhaps less cynically, you might have the master of the oddball solution. Given creative control of their own projects, these archetypes might create a sustainability mess that they can (or maybe can’t) maintain on their own, but if someone else needs to maintain it, it’s all over.

The solution I propose to this is not to say, “Well, we’ll give people creative control unless we really don’t like the looks of what they’re doing.” That idea is a slippery slope toward micromanaging. After all, one person’s mess may be another’s readable perfection. I think the solution is (1) to make things as objective as possible; and (2) to allow retention of a choice no matter what. I believe this can be accomplished with a scheme wherein some (respected) third party is brought in to examine the code of different people on the team. The amount of time it takes for this person to understand the gist of what the code is doing is recorded. If this takes longer than a certain benchmark, then the author of the difficult-to-understand code is offered a choice – refactor toward easier understanding or spend extra time exhaustively documenting the design.

In this manner, we’re enforcing readability and maintainability while still offering everyone creative control. If you want to write weird code, that’s fine as long as it doesn’t affect anyone else and you go the extra mile to make it understandable. It’s your choice.

Being Flexible

We can, at this point, dream up all manner of different scenarios to try to poke holes in what’s here so far. However, I argue that we’ve established a good heuristic for how to handle them – objective arbitration and choice. Whatever we do, we find a way to allow people to retain creative control over their work and have this justified through objective standards. I hypothesize that the dividends on productivity and team buy-in will counteract, in spades, any difficulties that arise from occasional lack of homogeneity and learning curve in maintenance. And I think these difficulties will be minor anyway, since the nature of the process mandates a decoupled interface with clearly defined specs coming first.