DaedTech

Stories about Software


There is No Such Thing as Waterfall

Don’t try to follow the waterfall model. That’s impossible. Instead, only try to realize the truth: there is no waterfall.

Waterfall In Practice Today

I often see people discuss, argue, and debate the best approach to software development. Most commonly, this involves a discussion of the merits of iterative and/or agile development versus the “more traditional waterfall approach.” What you don’t see nearly as often, but do see every now and then, is the observation that the whole “waterfall” approach is based on a pretty fundamental misunderstanding: the man (Winston Royce) whose 1970 paper produced the iconic diagram of the model (the name “waterfall” was attached later) was holding it up as a straw man, saying, in paraphrase, “this is how to fail at writing software; what you should do instead is iterate.” Any number of agile proponents point out things like this, and it isn’t too hard to make the case that the waterfall development methodology is flawed and can be problematic. But I want to make the case that it doesn’t even actually exist.

I saw a fantastic tweet earlier from Energized Work that said “Waterfall is typically 6 months of ‘fun up front’, followed by ‘steady as she goes’ eventually ramping up to ‘ramming speed’.” This is perfect because it captures the fundamental boom-and-bust, masochistic cycle of this approach. There is a “requirements phase” and a “design phase”, which both amount basically to “do nothing for a while.” This is actually pretty relaxing (although frustrating for ambitious and conscientious developers, since it is usually unstructured time). After a few weeks or months or whatever of thumb-twiddling, development starts, and life is normal for a few weeks or months while the “chuck it over the wall” deadline is too far off for there to be any sense of how late or how bad the software will turn out to be. Eventually, though, everyone starts to figure out that the answers to those questions are “very” and “very”, respectively, and the project kicks into a death-march state and “rams” through the deadline over budget, under-featured, and behind schedule, eventually wheezing to some completion point that miraculously staves off lawyers and lawsuits for the time being.

This is so psychically exhausting to the team that the only possible option is 3 months of doing nothing, er, excuse me, a requirements and design phase for the next project, to rest. After working 60-hour weeks and weekends for a few weeks or months, the developers on the team look forward to these “phases” where they come in at 10 AM, leave at 4 PM, and sit around writing “shall” a lot, drawing on whiteboards, and googling to see if UML has reached code-generation Shangri-La while they were imprisoned in their cubicles for the last few months. Only after this semi-vacation are they ready to start the whole chilling saga again (at least, the ones who haven’t moved on to greener pastures).

Diving into the Waterfall

So, what actually happens during these phases, in a more detailed sense, and what right have I to be so dismissive of requirements and design phases as non-work? Well, I have experienced what I’m describing firsthand on any number of occasions and found that most of my time was spent waiting and trying to invent useful things to do (if not supporting the previous release), but I realize that anecdotal evidence is not universally compelling. What I do consider compelling is that after these weeks or months of “work” you have exactly nothing that will ever be delivered to your end users. Oh, you’ve spent several months planning, but when was the last time anyone worked anywhere near as hard at planning something as at actually doing it? When you were a high school or college kid given class time to make “idea webs” and “outlines” for essays, how often was that time spent diligently working, and how often was it spent planning what to do next weekend? It wasn’t until actual essay-writing time that you stopped screwing around. And while we like to think that we’ve grown up a lot, there is a very natural tendency to regress developmentally when confronted with weeks of time after which no real deliverable is expected. For more on this, see that ambitious side project you’ve really been meaning to get back into.

But the interesting part of this isn’t that people will tend to relax instead of “planning for months”; it’s what happens when development actually starts. Development starts when the team “exits” the “design phase” on that magical day when the system is declared “fully designed” and coding can begin. In a way, it’s like Christmas. And the way it’s like Christmas is that the effect is completely ruined in the first minute that the children tear into the presents and start making a mess. The beautiful design and requirements become obsolete the second the first developer’s first finger touches the first key to add the first character to the first line of code. It’s inevitable.

During the “coding phase”, the developers constantly go to the architect/project manager/lead and say, “what about when X happens? There’s nothing in here about that.” They are then given an answer and, if anyone has time, the various SDLC documents are dutifully updated accordingly. So, developers write code and expose issues, at which time requirements and design are revisited. These small cycles, iterations, if you will, continue throughout the development phase and on into the testing phase, at which time they become more expensive but still happen routinely. Now, those are just the small things that were omitted in spite of months of designing under the assumption of prescience; for the big ones, something called a “change request” is required. This is the same thing, but with more emails, Word documents, and anger, because it’s a bigger alteration and thus a bigger iteration. But in either case, things happen, requirements change, design is revisited, code is altered.

Whoa. Let’s think about that. Once the coding starts, the requirements and design artifacts are routinely changed and updated, the code changes to reflect that, and then incremental testing is (hopefully) done. That doesn’t sound like a “waterfall” at all. That sounds pretty iterative. The only thing missing is involving the stakeholders. So, when you get right down to it, “waterfall” is just dysfunctional iterative development: the first two months (or whatever) are spent screwing around before getting to work, iterations are undertaken and approved internally without stakeholder feedback, and delivery is generally late and over budget, probably by an amount in the neighborhood of the time spent screwing around in the beginning.

The Take-Away

My point here isn’t to try to persuade anyone to alter their software development approach, but rather to clarify the discussion somewhat. What we call “waterfall” is really just a particularly awkward and inefficient iterative approach (the closest thing to an exception I can think of is big government projects, where it actually is possible to freeze requirements for months or years; but then again, these fail at an absolutely incredible rate and are still subject to the “oh yeah, we never considered that situation” changes, if not to external change requests). So there isn’t an “iterative approach” and a “waterfall approach”, but rather an “iterative approach” and an “iterative approach where you procrastinate and scramble at the end.” And I don’t know about you, but I was on the wrong side of that fence enough times as a college kid that I have no taste left for it.


The Death of the Obligatory Comment

The Ben Franklin Effect

Have you ever started looking for something, let’s say your wallet, while being fairly certain that it wasn’t in your house? You started looking in all of the most likely places and progressed with increasing glumness to less and less likely landing spots. Perhaps you eventually wound up in the crawl space that you hadn’t entered since six months before you lost the wallet. If you’ve had an experience like that, do you recall a point at which you thought to yourself, “this is completely pointless,” and yet kept looking anyway?

Even if you can’t recall an experience like this, I imagine that you’ve seen similar stories unfold in countless other places. In politics and government, for example, can you count the number of times some matter of policy has proven to be an obvious mistake but keeps going anyway, like an unstoppable juggernaut, with resolute phrases like “stay the course” uttered as sober non sequiturs? In the end it all seems silly; it seems to be a way of saying “we know this is a mistake and we’re going to keep doing it anyway.”

I believe that this mystifying behavior has two main sources of persistence as a mainstay of the human condition. The first is that, without careful consideration, we tend toward logical fallacy, and this is an example of the fallacy of “appeal to consequences” (aka “wishful thinking”). In other words, “I’m going to look for my wallet in the crawl space because I believe it’s in there, since it would be a real bummer if it weren’t.” We tend to double down on our mistakes because we really wish we hadn’t made them.

The second source of this is a truly fascinating study of our motivations as humans for our actions and interactions. A blog I really like called “You Are Not So Smart” defines this as “the Ben Franklin Effect”. I highly recommend a start-to-finish read of this post, and will warn you here that my next paragraph will be a spoiler, so go read it, please.

Oh, hi there. You’re back? Cool! As you now know, the premise of the post is that Ben Franklin figured out a way to turn someone who didn’t like him (a “hater”) into an ally. He asked that person to do him a favor… and it worked. As it turns out, we tend to like people because we do them favors, rather than doing them favors because we like them. The reason for this is that we construct narratives about our motivations and preferences that rationalize our past decisions: “I must like that guy, since otherwise me doing him a favor would have been stupid, and since I’m obviously not stupid…” (For you rhetoric buffs, this is another logical fallacy, called “begging the question.”)

David McRaney puts this nicely in his post:

If you live in the Deep South you might buy a high-rise pickup and a set of truck nuts. If you live in San Francisco you might buy a Prius and a bike rack. Whatever are the easiest to obtain, loudest forms of the ideals you aspire to portray become the things you own, like bumper stickers signaling to the world you are in one group and not another. Those things then influence you to become the sort of person who owns them.

What Does this Have to Do With Code Comments?

I’m glad you asked. The answer is that for me, and I suspect for a lot of you, putting comment headers above all methods, classes, fields, etc., is a doubling down on a behavior that probably makes no sense, a staying of the course, and a thing that we like doing because we did it.

When I was in college, I don’t think I ever commented my code. At that time, as I recall, we were young and naive and wore code unreadability on our sleeves as a badge of honor; anyone who could execute an entire complex looping sequence in the condition and increment statements of a for loop was a total ninja, after all. When I left school and went into the professional world, I was confronted with people who wrote various kinds of code, but one of them was a Linux kernel hacker whom I truly respected, and I noticed that he dutifully put comment blocks above all of his methods. Included in these were descriptions of what the method did, what it returned, what its arguments were, what invariants it preserved, and a journal log of edits made to it.
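
If you haven’t seen this style in the wild, a hypothetical example of the kind of header I’m describing might look like the following. This is a sketch in Java with invented names, figures, and history entries, not anything lifted from that kernel hacker’s code:

    /**
     * Calculates the monthly payment for a fixed-rate loan.
     *
     * @param principal  the amount borrowed
     * @param annualRate the annual interest rate as a fraction (e.g. 0.05 for 5%)
     * @param months     the number of monthly payments
     * @return the monthly payment amount
     *
     * Invariants: principal and months are positive; annualRate is non-negative.
     *
     * History:
     *   2004-04-15  ebd  changed the rate parameter from float to double
     *   2004-06-02  jkl  added handling for zero-interest loans
     */
    public static double calculateMonthlyPayment(double principal, double annualRate, int months) {
        if (annualRate == 0) {
            // Zero-interest loan: the principal is simply split across the payments.
            return principal / months;
        }
        double monthlyRate = annualRate / 12;
        // Standard amortization formula: P * r / (1 - (1 + r)^-n)
        return principal * monthlyRate / (1 - Math.pow(1 + monthlyRate, -months));
    }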

This was an epiphany for me. All of those guys with their non-commented, zany for loops were amateurs. Real programmers did that kind of ninja stuff and then put a nice, clear comment on top that explained to mere mortals how awesome they were. I was hooked. I wanted to be that guy, and so I became a guy who wrote those kinds of comments above methods and everywhere else. I didn’t do this halfway — I was the kind of guy who went the extra mile to thoroughly document everything.

Years, jobs, and programming languages later, this practice persisted, long outlasting my opinion that it was a badge of honor to cram multiple execution statements into loop iterators and furrow people’s brows with inscrutable pointer arithmetic (I had long since gravitated toward code that was clear, readable, and maintainable). A funny thing started happening to me, though. I started reading blog posts like this one, where people said things like:

What makes comments so smelly in general? In some ways, they represent a violation of the DRY (Don’t Repeat Yourself) principle, espoused by the Pragmatic Programmers. You’ve written the code, now you have to write about the code.

First, where are comments indeed useful (and less smelly)? If you are writing an API, you need some level of generated documentation so that people can use your API. JavaDoc style comments do this job well because they are generated from code and have a fighting chance of staying in sync with the actual code. However, tests make much better documentation than comments. Comments always lie (maybe not now, but on a long enough timeline, all comments will become outdated). Tests can’t lie or they fail. When I’m looking at work in progress on projects, I always go to the tests first. The comments may or may not be there, but the tests define what’s now done and working.

The first few times I read or heard arguments like this, I dismissed them as the work of people too lazy to write comments. As I started reading more such posts… I simply stopped thinking about them. It was a silly argument — after all, I had spent countless hours throughout my career writing comments and I wouldn’t spend all that time doing something that might not be a good idea. A friend of mine likes to satirize this type of reasoning by saying “if the facts don’t fit the dogma, the facts must be discarded,” and while the validity of comments in code is truly a matter of opinion rather than fact, the same logic applies to my reasoning. I discarded an argument because not discarding it would have made me feel bad about prior decisions that I had made.

Nagging Doubts and Real Conclusions

In spite of my best efforts simply to ignore arguments that I didn’t like, I was plagued by cognitive dissonance on the matter. Why would otherwise rational and obviously quite intelligent people say such ridiculous things about my beloved exhaustive commenting? Determined to reconcile these divergent beliefs, I decided one day while documenting code smells on a wiki for a former employer that they didn’t really mean that comments were a code smell in general – they must only be talking about stupid comments like putting “//this is a for loop” above a for loop. Yeah, of course. That’s what they meant.

But, it wasn’t. In spite of myself, I had read and processed the arguments. I knew that comments weren’t DRY and that they represented duplication of the logic of the code. I knew that heavily commented code tended to mask poor naming choices and unintuitive abstractions. I knew that comments, even XML/Javadoc comments for public-facing APIs tended to start out helpful and accurate and then to rot over time as different people took over maintenance and didn’t think to change them along with the code. Heck, I’d experienced that one firsthand with inaccuracies and inconsistencies piling up as surely as they do in a non-normalized database.
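
Concretely, the kind of drift I mean looks something like this. It’s a contrived Java sketch with invented names and numbers: the header still describes the method as it was originally written, while the code has long since moved on.

    /**
     * Returns the customer discount as a percentage between 0 and 10.
     */
    public static double discountFor(int yearsActive, boolean isPremium) {
        // The method has since changed to return a fraction between 0.0 and 0.25,
        // and premium customers now get a higher cap, none of which the comment
        // above mentions because nobody went back and updated it.
        if (isPremium) {
            return 0.25;
        }
        return yearsActive >= 5 ? 0.10 : 0.0;
    }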

So, eventually I was forced to see the light and admit that it was possible and even probable that my efforts had been a complete waste of time over the years. Sound harsh? Well, here’s the thing. How many of the comments that I wrote are still floating around in some code base somewhere, not deleted? And of those, how many are accurate? I’m guessing the percentage is tiny; the comments are dust in the wind. And before they blew away, what are the odds that anyone cared enough to read them? What are the odds that anyone cares that “ebd changed a char* to an int* on 4/15/04”?

So, I’ve stopped viewing comments as obligatory. I write them now with a sort of “when in Rome” sentiment if others do it in the code I’m working on. After all, I have plenty of practice with it. But my real preference, in light of my focus on abstractions, is now to have the attitude that I am forbidden from writing comments. If I want to make my code usable and understandable, I have to do it with tight abstractions, self-documenting code, and designs that make bad decisions impossible. Is that a lofty goal? Sure. But I think it’s a good one. I’ve gone from viewing comments as obligatory to viewing them as indicative of small design failures, and I’m content with that. I think that all along part of me viewed commenting my methods as redundant, and I’ve finally rid myself of the cognitive dissonance.
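
To illustrate what I mean by letting tight abstractions and self-documenting code do the work, here is a minimal, invented Java sketch (the Order domain and all names are mine, purely for illustration): instead of a comment at every call site explaining when a refund is allowed, the rule lives behind an intention-revealing name that can’t silently drift away from the code.

    class Order {
        private static final int REFUND_WINDOW_IN_DAYS = 30;

        private final int ageInDays;
        private final boolean refunded;

        Order(int ageInDays, boolean refunded) {
            this.ageInDays = ageInDays;
            this.refunded = refunded;
        }

        // No "refund if under 30 days old and not yet refunded" comment needed;
        // the method name and the constant say it, and they can only change
        // when the code itself changes.
        boolean isEligibleForRefund() {
            return ageInDays < REFUND_WINDOW_IN_DAYS && !refunded;
        }
    }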


Why the Statement “I Don’t Source Control My Unit Tests” Makes Me Happy

An Extraordinary Claim

I was talking to a friend the other day, and he relayed a conversation that he’d had with a co-worker. They were discussing writing unit tests and the co-worker claimed that he writes unit tests regularly, but discards them and never checks them into source control. I was incredulous at this, so I asked what the rationale was, to which my friend cited his co-worker’s claims (paraphrased) that “[unit tests] are fragile, they take too long to maintain, they prohibit adapting to new requirements, errors are caused by user change requests not incorrect code, and they’re hard to understand.” (emphasis mine, and I’ll return to this later).

This struck me as so bizarre — so preposterous — that I immediately assumed I was out of my league and had missed an industry sea-change. I remember a blog post by Scott Hanselman entitled “I’m A Phony. Are You?” and I felt exactly what he meant. Clearly, I’d been exposed. I was not only wrong about unit tests, but I was 180 degrees, polar opposite, comically wrong. I had wasted all of this time and effort with TDD, unit test maintenance, unit tests in the build, etc when what I really should have been doing was writing these things once and then tossing them since their benefit is outweighed by their cost.

Given that I don’t have direct access to this co-worker, I took to the internet to see if I could locate the landmark post or paper that had turned the tide on this subject, along with the whole host of other posts and papers that had no doubt followed suit. So, I googled “Don’t Check In Unit Tests”. Your mileage may vary because I was logged into Gmail, but I saw old posts by Steve Sanderson and Jeff Atwood encouraging people to write tests, a few snarky posts about not testing, and other sorts of things you might expect. I had missed it. I tried “Don’t version control unit tests” and saw similar results. A few variants on the same search turned up similar results too. Apparently, most of the internet was still thinking inside the box that the co-worker had escaped with his bold new vision.

Finally, I googled “Should I source control unit tests?” and found one link from Programmers Stack Exchange and another from Stack Overflow. In a quick and admittedly very rough count of the people who had answered and/or voted one way or the other, the tally appeared to be about 200 to 0 in favor of source controlling versus leaving the tests out. The answer that most succinctly summarized the unanimous consensus was Greg Whitfield’s, which started with:

Indeed yes. How could anyone ever think otherwise?

Hmmm…

There’s Really No Argument

With my confidence somewhat restored, I started thinking of all the reasons that source controlling unit tests makes sense:

  1. Provides living ‘documentation’ of the class author’s intentions.
  2. Prevents inadvertent regressions during changes.
  3. Allows fearless refactoring of your code and anyone else’s.
  4. Allows incorporation into the build for metrics and verification purposes.
  5. Testing state at the time of tagged/branched versions can be re-created.
  6. (As with source controlling anything) Hard drive or other hardware failure does not result in lost work.

I could probably go on, but I’m not going to bother. Why? Because I don’t think those rationales are actually the reasons for not checking in unit tests. I think it’s more likely that this person who “writes tests but doesn’t actually check them in” might have had a Canadian girlfriend as a teenager. In other words, I sort of suspect that these “disposable” unit tests, whose existence can neither be confirmed nor denied, may not actually exist, and so the rationale for not checking them in becomes, well, irrelevant.

And look at the rationale. The first three clauses (fragile, too long to maintain, prohibit new work) seem to be true only for someone who writes really bad unit tests (i.e., someone without much practice who may, I dunno, be discouraged during early attempts). While it’s true that unit tests, like any other code, constitute a maintenance liability, they need not be even remotely prohibitive, especially in a nicely decoupled code base. The fourth clause (errors are everyone’s fault but the programmer’s) is prima facie absurd, and the fifth and most interesting clause seems to be an unwitting admission of the real problem: a difficulty understanding how to unit test. This is probably the most important reason for “not checking in” or, more commonly and accurately, not writing unit tests at all: they’re hard when you don’t know how to write them and haven’t practiced it.
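
For what it’s worth, a test against reasonably decoupled code doesn’t have to be fragile. Here’s a minimal, hypothetical JUnit 4 sketch (the MortgageCalculator class and its behavior are invented for illustration); it pins down observable behavior rather than implementation details, which is exactly why this sort of test tends to survive refactoring instead of prohibiting it:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class MortgageCalculatorTest {
        @Test
        public void zero_interest_loan_divides_principal_evenly_across_payments() {
            MortgageCalculator calculator = new MortgageCalculator();

            double payment = calculator.monthlyPayment(12000, 0.0, 12);

            // Documents intent and guards against regression without knowing
            // anything about how the calculation is implemented internally.
            assertEquals(1000.0, payment, 0.001);
        }
    }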

So Why the Happy?

You’re probably now wondering about the post title and why any of this would make me happy. It’s simple, really. I’m happy that people feel the need to make excuses (or invent phantom unit tests) to explain why they don’t automate verification of their work. This indicates real progress toward what Uncle Bob Martin describes in a talk that he does called “Demanding Software Professionalism” and alludes to in this post. In his talk, Bob suggests that software development is a nascent field compared to others like medicine and accounting and that we’re only just starting to define what it means to be a software professional. He proposes automated verification in the form of TDD as the equivalent of hand-washing and double-entry bookkeeping, respectively, in those fields, and he thinks that someday we’ll look back on not practicing TDD and clean coding the way we now look back on surgeons who refused to wash their hands prior to surgery.

But having a good idea and even demonstrating that it works isn’t enough. Just ask Ignaz Semmelweis, who discovered and empirically demonstrated that hand-washing reduced patient mortality rates, only to be ridiculed and dismissed by his peers in the face of cold, hard evidence. It wasn’t until later, after Semmelweis had been committed to an insane asylum and died, that his observations and hypothesis got more backers (Lister, Pasteur, et al.) and a better marketing campaign (an actual theoretical framework of explanation called germ theory). In Semmelweis’s time, a surgeon could just call him a crank and refuse to wash his hands before surgery. Decades later, a surgeon who was feeling lazy would have to say, “dude, no, I totally already washed them when you weren’t looking.” You can even ask his Canadian girlfriend.

At the end of the day, I’m happy because the marketing for TDD and clean coding practices must be gaining traction and acceptance if people feel as though they have to make excuses for not doing them. I try to be a glass-half-full kind of guy, and I think that’s the glass-half-full outlook. I mean, one person not wanting to automate tests doesn’t really matter in the scheme of things at the moment, since there is no shortage of people who also don’t want to; but it’s good to see people making excuses and stories for not wanting to, rather than just saying “pff, waste of time.”

(And, for what it’s worth, I do acknowledge the remote possibility that someone actually does write and discard unit tests on a regular and rigorous basis. I just don’t think it’s particularly likely.)


Improve Productivity with the Humble ToDo List

Micro-Scrum

A week or two ago, I read Stephen Walther’s blog post “Scrum in 5 Minutes,” and his description of the backlog reminded me of a practice that I’ve been getting a lot of mileage out of lately. My practice, inspired by Kent Beck in his book Test-Driven Development: By Example, is to keep a simple to-do list of small development tasks as I work.

The parallels here are rather striking if you omit the portions of Scrum that have to do with collaboration and those types of logistics. When starting on a task, I think of the first few things that I’ll need to do, and those go on the list. I prioritize them by putting the most important ones (usually those that will block progress on anything else) at the top, but I don’t really spend a lot of time on this, opting to revise or refine the list if and when I need to. Any new item on the list is yellow, and when it’s done, I turn it green.

There are no intermediate states, and there is no going back. If I have something like “create mortgage calculator class” and I turn it green when I’m happy with the class, I don’t later turn it back to yellow or some other color if the mortgage calculator needs to change. That instead becomes a new task. Generally speaking, I try to limit the number of yellow tasks I have (in kind of a nod to Kanban’s WIP limits), though I don’t have a hard-and-fast rule for this. I just find that my focus gets cluttered when there are too many outstanding tasks.

If I find that a yellow item is taking me a long time, I will delete that item and replace it with several smaller components of it. The aim is always to have my list be a series of tasks that take 5-15 minutes to complete (though they can take less). Items are added both methodically, to complete the task at hand, and as reminders of things that occur to me while I’m doing something else. For example, if I fire up the application to verify a piece of integration that involves a series of steps and I notice that a button is the wrong color, I won’t drop everything and sidetrack myself by changing the button. I’ll add it to my queue; I don’t want to worry about it now, but I don’t want to forget about it either.
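
To make that concrete, a snapshot of one of these lists partway through a (purely hypothetical) task might look something like this, with “done” standing in for green and “open” for yellow:

    [done] Create mortgage calculator class with a monthly payment method
    [done] Handle the zero-interest case
    [open] Wire the calculator into the loan summary screen
    [open] Fix the color of the Submit button (noticed while verifying integration)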

I never actually decided on any of these ‘rules’. They all kind of evolved through a process of natural selection, where I kept the practices that seemed to help me and dropped the ones that didn’t. There will probably be more refinement, but this process is really helping me.

So, What Are the Benefits

Here is a list of benefits that I see, in no particular order:

  1. Forces you to break the problem into manageable pieces (which usually simplifies it for you).
  2. Helps prevent the inadvertent procrastination that sets in when a task seems daunting.
  3. Encourages productivity with fast feedback and “wins”.
  4. Prevents you from forgetting things.
  5. Extrapolated estimation is easier since you’re tracking your work at a more granular level.
  6. Helps you explain sources of complexity later if someone needs to know why you were delayed.
  7. Mitigates the cost of interruptions (not as much “alright, what on Earth was I doing?”).

Your mileage may vary here, and you might have a better process for all I know (and if you do, please share it!). But I’ve found this to be helpful enough to me that I thought I’d throw it out there in case it helped anyone else too.


A Developer Journal – Genius or Neurosis?

Many moons ago, in my first role as a developer, I had very little real work to do for the first month or so on the job, so I occupied myself with poking around the company intranet, jotting down acronyms, figuring out who was responsible for what, and documenting all of this in spreadsheets and Word documents. After a while, I set up a MediaWiki installation and started making actual wiki pages out of all of these thoughts. Some time (and employers) after that, this practice caught on a bit, and I found myself in a position where others started using the wikis and at least getting some value out of them.

For the last couple of years now, I’ve also been blogging, and before that I was in a grad program where I wrote term papers, research papers, etc. Both of these activities are a bit more focused than knowledge dumps on a wiki, but they are also forms of chronicling my experiences. So, long story short, for the entirety of my career, I’ve been heavily documenting pretty much everything I do.

When I moved into my house, I found a bunch of memorabilia and personal keepsakes stuffed in the attic. In an attempt to figure out who they belonged to, I read through some journals that were there and found that they consisted of an incredibly mundane chronicling of days: what the weather was like, times awake and asleep, grocery trips, etc. It is my hope that my own chronicling of my developer life is not quite as banal as this, but even if it is, c’est la vie, I suppose. And who knows, perhaps the author of those journals needed this information for some purpose I couldn’t discern (tracking a medical condition, staying organized and focused, etc.).

In honor of this mystery person in my attic and my own natural tendency over time toward more and more documentation, I’ve decided to start my own “developer” journal, and I’ve logged my first entries this week. The journal is just a Word document at the moment, so I’m getting back to basics after my previous ascent through Excel, MediaWiki, and WordPress, but I think this is good. All of those recording forms have a tendency toward hierarchical or formal organization that I don’t really want here. This is like me jotting notes during meetings in a notebook, but with less “action item: give Bill the TPS reports” and more “I just spent an hour trying to figure out why my CSS file was triggering an error, and in reality it turned out to be unrelated problem X”.

Here’s what I do so far. I spend a sentence or two describing what I worked on during various time windows throughout the day and whenever I switch tasks. Given that I do work where clients are billed for my time, it makes a lot of sense to document that for later, when I’m filling out more formal accountings of my work (I mainly use Grindstone for that because of its precision and UI, but it’s also kind of nice to have the information “backed up” in narrative form for context).

In addition to that bit of context, I make notes any time someone helps me with something, introduces me to something new, etc. After all, there’s nothing worse than when you ask someone how to do X, get distracted for a few minutes, go to do X, and realize you need to ask again. I try to avoid looking like an idiot whenever possible, even if it isn’t always easy. So assists, notes, code review suggestions, etc go in here too.

And finally, I have two other things that I do. In green italics, I insert “lessons learned”. This is something like “Lesson Learned: if you compile a WPF project in VS 2010 with a XAML file focused in the XAML editor, you’ll sometimes get spurious compiler errors.” So, this is a more crystallized form of notes in that it focuses on things that I’ll probably want to remember later. The other thing is concerns/observations/suggestions, and those get orange italics. These are things like “I see a lot of duplication here in this code and that’s a code smell, but I don’t yet have enough context to speak authoritatively.” The orange entries function as a way for me to keep track of things that I think could be improved (previously, I’ve always kept a spreadsheet somewhere called “suggested refactorings” or something like that). I color-code these things because I feel like at some point later I may want to assemble them into a list.
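
For the sake of illustration only, a (completely hypothetical) day’s worth of entries might look something like this, with the color coding rendered here as labels:

    9:15 - 10:30   Worked on the loan summary screen; wired in the new calculator class.
    10:30 - 11:00  Code review; suggestion to extract the duplicated validation logic.
    Lesson Learned: if you compile a WPF project in VS 2010 with a XAML file focused
    in the XAML editor, you'll sometimes get spurious compiler errors.
    Concern/Observation: lots of duplicated validation code in this module; a possible
    refactoring candidate once I have more context.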

So here’s my thinking with this. I like to write and document, as should be obvious from my blogging and other documenting activities. But there’s a clear difference between putting together nice, composed presentations/posts/essays and simply recording every thought that makes its way into your brain. The developer journal is a way to get the best of both worlds. I can jot down stuff that I’m not sure about but think might be important, or that I might want to remember later, without boring people in a wiki/blog/etc. if it turns out not to matter. I guess you could say I’m keeping the journal so that I can remember more of what I think while also applying a better filter.

Does anyone else do anything like this? If not (or if so), does this seem like a good idea, or does this just seem neurotic and weird? Would you do something like this? Please feel free to weigh in below in the comments.