DaedTech

Stories about Software

How Developers Stop Learning: Rise of the Expert Beginner

Beyond the Dead Sea: When Good Software Groups Go Bad

I recently posted what turned out to be a pretty popular post called “How to Keep Your Best Programmers,” in which I described what most skilled programmers tend to want in a job and why they leave if they don’t get it.

Today, I’d like to shift the focus to the software group at an organization rather than to the individual journeys of developers as they move within or among organizations. This post became long enough as I was writing it that I felt I had to break it into at least two pieces. This is part one.

In the previous post I mentioned, I linked to Bruce Webster’s “Dead Sea Effect” post, which describes a trend whereby the most talented developers tend to be the most marketable and thus the ones most likely to leave for greener pastures when things go a little sour. On the other hand, the least talented developers are more likely to stay put since they’ll have a hard time convincing other companies to hire them.

This serves as important perspective for understanding why it’s common to find people with titles like “super-duper-senior-principal-fellow-architect-awesome-dude,” who make a lot of money and perhaps even wield a lot of authority but aren’t very good at what they do. But that perspective still focuses on the individual. It explains the group only if one assumes that a bad group is the result of a number of these individuals happening to work in the same place (or possibly that conditions are so bad that they drive everyone except these people away).

I believe that there is a unique group dynamic that forms and causes the rot of software groups in a way that can’t be explained by bad external decisions causing the talented developers to evaporate. Make no mistake–I believe that Bruce’s Dead Sea Effect is both the catalyst for and the logical outcome of this dynamic, but I believe that some magic has to happen within the group to transmute external stupidities into internal and pervasive software group incompetence.

In the next post in this series, I’m going to describe the mechanism by which some software groups trend toward dysfunction and professional toxicity. In this post, I’m going to set the stage by describing how individuals opt into permanent mediocrity and reap rewards for doing so.

Learning to Bowl

Before I get to any of that, I’d like to treat you to the history of my bowling game. Yes, I’m serious.

I am a fairly athletic person. Growing up, I was always picked somewhere in the top third or so for any sport or game being played, no matter what it was. I was a jack of all trades and master of none. This inspired in me a mildly inappropriate sense of entitlement to skill without a lot of effort, and so it went when I became a bowler.

Most people who bowl put a thumb and two fingers in the ball and carefully cultivate a toss that causes the ball to start wide and hook into the middle. With no patience for learning that, I discovered I could do a pretty good job of faking it by putting no fingers or thumb in the ball and kind of twisting my elbow and chucking the ball down the lane.

It wasn’t pretty, but it worked.

It actually worked pretty well the more I bowled, and, when I started to play in an after work league for fun, my average really started to shoot up. I wasn’t the best in the league by any stretch–there were several bowlers, including a former manager of mine, who averaged between 170 and 200, but I rocketed up past 130, 140, and all the way into the 160 range within a few months of playing in the league.

Not too shabby.

But then a strange thing happened. I stopped improving. Right at about 160, I topped out.

I asked my old manager what I could do to get back on track with improvement, and he said something very interesting to me. Paraphrased, he said something like this:

There’s nothing you can do to improve as long as you keep bowling like that. You’ve maxed out. If you want to get better, you’re going to have to learn to bowl properly.

You need a different ball, a different style of throwing it, and you need to put your fingers in it like a big boy. And the worst part is that you’re going to get way worse before you get better, and it will be a good bit of time before you get back to and surpass your current average.

I resisted this for a while but got bored with my lack of improvement and stagnation (a personal trait of mine–I absolutely need to be working toward mastery or I go insane) and resigned myself to the harder course. I bought a bowling ball, had it custom drilled, and started bowling properly.

Ironically, I left that job almost immediately after doing that and have bowled probably eight times in the years since, but c’est la vie, I suppose. When I do go, I never have to rent bowling shoes or sift through the alley balls for ones that fit my fingers.

Dreyfus, Rapid Returns and Arrested Development

In 1980, a couple of brothers with the last name Dreyfus proposed a model of skill acquisition that has gone on to have a fair bit of influence on discussions about learning, process, and practice. Later, they went on to publish a book based on this paper, and in that book they refined the model a bit into its current form, as shown on Wikipedia.

The model lists five phases of skill acquisition:

  1. Novice
  2. Advanced Beginner
  3. Competent
  4. Proficient
  5. Expert

There’s obviously a lot to it, since it takes an entire book to describe it, but the gist of it is that skill acquirers move from “dogmatic following of rules and lack of big picture” to “intuitive transcending of rules and complete understanding of big picture.”

All things being equal, one might assume that there is some sort of natural, linear advancement through these phases, like earning belts in karate or money in the corporate world. But in reality, it doesn’t shake out that way, due to both perception and attitude.

At the moment one starts acquiring a skill, one is completely incompetent, which triggers an initial period of frustration and being stymied while waiting for someone, like an instructor, to spoon-feed process steps to the acquirer (or else, as Dreyfus and Dreyfus put it, they “like a baby, pick it up by imitation and floundering”). After a relatively short phase of being a complete initiate, however, one reaches a point where the skill acquisition becomes possible as a solo activity via practice, and the renewed and invigorated acquirer begins to improve quite rapidly as he or she picks “low hanging fruit.”

Once all that fruit is picked, however, the unsustainably rapid pace of improvement levels off somewhat, and further proficiency becomes relatively difficult from there forward. I’ve created a graph depicting this (which actually took me an embarrassingly long time because I messed around with plotting a variant of the logistic 1/(1 + e^-x) function instead of drawing a line in Paint like a normal human being).

This is the exact path that my bowling game followed from bowling incompetence to some degree of bowling competence. I rapidly improved to the point of competence and then completely leveled off. In my case, improvement hit a local maximum and then stopped altogether, as I was too busy to continue on my path as-is or to follow through with my retooling.

This is an example of what, for the purposes of this post, I will call “arrested development.” (I understand the overlap with a loaded psychology term, but forget that definition for our purposes here, please.) In the sense of skills acquisition, one generally experiences arrested development and remains at a static skill level for one of two reasons: maxing out on aptitude or some kind of willingness to cease meaningful improvement.

For the remainder of this post and this series, let’s discard the first possibility (since most professional programmers wouldn’t max out at or before bare minimum competence) and consider an interesting, specific instance of the second: voluntarily ceasing to improve because of a belief that expert status has been reached and thus further improvement is not possible.

This opting into indefinite mediocrity is the entry into an oblique phase in skills acquisition that I will call “Expert Beginner.”

The Expert Beginner

The Road to Expert... and Expert Beginner

When you consider the Dreyfus model, you’ll notice that there is a trend over time from being heavily rules-oriented and having no understanding of the big picture to being extremely intuitive and fully grasping the big picture. The Advanced Beginner stage is the last one in which the skill acquirer has no understanding of the big picture.

As such, it’s the last phase in which the acquirer might confuse himself with an Expert. A Competent has too much of a handle on the big picture to confuse himself with an Expert: he knows what he doesn’t know. This isn’t true during the Advanced Beginner phase, since Advanced Beginners are on the “unskilled” end of the Dunning-Kruger effect and tend to epitomize the notion that, “if I don’t understand it, it must be easy.”

As such, Advanced Beginners can break one of two ways: they can move to Competent and start to grasp the big picture and their place in it, or they can ‘graduate’ to Expert Beginner by assuming that they’ve graduated to Expert.

This actually isn’t as immediately ridiculous as it sounds. Let’s go back to my erstwhile bowling career and consider what might have happened had I been the only or best bowler in the alley. I would have started out doing poorly and then quickly picked the low hanging fruit of skill acquisition to rapidly advance.

Dunning-Kruger notwithstanding, I might have rationally concluded that I had a pretty good aptitude for bowling as my skill level grew quickly. And I might also have concluded somewhat rationally (if rather arrogantly) that my leveling off indicated that I had reached the pinnacle of bowling skill. After all, I don’t see anyone around me who’s better than me, and there must be some point of mastery, so I guess I’m there.

The real shame of this is that a couple of inferences that aren’t entirely irrational lead me to a false feeling of achievement and then spur me on to opt out of further improvement. I go from my optimistic self-assessment to a logical fallacy as my bowling career continues: “I know that I’m doing it right because, as an expert, I’m pretty much doing everything right by definition.” (For you logical fallacy buffs, this is circular reasoning/begging the question).

Looking at the graphic above, you’ll notice that it depicts a state machine of the Dreyfus model as you would expect it. At each stage, one might either progress to the next one or remain in the current one (with the exception of Novice and Advanced Beginner, phases at which I feel one can’t really remain without abandoning the activity). The difference is that I’ve added the Expert Beginner to the chart as well.

The Expert Beginner has nowhere to go because progression requires an understanding that he has a lot of work to do, and that is not a readily available conclusion. You’ll notice that the Expert Beginner is positioned slightly above Advanced Beginner but not on the level of Competence. This is because he is not competent enough to grasp the big picture and recognize the irony of his situation, but he is slightly more competent than the Advanced Beginner due mainly to, well, extensive practice being a Beginner.
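
For those who would rather read code than a chart, here is a rough sketch of the transitions I am describing (purely illustrative; the enum, class, and method names are just shorthand I made up for the diagram, not anything from the Dreyfus and Dreyfus book):

    using System.Collections.Generic;
    using System.Linq;

    public enum Phase { Novice, AdvancedBeginner, Competent, Proficient, Expert, ExpertBeginner }

    public static class SkillAcquisition
    {
        // Possible next phases for each phase. Competent, Proficient, and Expert can
        // remain where they are; Novice and Advanced Beginner must move on (or quit);
        // the Expert Beginner has nowhere to go at all.
        public static IEnumerable<Phase> NextPhases(Phase current)
        {
            switch (current)
            {
                case Phase.Novice:           return new[] { Phase.AdvancedBeginner };
                case Phase.AdvancedBeginner: return new[] { Phase.Competent, Phase.ExpertBeginner };
                case Phase.Competent:        return new[] { Phase.Competent, Phase.Proficient };
                case Phase.Proficient:       return new[] { Phase.Proficient, Phase.Expert };
                case Phase.Expert:           return new[] { Phase.Expert };
                case Phase.ExpertBeginner:   return new[] { Phase.ExpertBeginner }; // dead end: no path out
                default:                     return Enumerable.Empty<Phase>();
            }
        }
    }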

If you’ve ever heard the aphorism about “ten years of experience or the same year of experience ten times,” the Expert Beginner is the epitome of the latter. The Expert Beginner has perfected the craft of bowling a 160 out of 300 possible points by doing exactly the same thing week in and week out with no significant deviations from routine or desire to experiment. This is because he believes that 160 is the best possible score by virtue of the fact that he scored it.

Expert Beginners in Software

Software is, unsurprisingly, not like bowling. In bowling, feedback cycles are on the order of minutes, whereas in software, feedback cycles tend to be on the order of months, if not years. And what I’m talking about with software is not the feedback cycle of compile or run or unit tests, which is minutes or seconds, but rather the project.

It’s during the full lifetime of a project that a developer gains experience writing code, source controlling it, modifying it, testing it, and living with previous design and architecture decisions during maintenance phases. With everything I’ve just described, a developer is lucky if a first pass through that cycle takes less than six months, which means that, after five years in the industry, a developer has had maybe ten cracks at application development. (This is on average–some will be stuck on a single project that whole time while others will have dozens.)

What this means is that the rapid acquisition phase of a software developer–Advanced Beginnerism–will last for years rather than weeks. And during these years, the software developers are job-hopping and earning promotions, especially these days. As they breeze through rapid acquisition, so too do they breeze through titles like Software Engineer I and II and then maybe “Associate” and “Senior,” and perhaps eventually on up to “Lead” and “Architect” and “Principal.”

So while in the throes of Dunning-Kruger and Advanced Beginnerism, they’re being given expert-sounding titles and told that they’re “rock stars” and “ninjas” and whatever by recruiters–especially in today’s economy. The only thing stopping them from taking the natural step into the Expert Beginner stage is a combination of peer review and interaction with the development community at large.

But what happens when the Advanced Beginner doesn’t care enough to interact with the broader community and for whatever reason doesn’t have much interaction with peers? The Daily WTF is filled with such examples.

They fail even while convinced that the failure is everyone else’s fault, and the nature of the game is such that blaming others is easy and handy to relieve any cognitive dissonance. They come to the conclusion that they’ve quickly reached Expert status and there’s nowhere left to go. They’ve officially become Expert Beginners, and they’re ready to entrench themselves into some niche in an organization and collect a huge paycheck because no one around them, including them, realizes that they can do a lot better.

Until Next Time

And so we have chronicled the rise of the Expert Beginner: where they come from and why they stop progressing. In the next post in this series, I will explore the mechanics by which one or more Expert Beginners create a degenerative situation in which they actively cause festering and rot in the dynamics of groups that have talented members or could otherwise be healthy.

Next up: How Software Groups Rot: Legacy of the Expert Beginner

Changing My Personal Coding Standards

Many moons ago, I blogged about creating a DX Core plugin and admitted that one of my main motivations for doing this was to automate conversion of my code to conform to a standard that I didn’t particularly care for. One of the conversions I introduced, as explained in that post, is that I like to prepend “my” as a prefix on local, method-level variables to distinguish them from method parameters (they’re already distinguished from class fields, which are prepended with an underscore). I think my reasoning here was and continues to be solid, but I also think that it’s time for me to say goodbye to this coding convention.

It will be tough to do, as I’ve been in the habit of doing this for years. But after a few weeks, or perhaps even days, I’m sure I’ll be used to the new way of doing things. So why do this? Well, I was mulling over a problem in the shower the other day when the idea first took hold. Lately, I’ve been having a problem where the “my” creeps its way into method parameters, thus completely defeating the purpose of the convention. This happens because, over the last couple of years, I’ve been relying ever more heavily on the “extract method” refactoring. Between CodeRush making this very easy and convenient, the Uncle Bob clean-coding approach of “refactor ’til you drop,” and my preference for TDD, I constantly refactor methods, and this often results in what were local variables becoming method parameters while retaining their form-describing (and now misleading) “my” prefix.
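
To make the problem concrete, here is a contrived sketch (the class and member names are made up for illustration, not code from the plugin post):

    using System.Collections.Generic;
    using System.Linq;

    public class OrderProcessor
    {
        private readonly decimal _taxRate;  // class fields already get the underscore prefix

        public OrderProcessor(decimal taxRate)
        {
            _taxRate = taxRate;
        }

        public decimal GetTotal(IEnumerable<decimal> lineItemPrices)
        {
            // Local variables get the "my" prefix to set them apart from parameters...
            decimal mySubtotal = lineItemPrices.Sum();
            return AddTax(mySubtotal);
        }

        // ...but after an "extract method" refactoring, the former local shows up here
        // as a parameter, still wearing its now-meaningless "my" prefix.
        private decimal AddTax(decimal mySubtotal)
        {
            return mySubtotal * (1 + _taxRate);
        }
    }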

What to do? My first thought was “just be diligent about renaming method parameters”, but that clearly violates my philosophy that we should make the bad impossible. My second thought was to write my own refactoring and perform some behind-the-scenes black magic with the DXCore libraries, but that seems like complete overkill (albeit a fun thing to do). My third thought bonked me in the head seemingly out of nowhere: why not just stop using “my”?

I realized that the reasons I had for using it had been slowly phased out by changes in my approach to coding. I wanted to be able to tell instantly what scope something had just by looking at it, but that’s pretty easy to do regardless of what you name things when you’re writing methods that are only a few lines long. It also becomes easy to tell the scope of things when you give longer, more descriptive names to locals, instead of using constants. And techniques like command-query separation make it rare that you need to worry about the scope of something before you alter it, since the small method’s purpose (alteration or querying) should be readily apparent. Add to that the fact that other people I collaborate with at times seem not to be fans of this practice, and the reasons to do it have all kind of slipped away for me except for the “I’ve always done it that way” reason, which I abhor.
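
As a sketch of what I mean by command-query separation making scope worries moot (again, hypothetical names, purely for illustration):

    using System.Collections.Generic;
    using System.Linq;

    public class Cart
    {
        private readonly List<decimal> _itemPrices = new List<decimal>();

        // Command: changes state and returns nothing.
        public void AddItem(decimal price)
        {
            _itemPrices.Add(price);
        }

        // Query: computes a value and changes nothing. In a method this short, there is
        // no confusing a parameter with a local, so a "my" prefix buys nothing.
        public decimal GetSubtotal()
        {
            return _itemPrices.Sum();
        }
    }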

The lesson here for me and hopefully for anyone reading is that every now and then, it’s a good idea to examine something you do out of pure habit and decide whether that thing makes sense any longer. The fact that something was once a good idea doesn’t mean that it always will be.

There is No Such Thing as Waterfall

Don’t try to follow the waterfall model. That’s impossible. Instead, only try to realize the truth: there is no waterfall.

Waterfall In Practice Today

I often see people discuss, argue, and debate the best approach or type of approach to software development. Most commonly, this involves a discussion of the merits of iterative and/or agile development versus the “more traditional waterfall approach.” What you don’t see nearly as commonly, but do see every now and then, is the observation that the whole “waterfall” approach is based on a pretty fundamental misunderstanding: the man (Royce) who coined the term and created the iconic diagram of the model was holding it up as a strawman to say (paraphrased) “this is how to fail at writing software — what you should do instead is iterate.” Any number of agile proponents may point out things like this, and it isn’t too hard to make the case that the waterfall development methodology is flawed and can be problematic. But I want to make the case that it doesn’t even actually exist.

I saw a fantastic tweet earlier from Energized Work that said “Waterfall is typically 6 months of ‘fun up front’, followed by ‘steady as she goes’ eventually ramping up to ‘ramming speed’.” This is perfect because it indicates the fundamental boom and bust, masochistic cycle of this approach. There is a “requirements phase” and a “design phase”, which both amount basically to “do nothing for a while.” This is actually pretty relaxing (although frustrating for ambitious and conscientious developers, as this is usually unstructured-unstructured time). After a few weeks or months or whatever of thumb-twiddling, development starts, and life is normal for a few weeks or months while the “chuck it over the wall” deadline is too far off for there to be any sense of how late or bad the software will turn out to be. Eventually, though, everyone starts to figure out that the answers to those questions are “very” and “very”, respectively, and the project kicks into a death march state and “rams” through the deadline over budget, under-featured, and behind schedule, eventually wheezing to some completion point that miraculously staves off lawyers and lawsuits for the time being.

This is so psychically exhausting to the team that the only possible option is three months of doing nothing (er, excuse me, a requirements and design phase for the next project) in order to rest. After working 60-hour weeks and weekends for a few weeks or months, the developers on the team look forward to these “phases,” where they come in at 10 AM, leave at 4 PM, and sit around writing “shall” a lot, drawing on whiteboards, and googling to see if UML has reached code-generation Shangri-La while they were imprisoned in their cubicles for the last few months. Only after this semi-vacation are they ready to start the whole chilling saga again (at least the ones that haven’t moved on to greener pastures).

Diving into the Waterfall

So, what actually happens during these phases, in a more detailed sense, and what right have I to be so dismissive of requirements and design phases as non-work? Well, I have experienced what I’m describing firsthand on any number of occasions, and found that most of my time is spent waiting and trying to invent useful things to do (if not supporting the previous release), but I realize that anecdotal evidence is not universally compelling. What I do consider compelling is that after these weeks or months of “work” you have exactly nothing that will ever be delivered to your end-users. Oh, you’ve spent several months planning, but when was the last time planning something was worked at anywhere near as much as actually doing something? When you were a high school or college kid and given class time to make “idea webs” and “outlines” for essays, how often was that time spent diligently working, and how often was that time spent planning what to do next weekend? It wasn’t until actual essay writing time that you stopped screwing around. And, while we like to think that we’ve grown up a lot, there is a very natural tendency to regress developmentally when confronted with weeks of time after which no real deliverable is expected. For more on this, see that ambitious side project you’ve really been meaning to get back into.

But the interesting part of this isn’t that people will tend to relax instead of “plan for months” but what happens when development actually starts. Development starts when the team “exits” the “design phase” on that magical day when the system is declared “fully designed” and coding can begin. In a way, it’s like Christmas. And the way it’s like Christmas is that the effect is completely ruined in the first minute that the children tear into the presents and start making a mess. The beautiful design and requirements become obsolete the second the first developer’s first finger touches the first key to add the first character to the first line of code. It’s inevitable.

During the “coding phase”, the developers constantly go to the architect/project manager/lead and say “what about when X happens — there’s nothing in here about that.” They are then given an answer and, if anyone has time, the various SDLC documents are dutifully updated accordingly. So, developers write code and expose issues, at which time requirements and design are revisited. These small cycles, iterations, if you will, continue throughout the development phase, and on into the testing phase, at which time they become more expensive but still happen routinely. Now, those are just small things that were omitted in spite of months of designing under the assumption of prescience — for the big ones, something called a “change request” is required. This is the same thing, but with more emails and Word documents and anger, because it’s a bigger alteration and thus a bigger iteration. But, in either case, things happen, requirements change, design is revisited, code is altered.

Whoa. Let’s think about that. Once the coding starts, the requirements and design artifacts are routinely changed and updated, the code changes to reflect that, and then incremental testing is (hopefully) done. That doesn’t sound like a “waterfall” at all. That sounds pretty iterative. The only thing missing is involving the stakeholders. So, when you get right down to it, “Waterfall” is just dysfunctional iterative development where the first two months (or whatever) are spent screwing around before getting to work, where iterations are undertaken and approved internally without stakeholder feedback, and where delivery is generally late and over budget, probably by an amount in the neighborhood of the amount of time spent screwing around in the beginning.

The Take-Away

My point here isn’t to try to persuade anyone to alter their software development approach, but rather to try to clarify the discussion somewhat. What we call “waterfall” is really just a specifically awkward and inefficient iterative approach (the closest thing to an exception I can think of is big government projects where it actually is possible to freeze requirements for months or years, but then again, these fail at an absolutely incredible rate and will still be subject to the “oh yeah, we never considered that situation” changes, if not the external change requests). So there isn’t “iterative approach” and “waterfall approach”, but rather “iterative approach” and “iterative approach where you procrastinate and scramble at the end.” And, I don’t know about you, but I was on the wrong side of that fence enough times as a college kid that I have no taste left for it.

The Perils of Free Time at Work

Profitable Free Time

If you’ve ever been invited to interview at Google or you simply keep up with the way it does things, you’re probably familiar with Google’s “20 percent time”. According to HowStuffWorks (of all places):

Another famous benefit of working at Google is the 20 percent time program. Google allows its employees to use up to 20 percent of their work week at Google to pursue special projects. That means for every standard work week, employees can take a full day to work on a project unrelated to their normal workload. Google claims that many of their products in Google Labs started out as pet projects in the 20 percent time program.

In other words, you can spend 4 days per week helping the company’s bottom line and one day a week chasing rainbows and implementing unicorns. The big picture philosophy is that the unbridled freedom to pursue one’s interests will actually result in profitable things for the company in the long run. This is a good example of something that will tend to keep programmers around by promoting mastery, autonomy, and purpose.

We tend to think of this perk as characteristic of progressive startup employers, where you imagine beer in the fridge, air hockey tables and PlayStations in the break room, and various other perks designed to make it comfortable to work 70+ hour weeks in the “work hard, play hard” culture. But interestingly, this idea goes all the way back to 3M in 1948:

3M launched the 15 percent program in 1948. If it seems radical now, think of how it played as post-war America was suiting up and going to the office, with rigid hierarchies and increasingly defined work and home roles. But it was also a logical next step. All those early years in the red taught 3M a key lesson: Innovate or die, an ethos the company has carried dutifully into the 21st century.

So, for over half a century, successful companies (or at least a narrow subset of them) have succeeded in part by allowing their employees a portion of their paid time to do whatever they please, within reason. That seems to make a good case for this practice, and for a developer who finds himself in this position to be happy about it.

We Interrupt this Post to Bring You A Lesson From Donald Rumsfeld

Every so often, politicians or other public figures say things that actually make sense but go down in sound-bite lore as debacles. John Kerry’s “I voted for the bill before I voted against it” comes to mind, but a very unfortunately misunderstood one, in my opinion, is this one from Donald Rumsfeld:

[T]here are known knowns; there are things we know that we know. There are known unknowns; that is to say there are things that, we now know we don’t know. But there are also unknown unknowns – there are things we do not know, we don’t know.

Admittedly, this is somewhat of a rhetorical mind-twister, but when you think about what he’s saying, not only does it make sense, but it is fairly profound. For example, consider your commute to work. When you go to your car, you know what kind of car you drive. That is a “known-known.” As you get in your car to go to work, it may take you 30 to 50 minutes depending on traffic. The traffic is a “known-unknown” in that you know it will exist, but you don’t know what it will be like, and you are able to plan accordingly (i.e., you allow 50 minutes even though it may take you less). But what if, while getting in your car, someone told you that you wouldn’t be at work for about four hours? Maybe a fender-bender? Maybe a family member calls you with an emergency? Who knows… these are the “unknown-unknowns” — the random occurrences in life for which one simply can’t plan.

The reason that I mention this is that I’d like to introduce a similar taxonomy for free time as it relates to our professional lives, and I’ll ask you to bear with me the way I think you should bear with Rummy.

Structured-Unstructured Time or Unstructured-Unstructured Time

Let’s adopt these designations for time at the office. The simplest way to talk about time is what I’ll call “structured-structured time.” This is time where your boss has told you to work on X and you are dutifully working on X. Google’s “20 percent time” and 3M’s “15 percent time” are examples of what I’ll call “structured-unstructured time.” This is time that the organization has specifically set aside to let employees pursue whims and fancies while not being accountable to normal scheduling considerations. It is unstructured time with a purpose. The final type(**) of time is what I’ll call “unstructured-unstructured” time, and this is time that you spend doing nothing directly productive for the company without the company having planned on you doing that. The most obvious example I can think of is if your boss tells you to go make 10,000 copies, the copy machine is broken, and you have no other assignments. At this point, you have unstructured-unstructured time, where you might do anything from taking it upon yourself to fix the copy machine to taking a nap in the break room.

Making this distinction may seem like semantic nitpicking, but I believe that it’s important. I further believe that unstructured-unstructured time chips away at morale even as the right amount of structured-unstructured time makes it soar. With structured-unstructured time, the company is essentially saying “we believe in you and, in fact, we believe so much in you that we’re willing to take a 20 percent hit to your productivity on the gamble that you doing what interests you will prove to be profitable.” Having someone place that kind of confidence in you is both flattering and highly motivating. The allure of doing things that interest you, combined with living up to the expectations, will make you work just as hard during structured-unstructured time as you would at any other time, if not harder. I’ve never had this perk, but I can say without a doubt that this is the day I’d be most likely to put in 12 or 13 hours at the office.

Contrast this with unstructured-unstructured time and the message conveyed to you by the organization. Here the company is basically saying the opposite: “we value your work so little that we’d rather pay you to do nothing than distract someone much more important than you with figuring out what you should do.” Have you ever been a new hire and twiddled your thumbs for a week or even a month, perusing stuff on the local intranet while harried employees shuffled busily around you? Ever needed input or approval from a more senior team member and spent the whole day clicking through reddit or slashdot? How about telling a manager that you have nothing you can work on and hearing, “that’s okay — don’t worry about it — you deserve some downtime.”

The difference in how this time is perceived is plain: “we’re going to carve out time because we are excited to see just how much you can do” versus “we don’t really give a crap what you do.”

But Don’t People Like Free Time and Freedom From Pressure?

You would think that everyone would appreciate the unstructured-unstructured time when it came their way, but I believe that this largely isn’t the case. Most people start to get nervous that they’re being judged if they have a lot of this time. Many (myself included) start to invent side projects or things that they think will help the company as ways to fill the vacuum and hopefully prove themselves, but this plan often backfires, as organizations that can’t keep employees busy and challenged are unlikely to be the kind of organizations that value this “self-starter” behavior and the projects it produces. So the natural tendency to flourish during unstructured time becomes somewhat corrupted as it is accompanied by subtle feelings of purposelessness and paranoia. About the only people immune to this effect are the Dead Sea, checked-out types who are generally aware, consciously or subconsciously, that being productive or not doesn’t affect promotions or status anyway, so they might as well catch up on their reading.

I believe there is a natural progression during unstructured-unstructured time: from starting off industrious and opportunistically trying to create structured-unstructured time, to giving up and embarking on self-improvement/training missions, to sending out resumes. So I would caution you: if you’re a manager or lead, make sure that you’re not letting your team flail around without structure this way. If you think they’re out of tasks, assign them some. Make time for them. Or, if nothing else, at least tell them what Google and 3M tell their people — we value your ingenuity, so this is actually planned (not that I’m advocating lying, but come on – you can figure out a good way to spin this). Just don’t think that you’re doing team members a favor or taking it easy on them by not giving them anything to do.

If you’re a developer with this kind of time on your hands, talk to your manager and tell him or her how you feel. Formulate a plan for how to minimize or productively use this time early on, before it gets out of hand and makes you restless for CareerBuilder. Other people do get busy, understandably so, and it may require some gentle prodding to get yourself out of this position.

But regardless of who you are, I advocate that you do your best to provide some structure to unstructured time, both for yourself and for those around you. Working with a purpose is something that will make just about anyone happier than the alternative.

** As an aside, both Rummy and I left out a fourth option, thereby not closing the set. In his case, it would be the “unknown-known,” and in my case, it would be “unstructured-structured time.” In both cases, this assumes utter incompetence on the part of the first person in the story – an “unknown-known” would mean some obvious fact that was being missed or ignored. In the case of an organization, “unstructured-structured” time would mean that team members had effectively mutinied under incompetent management and created some sort of ad-hoc structure to get work done in spite of no official direction. This is a potentially fascinating topic that I may later come back to, although it’s a little far afield for the usual subject matter of this blog.

Multitasking that Actually Works – Suggestions Requested

There are a lot of ways in which you can do two or more things at once and succeed only in doing them all very poorly, so I’m always in search of ways to do multiple things at once and actually get value out of each of them. As a programmer with an ever-longer commute, a tendency to work well over 40 hours per week, and a wide range of interests, I find that my time is at a premium.

Two things that have come to be indispensable for me are listening to developer podcasts (.NET Rocks, Deep Fried Bytes, etc) while I drive and watching Pluralsight videos while I run on machines at the gym. Podcasts are made for audio consumption and I think probably invented more or less with commutes in mind, so this is kind of a no-brainer fit that infuses some learning and professional interest into otherwise dead time (although I might also start listening to fiction audio books, potentially including classic literature).

Watching Pluralsight while jogging is made possible by the advent of the nice smartphone and/or tablet, but it is a bit interesting nonetheless. I find it works best on elliptical machines, where I’m not bouncing much or making a ton of noise and can lay my phone sideways, facing me. I don’t think this is a workable strategy for jogging outdoors or even on a treadmill, so having a gym at my office with ellipticals is kind of essential for this.

These two productive examples of multi-tasking have inspired me to try to think of other ways to maximize my time. There are some important criteria, however. The multi-tasking must not detract significantly from either task so “catching up on sleep at work” and “watching TV while listening to the radio” don’t make the cut. Additionally, at least one task must be non-trivial, so “avoiding bad music while sleeping” also does not make the cut. And, finally, I’m not interested in tasks that depend on something being inefficient, so “catching up on my RSS reader while waiting for code to compile” is no good since what I ought to be doing is figuring out a way not to be blocked by excessive compile time (I realize one could make a philosophical argument about my commute being inefficient, but I’m asking for some non-rigorous leeway for case by case application here).

This actually isn’t trivial. Most tasks that are worth doing require the lion’s share of your attention, and juggling two or more often ensures that you do a haphazard job at all of them, and my life already seems sort of hyper-optimized. I work through lunch, I don’t sleep all that much, and I double up wherever I can. So additional ways to make this happen are real gems.

Anyone have additional suggestions or things that they do to make themselves more efficient? Your feedback is definitely welcome and solicited in comment form!