DaedTech

Stories about Software

Notes On Job Hopping: You Should Probably Job Hop

Last week, in a post that either broke the Google+ counter mechanism or blew up there in very isolated fashion, I talked about job hopping and meandered my way to my own personal conclusion as to whether it might be construed as unethical. I don’t think it is. Today I’d like to talk a bit about practical ramifications for the individual, as promised at the end of that post.

The title of this sounds like link bait, but that’s not really my goal. I wrote the post without giving it a title and then reread it. This title was the only thing that I could think to call it, as I realized, “wow, I guess I’m recommending that people job hop.”

A Scarlet J for Job Hopping

Conventional wisdom holds that there is some kind of sliding scale between loyal employee and job hopper and that you get into a bit of a danger zone if you move around too much. Everyone can really imagine the scenario: you’re in an interview and the interviewer awkwardly asks, “since you seem to switch jobs a lot, how do I know you’d stay here long enough for the hire to be worthwhile?”

And you’d fire a blank at this point, you job hopper, you. You’d be unable to convince this hiring authority to take a chance on you. And what’s worse is that this would be a common reaction, getting you turned down in interviews and probably even tossed from a number of resume piles in the first place. You have a scarlet J on your chest, and nothing but time will remove it.

If, on the other hand, you opt for the loyalty route and keep the number of job changes pretty minimal, you’ll have no trouble finding your next job. Without that scarlet J, the townsfolk are more than happy to give you an offer letter. They figure that if you’ve spent a decade at Initech, you’re likely to spend the next decade working for them.

And, really, what could be better for them? Everyone knows that turnover is expensive. There are training periods and natural inefficiencies to that concept; it’s just a question of how bad. If Bob and all of his tribal knowledge walk out the door, it can be a real problem for a group. So companies look for unfaithful employees, but not employees that are unfaithful too often — otherwise the awkward question arises: “if he’ll cheat on his old company with me, how do I know he won’t cheat on me with Initrode?”

Apparently, there’s a line to straddle here if your eye starts wandering. Job transitions are a finite resource, so you’d better make them count. But that’s not exactly a reason not to job hop so much as a reason not to do it too often. It’s the difference between having a few cold ones on the weekend and being Keith Richards, and I’ll come back to this point later. But, in the meantime, let’s look at some reasons not to change jobs.

Inertia Makes You Want to Stay

One of the biggest reasons people stay at a job is simple inertia. I’m listing that first because I suspect it’s probably the most common reason.

Even though a lot of people don’t exactly love their jobs, looking for a new job is a hassle. You have to go through the interview process, which can be absurd in and of itself. On top of that, it typically means awkward absences from work, fibbing to an employer, getting dressed up, keeping weird hours and putting in a lot of work. Job searching can be a full-time job in its own right, and the prospective job seeker already has one.

And if job searching weren’t enough of a hassle, there’s a whole slew of other issues as well. You’re trading what you know and feel comfortable with (often) for the unfamiliar. You’re leaving behind a crew of friends and associates. And you’re giving up your seniority to become the new guy. None of that is easy. Even if you decide in the abstract that you’re willing to do all of it, it’s likely that you’ll put it off a few more weeks or months — and keep putting it off.

Job Satisfaction Also Keeps You Around

Another common and more uplifting reason to stay at a position is a high degree of job satisfaction. I think this is less common than simple inertia, but it’s certainly out there.

Perhaps you spent the early part of your career doing help desk support and all you ever wanted was a shot at uninterrupted programming in an R&D outfit, and now you have that. Maybe you’ve always wanted to work for Microsoft or Facebook or something, and now you’re there.

You’d pass on more lucrative offers or offers of more responsibility simply because you really want to be doing what you’re doing, day in and day out. Genuinely loving one’s job and the work there is certainly a reason not to job hop.

And, Maybe Company Loyalty, But Probably Not These Days

I think that a decreasingly common reason for staying put is loyalty to a company. I observe this to a degree in the boomer set, but it’s not common among gen-Xers and is nearly nonexistent among millennials. This is a desire to stick it out and do right by a company instead of leaving it high and dry.

It may take the form of abstract loyalty to the company itself, or it may be loyalty to a boss and/or coworkers. (“I’d hate to leave now; they’d be so screwed if I took off that I can’t do it to them.”) I personally view this as a noble, albeit somewhat quixotic, sentiment, tinged with a form of spotlight effect bias. I think we tend to radically overvalue how high and dry an organization would be without our services. Businesses are remarkably good at lurching along for a while, even when understaffed or piloted by incompetents.

And Then, There Are the Golden Handcuffs To Keep You Around

Rounding out the field of reasons that I’ll mention is a more specialized and less sympathetic form of inertia (and perhaps even of loving your job): the golden handcuffs. You’re an Expert Beginner or the “residue” in the Dead Sea Effect, and your company drastically overvalues you both in terms of responsibility and pay. To put it bluntly, you stay because you have no choice — you have a relatively toxic codependent relationship with your employer.

There are certainly other conceivable reasons to stay at a job, but I think that you might loosely categorize them into these four buckets and cover the vast majority of rationales that people would cite. So if these are the reasons to stay, what are the reasons to go? Why does jumping from job to job make sense?

Or Should I Go? Let’s First Talk Money

First off, let’s talk money. If you stay in place at a run-of-the-mill job, what probably happens is that every year you get some kind of three percent-ish COLA (cost of living adjustment). Every five years or so, you get a promotion and a nice kick, like five to ten percent.

If, on the other hand, you move jobs, you get that five to ten percent kick (at least) each time you move. So let’s follow the trajectory of two people that start out making 40K per year out of college as programmers: one who hops every two years and one who stays loyal.

Let’s assume that the hopper doesn’t get COLAs because of when he’s hired at each position. We’ll just give him ten percent kicks every two years, while his loyal peer gets three percent COLAs every year as well as the ten percent promotion kicks. The loyal guy is making 61.3K at the end of ten years, while his job-hopping friend is making 64.4K. If we were to add in the COLAs for the hopper, because he timed it right, that balloons to 74.7K, which is more than 20 percent more than his friend.
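
If you want to sanity-check that arithmetic, here is a minimal sketch (mine, not from the original post) that reproduces the figures above. One assumption I had to make to land on the 61.3K figure: in the loyal employee’s promotion years, the ten percent kick replaces that year’s COLA rather than stacking on top of it.

    using System;

    // Loyal employee: 3% COLA each year, except years 5 and 10,
    // where a 10% promotion kick replaces the COLA (assumption made to match the post's figure).
    double loyal = 40_000;
    for (int year = 1; year <= 10; year++)
        loyal *= (year % 5 == 0) ? 1.10 : 1.03;

    // Hopper: a 10% bump every two years and no COLAs...
    double hopper = 40_000;
    // ...and the "well-timed" variant, which also picks up a 3% COLA in each off year.
    double timedHopper = 40_000;
    for (int year = 1; year <= 10; year++)
    {
        hopper *= (year % 2 == 0) ? 1.10 : 1.00;
        timedHopper *= (year % 2 == 0) ? 1.10 : 1.03;
    }

    Console.WriteLine($"Loyal:        {loyal:N0}");        // ~61,300
    Console.WriteLine($"Hopper:       {hopper:N0}");       // ~64,400
    Console.WriteLine($"Timed hopper: {timedHopper:N0}");  // ~74,700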

Neither of those salaries may seem huge, especially given all of the turmoil in the hopper’s life, but consider that for the rest of your career, there’s no bigger determining factor of your salary than your previous salary, and consider the principle of compound interest. Even assuming that after year ten both people in this thought experiment make the exact same career moves, the difference between their salaries and net worth will only continue to widen for the rest of their lives. It pays to job hop. Literally.

Career Advancement Is Also Better for Job Hopping

In fact, I might argue that the case I just made above is actually somewhat muted because of another job-hopping-related consideration: career advancement.

Before, we were just talking about what probably amounts to token promotions — the loyal guy was “software engineer III” after ten years, while his hopper friend was now “software engineer V.” But here’s another thing that happens when you hop: you tend to accumulate more impressive-sounding titles, kicking off a chicken-and-egg kind of scenario about qualification and title. What I mean is that your title doesn’t just increment a number with each job shift; you start to rack up qualifiers like “senior” or “lead” or “principal.”

So now our two subjects are a software engineer making 60K and a lead software engineer making 75K, respectively. Which one of these do you think is likely to get promoted to management or architect first? Done right, job hopping earns you better pay and better titles, which earn you still more pay.

Job Hopping Lets You Control Improvement of Your Circumstances

Related to this skipping around for better circumstances is the middling corporate narrative that the job hopper is escaping — specifically one of “dues paying.” For a bit of background on this concept as it relates to programmers, take a read through section 5, “Career Development,” in Michael O. Church’s post (since deleted) about what programmers want. Dues-paying cultures are ones in which advancement is determined not by merit but by some kind of predetermined average and set of canned expectations.

For instance, if you hear things from companies like, “we don’t hire lead architects — we only promote from within to that position,” or, “we don’t offer development promotions more than once every three years,” you have a dues-paying culture on your hands. Call it what you will, but this is essentially a policy designed to prevent the mediocre, tenured natives from getting restless.

It seems insanely childish and petty, yes. But I have personally witnessed plenty of cases where person X with ten years of time with a company had a hissy fit because someone got to engineer level IV in three years when it took person X four years to get there. Enough tantrums like that and promotion governors are slapped on the engine of advancement at the company, and dues-paying periods become official.

But this need not concern the job hopper, who won’t be around long enough to play this game anyway. This is a win on two fronts for him. It’s an obvious win because he promotes himself within a year or two instead of waiting three or four for the HR matrix to catch up. He also avoids the endowment effect of earning his way past the dues-paying rope.

In other words, if he did stay around long enough to ‘earn’ the coveted lead architect title, he’d overvalue it and stick around even longer to savor it because he’d be thinking subconsciously, “being a lead architect at this awesome company is truly amazing or else I wouldn’t have twiddled my thumbs all these years waiting for it.” He’d be a lot more likely to stick around for the even more coveted (at that organization) “super lead architect” position that one can only ‘earn’ after another four years.

Job Hoppers Don’t Have to Deal as Much or as Long with Expert Beginners

Speaking of empty titles, there is another powerful argument in favor of job hopping: avoiding and minimizing interaction with Expert Beginners. On the more obvious front, jumping around helps you avoid becoming an Expert Beginner since you can’t build seniority capital of questionable value to use in lieu of well-reasoned arguments or genuine skill. If you’re bouncing around every year or two, you can’t start arguments with “I’ve been here for 20 years, and blah, blah, blah.”

But a willingness to job hop also provides you with an exit strategy for being confronted with Expert Beginners. If you start at a place and find some weird internal framework or a nasty, amorphous blob of architecture and the ranking ‘experts’ don’t seem to see it as a problem, you can just move on. Your stays will be longer at places that lack Expert Beginnerism in their cultures and shorter at places with particularly nasty or dense Expert Beginners.

But whatever the case may be, you as a job hopper will be the water evaporating in Bruce Webster’s metaphor, refusing to put up with organizational stupidity.

And Job Hoppers Avoid Stagnation

And putting up with organizational stupidity is, in fact, something of a career hazard. Job hopping gives you a sort of career cross-pollination that hanging around at the same place for 20 years does not, which makes you a lot more marketable.

If you work somewhere that has the “Enterprise Framework,” it’s likely you’ll spend years getting to know and understand how some weird, proprietary, tangled mess of code works in an incredible amount of detail. But in the market at large, this knowledge is nearly useless. It holds value only within that organization.

And, what’s more, if you have a sunk intellectual property cost at an organization in some gargantuan system written in, say, Java, you’re going to be pretty unlikely to pick up new languages and frameworks as they come along. Your career will be frozen in amber while you work at such a place. There are certain types of organizations, such as consultancies, where this is minimized. But if you doubt the effect, ask yourself how many people out there are cranking out stuff in Winforms or even VB6.

A Recap of the Pros and Cons

We can summarize the pro arguments for job hopping as money; advancement; avoiding mediocre, dues-paying cultures; avoiding Expert Beginners and organizational stupidity; and being marketable.

On the other side, we can summarize the con arguments for job hopping by tagging them as inertia, satisfaction, loyalty and golden handcuffs.

You’ll notice that I didn’t mention the stigma in either category, even though that’s ostensibly a clear negative. (I will return once again to the stigma angle in the third and final post in this series that addresses the future of job hopping). This is because I view the stigma as neutral and a simple matter of market realpolitik.

The Stigma Doesn’t Really Matter

When are you most likely to be branded with the scarlet J — scratch that — when is the only time that you’ll be branded with that scarlet J? Obviously, while you’re applying and interviewing.

You’ll be working at Initech and considering a switch to Initrode, and Initrode takes a pass on you because you seem to skip around too much. So you just keep working at Initech and put in another year or two to let the stigma fade.

As long as you don’t quit (or get laid off) without something else lined up, the job-hopper stigma really doesn’t matter. It happens when it happens, and you actually have a peek-ahead option to find out that it’s about to happen but without dire consequences (again, assuming you aren’t laid off and are generally competent).

And really, this makes a certain kind of sense. I have, in the past, been told to stay put in a situation I didn’t like for fear of acquiring a scarlet J.

People were advising me to stay in a situation in which I was unhappy because, if I got out of that situation, I might later be unhappy again and this time unable to move. Or, in other words, I should remain definitely unhappy now to avoid possibly being unhappy later. That strikes me as akin to sitting at home with a 105-degree fever because the ambulance might crash on the way to the hospital and put my health in jeopardy. The stigma argument seems actually to be something of a non-starter since, if it happens, you can just wait it out.

So, Should You Job Hop?

So, on to the million dollar question: should you job hop? Unless you’re happy where you are, I don’t see how the answer can be anything but “yes.”

The “no” arguments all involve personal valuations — with the exception of “golden handcuffs,” which really just means that job hopping is impossible because you missed that boat. Are you too busy with family to conduct a job search? Do you really like your coworkers and working environment? Do you love the work that you’re doing right now? And do you really love the company? I can’t lobby for personal decisions like that in people’s lives, and there is certainly more to life than career advancement, money, and responsibility.

But in terms of objective considerations like money, position and title, there’s really no argument. Job hopping will advance you more quickly through the ranks to better titles, paychecks, and career opportunities in general. You will have more breadth of experience, more industry contacts, more marketable skills, and more frank and honest valuations of the worth of your labor.

Companies are generally optimized to minimize turnover, and if you want a path that differs from steady, slow, measured advancement, staying in one place isn’t in your best interests. Should you job hop? I say absolutely — as often as your personal life and happiness will allow, and as long as you manage to avoid the scarlet J.

I’d imagine that at some points in your career you’ll settle in for a longer stay than others, and perhaps eventually you’ll find a calling to ride out the rest of your career in a place. But I think that you ought to spend your career always ready to trade up or to change your scenery as often as necessary to keep you moving toward your goals, whatever they may be.

YAGNI: YAGNI

A while back, I wrote a post in which I talked about offering too much code and too many options to other potential users of your code. A discussion emerged in the comments roughly surrounding the merits of the design heuristic affectionately known as “YAGNI,” or “you ain’t gonna need it.”

In a vacuum, this aphorism seems to be the kind of advice you give a chronic hoarder: “come on, just throw out 12 of those combs — it’s not like you’re going to need them.” So what does it mean, exactly, in the context of programming, and is it good advice in general? After all, if you applied the advice that you’d give a hoarder to someone who wasn’t one, you might be telling them to throw out their only comb, which wouldn’t make a lot of sense.

The Motivation for YAGNI

As best I understand, the YAGNI principle is one of the tenets of Extreme Programming (XP), an agile development approach that emerged in the 1990s and stood in stark contrast to the so-called “waterfall” or big-design-up-front approach to software projects. One of the core principles of the (generally unsuccessful) waterfall approach is a quixotic attempt to figure out just about every detail of the code to be written before writing it, and then, afterward, to actually write the code.

I personally believe this silliness is the result of misguided attempts to mimic the behavior of an Industrial Revolution-style assembly line in which the engineers (generally software architects) do all the thinking and planning so that the mindless drones putting the pieces together (developers) don’t have to hurt their brains thinking. Obviously, in an industry of knowledge workers, this is silly … but I digress.

YAGNI as a concept seems well suited to address the problematic tendency of waterfall development to generate massive amounts of useless code and other artifacts. Instead, with YAGNI (and other agile principles) you deliver features early, using the simplest possible implementation, and then you add more and refactor as you go.

YAGNI on a Smaller Scale

But YAGNI has a smaller-scale design component as well, which is evident in my earlier post. Some developers have a tendency to code up classes and add extra methods that they think might be useful to themselves or others at some later point in time. This is often referred to as “gold plating,” and I think the behavior is based largely on the fact that this is often a good idea in day-to-day life.

“As long as I’m changing the lightbulb in this ceiling lamp, I may as well dust and do a little cleaning while I’m up here.”

Or perhaps:

“As long as I’m cooking tonight, I might as well make extra so that I have leftovers and can save time tomorrow.”

But the devil is in the details with speculative value propositions. In the first example, clear value is being provided with the extra task (cleaning). In the second, the value is speculative, but the odds of realization are high and the marginal cost is low. If you’re going to the effort of making tuna casserole, scaling up the ingredients is negligible in terms of effort and provides a very likely time savings tomorrow.

But doesn’t that apply to code? I mean, adding that GetFoo() method will take only a second and it might be useful later.

Well, consider that the planning lifespan for code is different than it is for the casserole. With the casserole, you have a definite timeframe in mind — some time in the next few nights — whereas with code, you do not. The timeframe is “down the line.” In the meantime, that code sits in the fridge of your application, aging poorly as people forget its intended purpose.

You wind up with a code-fridge full of containers with goop in them that doesn’t exactly smell bad, but isn’t labeled with a date and you aren’t sure of its origin. And I don’t know about you, but if I were to encounter a situation like that, the reaction would be “I don’t feel like risking it, I’m ordering Chinese.” And, “nope, I’m out of here,” isn’t a good feeling to have when you open your application’s code.
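
To make the small-scale version concrete, here is a hypothetical sketch (all class and member names invented for illustration) of a “gold plated” class next to the version YAGNI would have you write. The speculative members are the unlabeled goop in the fridge: nothing calls them yet, but everyone who reads the class has to reason about them.

    using System.Collections.Generic;

    public class Customer { public int Id; public string Name; }

    // The "gold plated" version: one method callers actually need today,
    // plus speculative extras added "because they might be useful later."
    public class GoldPlatedCustomerRepository
    {
        private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

        public Customer GetById(int id) => _store.TryGetValue(id, out var c) ? c : null;

        // Nothing calls any of these yet.
        public Customer GetFoo() => null;
        public IEnumerable<Customer> GetByNamePrefix(string prefix) { yield break; }
        public void ExportAllToXml(string path) { /* someday, maybe */ }
    }

    // The YAGNI version: only what is needed right now; add more when (and if) a caller appears.
    public class LeanCustomerRepository
    {
        private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

        public Customer GetById(int id) => _store.TryGetValue(id, out var c) ? c : null;
    }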

Does YAGNI Always Apply?

So there is an overriding design principle, YAGNI, and there is a more localized version — reactions, respectively, to a tendency to comically over-plan and a tendency to gold plate locally. But does this advice always hold up, reactions notwithstanding?

I’m not so sure. I mean, let’s say that you’re starting on a new .NET web application and you need to test that it displays “hello world.” The simplest thing that could work is F5 and inspection. But now let’s say that you have to give it to a tester. The simplest thing is to take the compiled output and manually copy it to a server. That’s certainly simpler than setting up some kind of automated publish or continuous deployment scenario. And now you’re in sort of a loop, because at what point is this XCopy deploy ever not going to be the simplest thing that could work for deploying? What’s the tipping point?

Getting Away from Sloganeering

Now I’m sure that someone is primed to comment that it’s just a matter of how you define requirements and how stringent you are about quality. “Well, it’s the simplest thing that could possibly work but keeping the overall goals in mind of the project and assuming a baseline of quality and” — gotta cut you off there, because that’s not YAGNI. That’s WITSTTCPWBKTOGIMOTPAAABOQA, and that’s a mouthful.

It’s a mouthful because there’s suddenly nuance.

The software development world is full of metaphorical hoarders, to go back to the theme of the first paragraph. They’re serial planners and pleasers who want a fridge full of every imaginable code casserole they or their guests might ever need. YAGNI is a great mantra for these people and for snapping them out of it. When you sit down to pair or work with someone who asks you for advice on some method, it’s a good reality check to say, “dude, YAGNI, so stop writing it.”

But once that person gets YAGNI — really groks it by adopting a practice like TDD or by developing a knack for getting things working and refactoring to refinement — there are diminishing returns to this piece of advice. While it might still occasionally serve as a reminder or wake-up call, it starts to become a possibly counterproductive oversimplification. I’d say that this is a great, pithy phrase to tack up on the wall of your shop as a reminder, especially if there’s been an over-planning and gold-plating trend historically. But if you do that, beware local maxima.

Let’s Take This Thing Apart

My garbage disposal stopped working recently. I turned it on and could hear only the strained hum of the motor, so naturally I proceeded straight to the internet. I usually do that to see what others think of a situation, though my guess was that something was jamming up the works and preventing the motor from spinning. I went and got a sturdy piece of wood from the scrap pile in my work room and tried to force the disposal to turn, but no dice. That was a few days ago.

Tonight, I decided I’d had enough of using a strainer to gather up food debris and throw it away every day or two. When I had trouble finding work out of college during the dot-com bubble burst, I scraped by and lived in a tiny studio apartment in Chicago with no garbage disposal and one of those freezers that would frost over with ice. I care to repeat nothing of that experience. So I killed the power to the disposal at the circuit breaker and started popping off the plumbing and electrical connections. I hauled the thing down to my basement and set to work on it. As I dealt with the tedious flat-head screw removal and O-ring jockeying involved, my mind began to wander a bit.

One thing that ran through my head was that it had never occurred to me to call someone about this problem, and I started to wonder why not. I’ve never taken apart a garbage disposal before, and I don’t have the extra PVC pipe lying around to hook up as a replacement for the disposal in a worst-case scenario. If I had failed, there would have been no kitchen sink until I succeeded (which would be at least a few days, since I’m at work tomorrow and then taking a brief trip out of town). What’s my deal that I’d just say, “well, let’s start popping screws off and see what happens?”

What I realized is this: I think this mindset is at the core of the engineering mind and drives at what makes programmers successful (and/or driven workaholics). There’s a desire to deconstruct and to understand the core functionality of things. There’s a desire to look at things logically and to understand the procedures for disassembling and reassembling them. There’s a desire to know how the world works, and furthermore there’s a bit of underlying confidence that, when it comes to operating in that world, you’ll figure something out.

It’s my opinion that when you look at code, it’s important to think that way as well. Why is this framework call slow? Well, let’s disassemble it and find out. Is this method call necessary? Let’s delete it and see what breaks. Can we improve the structure here? Let’s roll up our sleeves and start refactoring. My attitude toward my garbage disposal is the same as my attitude toward code, and I think it’s a decent operating paradigm. Rather than calling in someone when I hit a snag, I use it as a learning opportunity, put in some extra hours, get it done and improve my capabilities. I did that tonight. I’d chiseled out a stuck piece of plastic and hooked everything up for a working disposal by 1 AM. I do things like that with code all the time (often at similar hours).

Of course, there’s a dark side to this. If while working on my plumbing I bend a supply line and somehow cause a leak back behind the drywall, I have pretty big problems. If I have to cut off water to the house and then go to work, it might be “get a hotel room and call a contractor” time. If you don’t know when you’re outgunned by life, you run the risk of going from confident and inquisitive to foolish or at least a bit reckless. I’ve probably crossed that line a few times, particularly in code when I decide I have no use for some legacy namespace and I’ll just rewrite the whole thing myself over the weekend. But I think that, on the whole, if you take the “let’s pull this thing apart and see where it leads us” approach to programming, you’ll probably have a pretty successful career.

Notes on Job Hopping: Millennials and Their Ethics

For those who have been reading my more recent posts, which have typically been about broad-level software design or architecture concerns, I should probably issue a rant alert. This is somewhat of a meandering odyssey through the subject of the current prevalence of job hopping, particularly among the so-called millennial generation.

I thought I might take a whack today at this rather under-discussed subject in the field of software development. It’s not that I think the subject is particularly taboo, especially when discussed in blog comments or discussion forums as opposed to with one’s employer. I just think the more common approach to this subject is sort of quietly to pretend that it applies to people abstractly and not to anyone participating in a given conversation. This is the same way one might approach discussing the “moral degradation of society” — it’s a thing that happens in general, but few people look immediately around themselves and start assigning blame.

So what of job hopping and the job hopper? Is the practice as career threatening as ever it was, or is viewing it that way a throwback to a rapidly dying age in the time of Developernomics and the developer as “king”? Is jumping around a good way to move up rapidly in title and pay, or is it living on borrowed time during an intense boom cycle in the demand for software development? Are we in a bubble whose bursting could leave the job hoppers among us as the people left standing without a chair when the music stops?

Before considering those questions, however, the ethics of job hopping bears some consideration. If society tends to view job hopping as an unethical practice, then the question of whether it’s a good idea or not becomes somewhat akin to the question of whether cheating on midterms in college is a good idea or not. If you do it and get away with it, the outcome is advantageous. Whether you can live with yourself or not is another matter. But is this a good comparison? Is job hopping similar to cheating?

To answer that question, I’d like to take a rather indirect route. I think it’s going to be necessary to take a brief foray into human history to see how we’ve arrived at the point that the so-called “millennials,” the generation of people age 35 and younger or thereabouts, are the motor that drives the software development world. I’ve seen the millennials called the “me generation,” but I’ve also seen that label applied to baby boomers as well. I’d venture a guess that pretty much every generation in human history has muttered angrily about the next generation in this fashion shortly after screaming at people to leave their collective lawn. “They’re all a bunch of self-involved, always on our lawn, narcissist, blah, blah, blah, ratcha-fratcha kids these days…”

It’s as uninventive as it is emblematic of sweeping generalizations, and if this sort of tiresome rhetoric were trotted out about a gender or racial demographic rather than an age-based one, the speaker would be roundly dismissed as a knuckle-dragging crank. But beneath the vacuous stereotyping and “us versus them” generational pissing matches lie some real and interesting shifting ethical trends and philosophies. And these are the key to understanding the fascinating and subtle shifts in both generational and general outlook toward employment.

Throughout most of human history, choice (about much of anything) was the province of the rich. Even in a relatively progressive society, such as ancient Greece, democracy was all well and good for land-owning, wealthy males. But everyone else was kind of out in the cold. People hunted and farmed, worked as soldiers and artisans, and did any number of things in a world where station in life was largely determined by pragmatism, birth, and a lack of specialization of labor. And so it went pre-Industrial Revolution. Unless you were fortunate enough to be a noble or a man of wisdom, most of your life was pretty well set in place: childhood, apprenticeship/labor, marriage, parenthood, etc.

Even with the Industrial Revolution, things got different more than they got better for the proles. The cycle of “birth-labor-marriage-labor-parenthood-labor-death” just moved indoors. Serfs graduated to wage slaves, but it didn’t afford them a lot of leisure time or social mobility. As time marched onward, things improved in fits and starts from a labor-specialization perspective, but it wasn’t until a couple of world wars took place that the stars aligned for a free-will sea change.

Politics, technology, and the unionized collective bargaining movement ushered in an interesting time of post-war boom and prosperity following World War II. A generation of people returned from wars, bought cars, moved to suburbs, and created a middle class free from the share-cropping-reminiscent, serf-like conditions that had reigned throughout human history. As they did all of this, they married young, had lots of children, settled down in a regular job and basically did as their parents had as a matter of tradition.

And why not? Cargo cult is what we do. Millions of people don’t currently eat shellfish and certain kinds of meat because doing so thousands of years ago killed people, and religious significance was ascribed to this phenomenon. A lot of our attitudes toward human sexuality were forged in the fires of Medieval outbreaks of syphilis. Even the “early to bed, early to rise” mantra and summer breaks for children so ingrained in our cultures are just vestigial throwbacks to years gone by when most people were farmers. We establish practices that are pragmatic. Then we keep doing them just because.

But the WWII veterans gave birth to a generation that came of age during the 1960s. And, as just about every generation does, this generation began superficially to question the traditions of the last generation while continuing generally to follow them. These baby boomers staged an impressive series of concerts and protests, affected real social policy changes, and then settled back into the comfortable and traditional arrangements known to all generations. But they did so with an important difference: they were the first generation forged in the fires of awareness of first-world, modern choice.

What I mean by that is that for the entirety of human history, people’s lots in life were relatively predetermined. Things like work, marriages, and having lots of children were practical necessities. This only stopped being true for the masses during the post-WWII boom. The “greatest generation” was the first generation that had choice, but the boomers were the first generation to figure out that they had choice. But figuring things like that out doesn’t really go smoothly because of the grip that tradition holds over our instinctive brains.

So the boomers had the luxury of choice and the knowledge of it, to an extent. But the old habits died hard. The expression of that choice was alive in the 1960s and then gradually ran out of steam in the 1970s. Boomers rejected the traditions and trappings of recorded human history, but then, by the 80s, they came around. By and large, they were monogamous parents working steady jobs, in spite of the fact that this arrangement was now purely one of comfort rather than necessity. They could job hop, stay single, and have no children if they chose, and they wouldn’t be adversely affected in the way a farmer would have in any time but modernity.

But even as they were settling down and seeing the light from a traditional perspective, a kind of disillusionment set in. Life is a lot harder in most ways when you don’t have choices about your fate, but strangely easier in others. Once you’re acing the bottom levels of Maslow’s Hierarchy, it becomes a lot easier to think, “if only I had dated more,” or, “I’m fifty and I’ve given half my life to this company.” And, in the modern age of choices, the boomers had the power to do something about it. And so they did.

In their personal lives, they called it quits and left their spouses. In the working world, they embarked on a quest of deregulation and upheaval. In the middle of the 20th century, the corporation had replaced the small town as the tribal unit of collective identity, as described in The Organization Man. The concept of company loyalty and even existential consistency went out the window as mergers and acquisitions replaced blue chip stocks. The boomers became the “generation of divorce.” Grappling with tradition on one side and choice on the other, they tried to serve both masters and failed with gritty and often tragic consequences.

And so the millennials were the children of this experience. They watched their parents suffer through messy divorces in their personal lives and in their professional lives. Companies to which their parents had given their best years laid them off with a few months of severance and a pat on the butt. Or perhaps their parents were the ones doing the laying off — buying up companies, parceling them up and moving the pieces around. Whether personal or corporate, these divorces were sometimes no-fault, and sometimes all-fault. But they were all the product of heretofore unfamiliar amounts of personal choice and personal freedom. Never before in human history had so many people said, “You know what, I just figured out after 30 years that this isn’t working. So screw it, I’m out of here.”

So returning to the present, I find the notion that millennials harbor feelings of entitlement or narcissism to be preposterous on its face. Millennials don’t feel entitlement — they feel skepticism. They hesitate to commit, and when they do, they commit lightly and make contingency plans. They live with their parents longer rather than committing to the long-term obligation of a mortgage or even a lease. They wait until they’re older to marry and have children rather than wasting their time and affections on starter spouses and doomed relationships. And they job hop. They leave you before you can leave them, which, as we both know, you will sooner or later.

That generally doesn’t sit well with the older generation for the same reasons that the younger generation’s behavior never sits well with the older one. The older generation thinks, “man, I had to go through 20 years of misery before I figured out that I hated my job and your mother, so who are you to think you’re too good for that?” It was probably the same way their parents got angry at them for going to Woodstock instead of settling down and working on the General Motors assembly line right out of high school. Who were they to go out cavorting at concerts when their parents had already been raising a family after fighting in a war at their age?

So we can circle back around to the original questions by dismissing the “millenials are spoiled” canard as a reason to consider modern job hopping unethical. Generational stereotyping won’t cut it. Instead, one has to consider whether some kind of violation of an implied quid pro quo happens. Do job hoppers welch on their end of a bargain, leaving a company that would have stayed loyal to them were the tables turned? I think you’d be hard pressed to make that case. Individuals are capable of loyalty, but organizations are capable of only manufactured and empty bureaucratic loyalty, the logical outcome of which is the kind of tenure policies that organized labor outfits wield like cudgels to shield workers from their own incompetence. Organizations can only be forced into loyalty at metaphorical gunpoint.

Setting aside both the generational ad hominem and the notion that job hopping is somehow unfair to companies, I can only personally conclude that there is nothing unethical about it and that the consideration of whether or not to job hop is purely pragmatic. And really, what else could be concluded? I don’t think that much of anyone would make the case that leaving an organization to pursue a start-up or move across the country is unethical, so the difference between “job leaver” and “job hopper” becomes purely a grayscale matter of degrees.

With the ethics question in the books on my end, I’ll return next time around to discuss the practical ramifications for individuals, as well as the broader picture and what I think it means for organizations and the field of software development going forward. I’ll talk about the concept of free agency, developer cooperation arrangements, and other sorts of free-wheeling speculation about the future.

Moving Away from State: State–

In 1968, Edsger Dijkstra issued a letter entitled “Go To Statement Considered Harmful,” and the age of structured programming was born. The letter was a call to stop programmers of the time from creating ad-hoc control flow structures using goto statements and to instead use higher-level constructs for manipulating flow through a function (contrary, I think, to the oft-attributed position that “goto is evil”). This gave rise to structured programming because it meant that progress in a method would be more visually trackable.

But I think there’s an interesting underlying concept here that informs a lot of shifts in programming practice. Specifically, I’m referring to the idea that Dijkstra conceived of “goto” as existing at an inappropriate level of abstraction when stacked with concepts like “if” and “while” and “case.” The latter are elements of logical human reasoning, while the former is a matter of procedure for a compiler. Or, to put it another way, the control flow statements tell humans how to read business logic, and the goto tells the compiler how to execute a program.
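
To illustrate the mismatch with a made-up example (C# still supports goto, so both versions compile), here is the same “try up to three times” logic written with a jump and then with a structured loop. The logic is identical; only one of them reads as human reasoning rather than compiler bookkeeping.

    using System;

    class GotoVersusStructured
    {
        static readonly Random Rng = new Random();

        // A stand-in operation that sometimes fails, purely for illustration.
        static bool TryOperation() => Rng.Next(2) == 0;

        static void Main()
        {
            // goto version: the reader reassembles the loop from a label and a jump --
            // instructions to the compiler rather than a statement of intent.
            int attempts = 0;
        Retry:
            attempts++;
            if (!TryOperation() && attempts < 3)
                goto Retry;

            // Structured version: the same logic, but the shape of the reasoning
            // ("try up to three times, stop early on success") is visible at a glance.
            bool succeeded = false;
            for (int i = 0; i < 3 && !succeeded; i++)
                succeeded = TryOperation();

            Console.WriteLine($"goto attempts: {attempts}, structured succeeded: {succeeded}");
        }
    }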

“State Considered Harmful” seems to be the new trend, ushering in the renaissance of functional programming. Functional programming itself is not a new concept. Lisp has been around for half a century, meaning that it actually predates object-oriented programming. Life always seems to be full of cycles, and this is certainly an example. But there’s more to it than people seeking new solutions in old ideas. The new frontier in faster processing and better computer performance is parallel processing. We can’t fit a whole lot more transistors on a chip, but we can fit more chips in a computer — and design schemes to split the work between them. And in order to do that successfully, it’s necessary to minimize the amount of temporarily stored information, or state.

I’ve found myself headed in that direction almost subconsciously. A lot of the handiest tools push you that way. I’ve always loved the fluent Linq methods in C#, and those generally serve as a relatively painless introduction to functional programming. You find yourself gravitating away from nested loops and local variables in favor of chained calls that express semantically what you want in only a line or two of code. But gravitating toward functional programming style goes beyond just using something like Linq, and it involves favoring chains of methods in which the output is a pure, side-effect-free function of the input.
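
As a made-up example of that gravitation (the class and method names here are invented, not from the post): finding the names of customers with overdue invoices, written first with nested loops and a mutable accumulator, and then as a chain of side-effect-free Linq calls whose output is purely a function of the input.

    using System.Collections.Generic;
    using System.Linq;

    class Invoice { public bool Overdue; }
    class Customer { public string Name; public List<Invoice> Invoices = new List<Invoice>(); }

    static class OverdueReport
    {
        // Imperative style: nested loops and a mutable accumulator.
        static List<string> OverdueNamesImperative(List<Customer> customers)
        {
            var result = new List<string>();
            foreach (var customer in customers)
            {
                foreach (var invoice in customer.Invoices)
                {
                    if (invoice.Overdue)
                    {
                        result.Add(customer.Name);
                        break;
                    }
                }
            }
            return result;
        }

        // Functional style: a chain of calls that states what we want,
        // with no local state mutated along the way.
        static List<string> OverdueNamesLinq(IEnumerable<Customer> customers) =>
            customers.Where(c => c.Invoices.Any(i => i.Overdue))
                     .Select(c => c.Name)
                     .ToList();
    }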

Here are some of the stops on my own journey away from state.

  1. Elimination of global state. The first step in this journey for me was to realize how odious global state is for the maintainability of an application.
  2. No state variables for communication between methods. Next to go for me were ‘flag’ variables that kept track of things in a class between method calls. As I became more proficient in unit testing, I found this to be a huge headache as it created complicated and brittle test setup, and it was really a pointless crutch anyway.
  3. Immutable > mutable. I’ve blogged about an idea I called “pointless mutability,” but in general I’ve come to favor immutable constructs over mutable ones whenever possible for simplicity (see the sketch after this list).
  4. State isolation — for instance, model objects and domain objects for business logic state and viewmodels/controllers for GUI state. Aside from that, a lot of applications for which I am the architect retain virtually no state information. Services and other application scaffolding types of classes simply have interfaced references to their collaborators, but really keep track of nothing between method calls.
  5. Persistence ignorance — letting storage serve as the application’s state. For less sophisticated (CRUD-style) applications, I’ve favored scenarios in which most of the code’s state is abstracted into a lower layer of the application and kept only externally. In other words, if things are simple, let something like a database or file system be your application’s state. Why cache and over-complicate until/unless performance is an issue?
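
As a small, hypothetical sketch of item 3 (the type and its members are invented for illustration): instead of an object whose fields anyone can change after construction, a value type whose “modifications” return new instances, so instances can be shared freely and there is nothing to track between calls.

    // An immutable value type: fields are set once in the constructor,
    // and "changing" a value produces a new instance rather than mutating this one.
    public sealed class Money
    {
        public decimal Amount { get; }
        public string Currency { get; }

        public Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }

        public Money Add(Money other)
        {
            if (other.Currency != Currency)
                throw new System.InvalidOperationException("Currency mismatch.");
            return new Money(Amount + other.Amount, Currency);
        }
    }

    // Usage: no in-place state changes, so nothing to reason about between calls.
    // var subtotal = new Money(40m, "USD");
    // var total = subtotal.Add(new Money(2.50m, "USD"));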

And that’s where I stand now as I write object-oriented code. I am interested in diving more into functional languages, as I’ve only played with them here and there since my undergrad days. It isn’t so much that they’re the new hotness as it is that I find myself heading that way anyway. And if I’m going to do it, I might as well do it consciously and in directed fashion. If and when I do get to do more playing, you can definitely bet that I’ll post about it.