DaedTech

Stories about Software


Notes on Job Hopping: Millennials and Their Ethics

For those who have been reading my more recent posts, which have typically been about broad-level software design or architecture concerns, I should probably issue a rant alert. This is somewhat of a meandering odyssey through the subject of the current prevalence of job hopping, particularly among the so-called millennial generation.

I thought I might take a whack today at this rather under-discussed subject in the field of software development. It’s not that I think the subject is particularly taboo, especially when discussed in blog comments or discussion forums as opposed to with one’s employer. I just think the more common approach to this subject is sort of quietly to pretend that it applies to people abstractly and not to anyone participating in a given conversation. This is the same way one might approach discussing the “moral degradation of society” — it’s a thing that happens in general, but few people look immediately around themselves and start assigning blame.

So what of job hopping and the job hopper? Is the practice as career-threatening as it ever was, or is viewing it that way a throwback to a rapidly dying age in the time of Developernomics and the developer as “king”? Is jumping around a good way to move up rapidly in title and pay, or is it living on borrowed time during an intense boom cycle in the demand for software development? Are we in a bubble whose bursting could leave the job hoppers among us as the people left standing without a chair when the music stops?

Before considering those questions, however, the ethics of job hopping bears some consideration. If society tends to view job hopping as an unethical practice, then the question of whether it’s a good idea or not becomes somewhat akin to the question of whether cheating on midterms in college is a good idea or not. If you do it and get away with it, the outcome is advantageous. Whether you can live with yourself or not is another matter. But is this a good comparison? Is job hopping similar to cheating?

To answer that question, I’d like to take a rather indirect route. I think it’s going to be necessary to take a brief foray into human history to see how we’ve arrived at the point that the so-called “millennials,” the generation of people age 35 and younger or thereabouts, are the motor that drives the software development world. I’ve seen the millennials called the “me generation,” but I’ve also seen that label applied to baby boomers as well. I’d venture a guess that pretty much every generation in human history has muttered angrily about the next generation in this fashion shortly after screaming at people to leave their collective lawn. “They’re all a bunch of self-involved, always on our lawn, narcissist, blah, blah, blah, ratcha-fratcha kids these days…”

It’s as uninventive as it is emblematic of sweeping generalizations, and if this sort of tiresome rhetoric were trotted out about a gender or racial demographic rather than an age-based one, the speaker would be roundly dismissed as a knuckle-dragging crank. But beneath the vacuous stereotyping and “us versus them” generational pissing matches lie some real and interesting shifting ethical trends and philosophies. And these are the key to understanding the fascinating and subtle shifts in both generational and general outlook toward employment.

Throughout most of human history, choice (about much of anything) was the province of the rich. Even in a relatively progressive society, such as ancient Greece, democracy was all well and good for land-owning, wealthy males. But everyone else was kind of out in the cold. People hunted and farmed, worked as soldiers and artisans, and did any number of things in a world where station in life was largely determined by pragmatism, birth, and a lack of specialization of labor. And so it went pre-Industrial Revolution. Unless you were fortunate enough to be a noble or a man of wisdom, most of your life was pretty well set in place: childhood, apprenticeship/labor, marriage, parenthood, etc.

Even with the Industrial Revolution, things got different more than they got better for the proles. The cycle of “birth-labor-marriage-labor-parenthood-labor-death” just moved indoors. Serfs graduated to wage slaves, but it didn’t afford them a lot of leisure time or social mobility. As time marched onward, things improved in fits and starts from a labor-specialization perspective, but it wasn’t until a couple of world wars took place that the stars aligned for a free-will sea change.

Politics, technology, and the unionized collective bargaining movement ushered in an interesting time of post-war boom and prosperity following World War II. A generation of people returned from wars, bought cars, moved to suburbs, and created a middle class free from the share-cropping-reminiscent, serf-like conditions that had reigned throughout human history. As they did all of this, they married young, had lots of children, settled down in regular jobs, and basically did as their parents had done, as a matter of tradition.

And why not? Cargo cult is what we do. Millions of people don’t currently eat shellfish and certain kinds of meat because doing so thousands of years ago killed people, and religious significance was ascribed to this phenomenon. A lot of our attitudes toward human sexuality were forged in the fires of Medieval outbreaks of syphilis. Even the “early to bed, early to rise” mantra and summer breaks for children so ingrained in our cultures are just vestigial throwbacks to years gone by when most people were farmers. We establish practices that are pragmatic. Then we keep doing them just because.

But the WWII veterans gave birth to a generation that came of age during the 1960s. And, as just about every generation does, this generation began superficially to question the traditions of the last generation while continuing generally to follow them. These baby boomers staged an impressive series of concerts and protests, affected real social policy changes, and then settled back into the comfortable and traditional arrangements known to all generations. But they did so with an important difference: they were the first generation forged in the fires of awareness of first-world, modern choice.

What I mean by that is that for the entirety of human history, people’s lots in life were relatively predetermined. Things like work, marriages, and having lots of children were practical necessities. This only stopped being true for the masses during the post-WWII boom. The “greatest generation” was the first generation that had choice, but the boomers were the first generation to figure out that they had choice. But figuring things like that out doesn’t really go smoothly because of the grip that tradition holds over our instinctive brains.

So the boomers had the luxury of choice and the knowledge of it, to an extent. But the old habits died hard. The expression of that choice was alive in the 1960s and then gradually ran out of steam in the 1970s. Boomers rejected the traditions and trappings of recorded human history, but then, by the 80s, they came around. By and large, they were monogamous parents working steady jobs, in spite of the fact that this arrangement was now purely one of comfort rather than necessity. They could job hop, stay single, and have no children if they chose, and they wouldn’t be adversely affected in the way a farmer would have in any time but modernity.

But even as they were settling down and seeing the light from a traditional perspective, a kind of disillusionment set in. Life is a lot harder in most ways when you don’t have choices about your fate, but strangely easier in others. Once you’re acing the bottom levels of Maslow’s Hierarchy, it becomes a lot easier to think, “if only I had dated more,” or, “I’m fifty and I’ve given half my life to this company.” And, in the modern age of choices, the boomers had the power to do something about it. And so they did.

In their personal lives, they called it quits and left their spouses. In the working world, they embarked on a quest of deregulation and upheaval. In the middle of the 20th century, the corporation had replaced the small town as the tribal unit of collective identity, as described in The Organization Man. The concept of company loyalty and even existential consistency went out the window as mergers and acquisitions replaced blue chip stocks. The boomers became the “generation of divorce.” Grappling with tradition on one side and choice on the other, they tried to serve both masters and failed with gritty and often tragic consequences.

And so the millennials were the children of this experience. They watched their parents suffer through messy divorces in their personal lives and in their professional lives. Companies to which their parents had given their best years laid them off with a few months of severance and a pat on the butt. Or perhaps their parents were the ones doing the laying off — buying up companies, parceling them up and moving the pieces around. Whether personal or corporate, these divorces were sometimes no-fault, and sometimes all-fault. But they were all the product of heretofore unfamiliar amounts of personal choice and personal freedom. Never before in human history had so many people said, “You know what, I just figured out after 30 years that this isn’t working. So screw it, I’m out of here.”

So returning to the present, I find the notion that millennials harbor feelings of entitlement or narcissism to be preposterous on its face. Millennials don’t feel entitlement — they feel skepticism. They hesitate to commit, and when they do, they commit lightly and make contingency plans. They live with their parents longer rather than committing to the long-term obligation of a mortgage or even a lease. They wait until they’re older to marry and have children rather than wasting their time and affections on starter spouses and doomed relationships. And they job hop. They leave you before you can leave them, which, as we both know, you will sooner or later.

That generally doesn’t sit well with the older generation for the same reasons that the younger generation’s behavior never sits well with the older one. The older generation thinks, “man, I had to go through 20 years of misery before I figured out that I hated my job and your mother, so who are you to think you’re too good for that?” It was probably the same way their parents got angry at them for going to Woodstock instead of settling down and working on the General Motors assembly line right out of high school. Who were they to go out cavorting at concerts when their parents had already been raising a family after fighting in a war at their age?

So we can circle back around to the original questions by dismissing the “millennials are spoiled” canard as a reason to consider modern job hopping unethical. Generational stereotyping won’t cut it. Instead, one has to consider whether some kind of violation of an implied quid pro quo happens. Do job hoppers welch on their end of a bargain, leaving a company that would have stayed loyal to them were the tables turned? I think you’d be hard pressed to make that case. Individuals are capable of loyalty, but organizations are capable of only manufactured and empty bureaucratic loyalty, the logical outcome of which is the kind of tenure policies that organized labor outfits wield like cudgels to shield workers from their own incompetence. Organizations can only be forced into loyalty at metaphorical gunpoint.

Setting aside both the generational ad hominem and the notion that job hopping is somehow unfair to companies, I can only personally conclude that there is nothing unethical about it and that the consideration of whether or not to job hop is purely pragmatic. And really, what else could be concluded? I don’t think that much of anyone would make the case that leaving an organization to pursue a start-up or move across the country is unethical, so the difference between “job leaver” and “job hopper” becomes purely a grayscale matter of degrees.

With the ethics question in the books on my end, I’ll return next time around to discuss the practical ramifications for individuals, as well as the broader picture and what I think it means for organizations and the field of software development going forward. I’ll talk about the concept of free agency, developer cooperation arrangements, and other sorts of free-wheeling speculation about the future.


Designs Don’t Emerge

I read a blog post recently from Gene Hughson that made me feel a little like ranting. It wasn’t anything he said — I really like his post. It reminded me of some discussion that had followed in my post about trying too hard to please with your code. Gene had taken a nuanced stand against the canned wisdom of “YAGNI.” I vowed in the comments to make a post about YAGNI as an aphorism, and that’s still in the works, but here is something tangentially related. Now, as then, I agree with Gene that you ignore situational nuance at your peril.

But let’s talk some seriously divisive politics and philosophy first. I’m talking about the idea of creationism/intelligent design versus evolutionary theory and natural selection. The former conceives of life in our world as the deliberate work of an intelligent being. The latter conceives of it as an ongoing process of change governed by chance and coincidence. In the context of this debate, there is either some intelligent force guiding things or there isn’t, and the debate is often framed as one of omnipotent, centralized planning versus incremental, steady improvement via dumb process and chance. The reason I bring this up isn’t to weigh in on this or turn the blog into a political soapbox. Rather, I want to point out a dichotomy that’s ingrained in our collective conversation in the USA and perhaps beyond (though I think that the creationist side of the debate is almost exclusively an American one). There is either some kind of central master planner, or there are simply the vagaries of chance.

I think this idea works its way into a lot of discussions that talk about “emergent design” and “big up front design,” which in the same way puts forth a pretty serious false dichotomy. This is most likely due, in no small part, to the key words “design,” “emergent” and especially “evolution” — words that frame the coding discussion. It turns into a blueprint for silly strawman arguments: “Big design” proponents scoff and say things like, “oh yeah, your architecture will just figure itself out magically” while misguided practitioners of agile methodologies (perhaps “no design” proponents) accuse their opponents of living in a coding universe lacking free will — one in which every decision, however small, must be pre-made.

But I think the word “emergent,” rather than “evolution” or “design,” is the most insidious in terms of skewing the discussion. It’s insidious because detractors are going to think that agile shops view design as something that just kind of winks into existence like some kind of friendly guardian angel, and that’s the wrong idea about agile development. But it’s also insidious because of how its practitioners view it: “Don’t worry, a good design will emerge from this work-in-progress at some point because we’re SOLID and DRY and we practice YAGNI.”

Now, I’m not going for a “both extremes are wrong and the middle is the way to go” kind of argument (absent any other reasoning, that’s the middle ground fallacy). The word “emergent” itself is all wrong. Good design doesn’t “emerge” like a welcome ray of sunshine on a cloudy day. It comes coughing, sputtering, screaming and grunting from the mud, like a drowning man being pulled from quicksand, and the effort of dragging it laboriously out leaves you exhausted.


The big-design-up-front (BDUF) types are wrong because of the essential fallacy that all contingencies can be accounted for. It works out alright for God in the evolution-creation debate context because of the whole omniscient thing. But, unfortunately, it turns out that omniscience and divinity are not core competencies for most software architects. The no-design-up-front (NDUF) people get it wrong because they fail to understand how messy and laborious an activity design really is. In a way, they both get it wrong for the same basic reason. To continue with the Judeo-Christian theme of this post, both of these types fail to understand that software projects are born with original sin.

They don’t start out beautifully and fall from grace, as the BDUF folks would have you believe, and they don’t start out beautifully and just continue that way, emerging as needed, as the NDUF folks would have you believe. They start out badly (after all, “non-functional” and “non-existent” aren’t words that describe great software) and have to be wrangled to acceptability through careful, intelligent and practiced maintenance. Good design is hard. But continuously knowing the next, feasible, incremental step toward a better design at absolutely any point in a piece of software’s life — that’s really hard. That takes deliberate practice, debate, foresight, adaptability, diligence, and a lot of reading and research. It doesn’t just kinda “emerge.”

If you’re waiting on me to come to a conclusion where I give you a score from one through ten on the NDUF to BDUF scale (and it’s obviously five, right?), you’re going to be disappointed with this post. How much design should you do up front? Dude, I have no idea. Are you building a lunar rover? Probably a lot, then, because the Sea of Tranquility is a pretty unresponsive product owner. Are you cobbling together a minimum viable product and your hardware and business requirements may pivot at any moment? Well, probably not much. I can’t settle your design decisions and timing for you with acronyms or aphorisms. But what I can tell you is that to be a successful architect, you need to have a powerful grasp on how to take any design and make it slightly better this week, slightly better than that next week, and so on, ad infinitum. You have to do all of that while not catastrophically breaking things, keeping developers productive, and keeping stakeholders happy. And you don’t do that “up-front” or “ex post facto” — you do it always.
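To make that last point slightly more concrete, here is a minimal, hypothetical sketch (in Python, with invented names, not drawn from any particular codebase) of what one "slightly better this week" step might look like: not a redesign, just one small, safe improvement that leaves the code a little easier to extend than it was.

```python
import json

# Before: a conditional that grows a new branch every time a report format is added.
def render_report(data, fmt):
    if fmt == "csv":
        return ",".join(str(item) for item in data)
    elif fmt == "json":
        return json.dumps(data)
    else:
        raise ValueError("unknown format: " + fmt)

# After: this week's small, incremental step. The formats move into a registry,
# so next week's format becomes a one-line addition instead of another branch.
FORMATTERS = {
    "csv": lambda data: ",".join(str(item) for item in data),
    "json": json.dumps,
}

def render_report_incremental(data, fmt):
    try:
        formatter = FORMATTERS[fmt]
    except KeyError:
        raise ValueError("unknown format: " + fmt)
    return formatter(data)

if __name__ == "__main__":
    print(render_report_incremental([1, 2, 3], "csv"))   # 1,2,3
    print(render_report_incremental([1, 2, 3], "json"))  # [1, 2, 3]
```

The particular refactoring isn't the point. The point is that the step is small enough to finish, verify, and ship without breaking anything, and that it tees up the next small step after it.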


Throw Out Your Code

Weird as it is, here’s human nature at work. Let’s say that I have a cheeseburger and you’re hungry. I tell you that I’ll sell you the cheeseburger for $10. You say, “pff, no way — too expensive.” Oh well, I eat the cheeseburger and call it a day. But I’ve learned my lesson. The next day at lunch, to execute my master cheeseburger selling plan, I slide the cheeseburger over in front of you and tell you that you can have it: “you can have this cheeseburger…” Just as you’re about to take a bite, however, I cruelly say “…for ten dollars!” You grumble, get out your wallet and hand me a ten dollar bill.

This is called “The Endowment Effect,” and it’s a human cognitive bias that causes us to value what we have disproportionately. I blogged about it here previously in the context of why we think that our code is so good we should SPAM it all over the place with control-V. But even if you don’t do that (and, really, please don’t do that), you still probably get overly attached to your code. I do. After all, we, as humans, have a hard time defying our own natural instincts.

I’m certainly no anthropologist, but I suspect that our ancestry as nervous, opportunistic scavengers on the African Serengeti has everything to do with this. Going and snatching a morsel that a hungry lion is eyeing is a pretty bad idea. But if you already have the morsel, what the hey, you might as well take it with you as you run away. But, however we’re wired, we’re capable of learning and conditioning our own responses. After all, we don’t go bolting away from the deli counter after the guy there hands us our two pounds of salmon. We’ve learned that this is a consequence-free transaction.

It’s time to teach yourself that lesson as it relates to your code. It’s not so much that deleting functional code is consequence free (it isn’t). But deleting it isn’t nearly as big of a deal as you probably think it is. When it comes to code that you’ve spent two weeks writing, I’m pretty willing to bet that if you trashed it all and started from scratch (no peeking at source control history), you could rewrite it all in about two days. If that sounds crazy, ask yourself whether the majority of the time you spend programming is spent furiously typing as if you were taking a words-per-minute test or if most of it is spent drawing things on scratch-paper, squinting at your screen, pushing code around unit tests, muttering to yourself, and tapping a pen on your desk. I’m betting it’s the latter, and, when you rewrite, it’s activities from the latter that you don’t do nearly as much. You’ve already blazed a trail for yourself and now you’re just breezing through for a second trip.

Write some code and throw it out. Do a code kata with the stipulation that the code is deleted, never to be recovered. Then try it again the next day and the day after that. Or create a copy of your production code at work, engage in some massive, high-risk, high-wire-act refactoring, and then just delete it. With either of these things, I promise you that you’ll learn a lot about efficient coding and your code base, respectively. But you’ll also learn a subtle lesson: the value you’re creating as you code can be found more in the knowledge and experience you’re acquiring as you do it than the bits sitting in source control.

Practice throwing out your code so that you stop neurotically overvaluing it. Practice throwing out your code because it’ll probably happen by accident at some point anyway. Practice throwing out your code because your first crack at things usually kind of sucks. And practice throwing out your code because end users and the world are cruel, and not everything that you write is going to make it gift-wrapped into production. The more you learn to let go, the happier and more productive you’re going to be as a programmer.


Born to Exclude: Beware of Monoculture

Understanding the Idea of Monoculture

One morning last week, I was catching up on backlogged podcasts in my Doggcatcher feed on the way to work and was listening to an episode of Hanselminutes where Scott Hanselman interviewed a front end web developer named Garann Means. The subject of the talk was what they described as “developer monoculture,” and they talked about the potentially off-putting effect on would-be developers of asking them to fill a sort of predefined, canned “geek role.”

In other words, it seems as though there has come to be a standard set of “developer things” that developers are expected to embrace: Star Trek, a love of bacon (apparently), etc. Developers can identify one another in this fashion and share some common ground, in a “talk about the weather around the water cooler” kind of way. But this practice seems to go beyond simply inferring a common set of interests and into the realm of considering those interests table stakes for a seat at the geek table. In other words, developers like fantasy/sci-fi, bacon, that Big Bang Theory show (I tried to watch this once or twice and found it insufferable), etc. If you’re a real developer, you’ll like those things too. This is the concept of monoculture.

The podcast had weightier issues to discuss than the simple “you can be a developer without liking Star Trek,” though. It discussed how the expected conformance to some kind of developer archetype might deter people who don’t share those interests from joining the developer community. People might even suppress interests that are wildly disparate from what’s normally accepted among developers. Also mentioned was that smaller developer communities such as Ruby or .NET trend even more toward their own more specific monoculture. I enjoyed this discussion in sort of an abstract way. Group dynamics and motivation are at least passingly interesting to me, and I do seem to be expected to do or like some weird things simply because I write software for a living.


At one point in the discussion, however, my ears perked up as they started to discuss what one might consider, perhaps, a darker side of monoculture–that it can be deliberately instead of accidentally exclusionary. That is, perhaps there’s a point where you go from liking and including people because you both like Star Trek to disliking and excluding people because they don’t. And that, in turn, leads to a velvet rope situation: not only are outsiders excluded, but even those with similar interest are initially excluded until they prove their mettle–hazing, if you will. Garann pointed out this dynamic, and I thought it was insightful (though not specific to developers).

From there, they talked about this velvet-roping existing as a result of people within the inner sanctum feeling that they had some kind of ‘birthright’ to be there, and this is where I departed from what had, until then, been general, passive agreement with the points being made in the podcast. To me, this characterization was inverted–clubhouse sitters don’t exclude and haze people because of a tribal notion that they were born into an “us” and the outsiders are “them.” They do it out of insecurity, to inflate the value of their own experiences and choices in life.

A Brush with Weird Monoculture and what it Taught Me

When I first met my girlfriend, she was a bartender. As we started dating, I would go to the bar where she worked and sit to have a beer, watch whatever Chicago sports team was playing at the time, and keep her company when it was slow. After a while, I noticed that there was a crowd of bar flies that I’d see regularly. Since we were occupying the same space, I made a few half-hearted efforts to be social and friendly with them and was rebuffed (almost to my relief). The problem was, I quickly learned, that I hadn’t logged enough hours or beers or something to be in the inner circle. I don’t think that this is because these guys felt they had a birthright to something. I think it’s because they wanted all of the beers they’d slammed in that bar over the course of decades to count toward something besides cirrhosis. If they excluded newbies, youngsters, and non-serious drinkers, it proved that the things they’d done to be included were worth doing.

So why would Star-Trek-loving geeks exclude people who don’t know about Star Trek? Well, because they can, with safety in numbers. Also because it makes the things that they enjoy and for which they had likely been razzed in their younger days an asset to them when all was said and done. Scott talked about being good at programming as synonymous with revenge–discovering a superpower a la Spider-Man and realizing that the tables had turned. I think it’s more a matter of finding a group that places a radically different value on the same old things that you’ve always liked doing and enjoying that fact. It used to be that your skills and interests were worthless while those of other people had value, but suddenly you have enough compatriots to reposition the velvet rope more favorably and to demonstrate that there was some meaning behind it all. Those games of Dungeons and Dragons all through high school may have been the reason you didn’t date until twenty, but they’re also the reason that you made millions at that startup you founded with a few like-minded people. That’s not a matter of birthright. It’s a matter of desperately assigning meaning to the story of your life.

Perhaps I’ve humanized mindless monoculture a bit here. I hope so, because it’s essentially human. We’re tribal and exclusionary creatures, left to our baser natures, and we’re trying to overcome that cerebrally. But while it may be a sympathetic position, it isn’t a favorable or helpful one. We can do better. I think that there are two basic kinds of monoculture: inclusive, weather-conversation-like inanity monoculture (“hey, everyone loves bacon, right?!?”) and toxic, exclusionary self-promotion. In the case of the former, people are just awkwardly trying to be friendly. I’d consider this relatively harmless, except for the fact that it may inadvertently make people uncomfortable here and there.

The latter kind of monoculture dovetails heavily into the kind of attitudes I’ve talked about in my Expert Beginner series of posts, where worth is entirely identity-based and artificial. I suppose I perked up so much at this podcast because the idea of putting your energy into justifying why you’ve done enough, rather than doing more, is fundamental to the yet-unpublished conclusion of that series of posts. If you find yourself excluding, deriding, hazing, or demanding dues-paying of the new guy, ask yourself why. I mean really, critically ask yourself. I bet it has everything to do with you (“I went through it too,” and, “it wouldn’t be fair for someone just to walk in and be on par with me”) and nothing to do with him. Be careful of this, as it’s the calling card of small-minded mediocrity.


It’s a Work in Progress

I’ll have that for you next week. Oh, you want it to do that other thing that we talked about earlier where it hides those controls depending on what the user selects? I’ll have it for you next month. Oh, and you also want the new skinning and the performance improvements too? I’ll get started and we’ll set about six months out as the target date. Unless you want to migrate over to using Postgres. Then make it a year.

Have you ever had any conversations that went like this? Ever been the person making this promise or at least the one that would ultimately be responsible for delivering it? If so, I bet you felt like, “man, I’m totally just making things up, but I guess we’ll worry about that later.” Have you ever been on the hearing end of this kind of thing? I bet you felt like, “man, he’s totally just making things up, and none of this will ever actually happen.”

As you’ll know if you’ve been checking in at this blog for a while, my opinions about the “waterfall methodology” are no secret. It isn’t my intention to harp on that here but rather to point out that the so-called waterfall approach to software development is simply part of what I consider to be a larger fail: namely, putting a lot of distance between promises and deliveries.

I get the desire to do this–I really do. It’s the same reason that we root for underdogs in sports or we want the nice guy not to finish last. Having success is good, but sudden, unexpected success is awesome. It creates a narrative that is far more compelling than “guy who always delivers satisfactory results does it yet again.” Instead, it says, “I just pulled this thing called an iPod out of nowhere and now I’m going to be the subject of posthumous books and cheesy project manager motivational calendars in a decade.”

Companies have done things like this for a long, long time–particularly companies that sell tangible goods. It’s been the story of a consumer-based world where you innovate, patent, and then sell temporarily monopolized widgets, making a fortune by giving the world something new and bold. You probably wind up on the cover of various magazines with feel-good stories about the guy behind the revolutionary thing no one saw coming, except for one visionary (or a team of them) toiling away in anonymity and secrecy, ready to pull back the curtain and dazzle us with The Prestige.


But here’s the thing. The world is getting increasingly service oriented. You don’t really buy software anymore because now you subscribe and download it from “the cloud” or something. Increasingly, you rent tools and other things that perhaps you would previously have purchased. Even hardware goods and electronics have become relatively disposable, and there is the expectation of constant re-selling. Even the much-ballyhooed innovator, Apple, hasn’t really done anything interesting for a while. We’re in a world where value is delivered constantly and on a slight incline, rather than in sudden, massive eruptions and subsequent periods of resting on laurels.

What does the shifting global economic model have to do with the “waterfall” method of getting things done? Well, it pokes a hole in the pipe-dream of “we’ll go off and bury ourselves in a bunker for six months and then emerge with the software that does everything you want and also makes cold fusion a reality.” You won’t actually do that because no one really does anymore, and even back when people did, they failed a lot more often than they succeeded in this approach.

I love to hear people say, “it’s a work in progress.” That means that work is getting done and those doing it aren’t satisfied, and those two components are essential to the service model. These words mean that the speaker is constantly adding value. But more than that, it means that the speaker is setting small, attainable goals and that those listening can expect improvements soon and with a high degree of confidence. The person doing the work and the target audience are not going to get it perfect this week, next week, next month, or maybe ever. But they will get it better and better at each of those intervals, making everyone involved more and more satisfied. And having satisfaction steadily improve sure beats having no satisfaction at all for a long time and then a very large chance of anger and lawsuits. Make your work a work in progress, and make people’s stock in your labor a blue chipper rather than a junk bond.