DaedTech

Stories about Software

Uber-Architects: The Building Metaphor Is Dead

Building Bad Software is Like Building a Tower

A little over a year ago, I wrote a post explaining how you can use the metaphor, “building software is like building a tower,” to help you write bad software. The idea was that building something like a skyscraper requires insane amounts of planning because the actual act of building is laborious, time-consuming, and expensive, and also pretty much impossible to change once you get going. Furthermore, the real brains of the operation are required for the up-front planning, which is then done in such detail that the actual construction is a pretty straightforward task that doesn’t require a whole lot of thinking — just following detailed instructions and a practiced knack for skills like assembling plumbing, taping drywall joints, etc.

Software really only follows this pattern if it’s awful. If you were to describe a project you were working on by saying, “it’s important that we get everything right from the get-go because once we get started, this system is going to be so set and rigid that change is impossible,” wouldn’t you be a little alarmed and/or defeated? And what about the idea that the developer is so wildly different from the architect that the two have entirely separate vocational training paths (in the case of buildings, architectural studies versus carpentry trade school or apprenticeship)? Is planning how to write software really so very different from writing software?

I believe you’d be pretty hard pressed to continue to like this metaphor when you give it a lot of thought. But that hasn’t stopped the metaphor from being iconic in our industry, to the extent that it still vestigially governs roles, titles, career paths, and team behavior. Even though building software is nothing like structural construction, we continue to have a very real role/title called “Architect” that is responsible for coming up with documentation that looks suspiciously like a set of blueprints so that the lower paygrade laborers can toil away while he supervises.

Is this the best role and set of duties for the person called “architect” — the person on the team who has probably gotten to the position by being a good developer for a long time (or, more cynically, by being a mediocre and tenured Expert Beginner)? Should the result of spending years getting good at writing software be that you get to be in charge of the software by writing less of it? Are the architects of our buildings the people who are really, really good at pouring concrete and hanging drywall? Obviously not. And while we’re at it, with this frayed metaphor truly breaking down, should we even call them architects anymore?

A Bit of Philosophy

I think that those of us who occupy the role or aspire to it should perhaps start striving to become post-architects or Uber-Architects. I’m borrowing this latter term from Friedrich Nietzsche, a philosopher and sporter of an awesome mustache who wrote about what he called the “Ubermensch,” which I believe translates from German to something along the lines of “over-man.” Nietzsche’s concepts in “Thus Spake Zarathustra” are extremely nuanced and layered, but I’ll summarize what I took away from it when I read it and how I think it relates.

[Image: Nietzsche]

You might know Nietzsche as the philosopher who famously said, “God is dead,” and he said that in this book. I believe that this is largely interpreted as a profoundly and stridently atheist sentiment, but that interpretation is one that ignores the context of the work and his other works. He was looking at a world (he lived in Europe during the 1800s) where morality had long been a matter simply dictated by the tenets of Christianity, but also in a world where the rejection of religion was becoming increasingly common. Nietzsche wasn’t gloating over the corpse of God; he was expressing worry that a growing atheist/agnostic segment of the population would opt for nihilism in the absence of any religion and that society would become dominated by what he called something like “last man,” a rather wretched creature interested only in its own comfort and with no direction or broader purpose. He was saying, “I don’t care for your religion, but if you take it away, I’m worried that things will get a lot worse.”

From Nietzsche’s perspective, 19th century Europe was in trouble and the path forward was for mankind to become the “Ubermensch,” a version of man that was capable of supplying himself with all of the things for which religion had previously been responsible. Basically, he should do good in life because it’s good to do good rather than because he’ll be in trouble if he doesn’t. He should define his own purpose and leave his mark on the world because it’s the right thing to do and the highest calling for an individual would be to leave a mark on history for the better. In the absence of the previous and declining moral order, a new, sustainable one had to be defined, so his argument went (or at least my recollection of my reading and understanding of it).

Forget the religious angle here in a discussion of software. I’m not interested in discussing the merits of Nietzsche’s religious beliefs or lack thereof here. But I am interested in relating his perception of the world to our situation. Throughout the history of software development, our roles have been defined by this now flagging and failing metaphor of “software is like building a tower.” We’ve mimicked construction in our historical approach with lengthy and detailed planning along with the division of labor. We’ve gone so far as to borrow the titles for the roles in that line of work and appropriate them for ourselves. Your software group has to have an “architect” that will make the “blueprints” for the software. But that’s absurd and people are starting to realize it (see the growth of agile methodologies that have no equivalent at all in the construction world).

The danger then becomes what happens in the absence of that metaphor. Do we adopt improvident and cavalier approaches to software architecture, swinging the other way on the “lots of planning versus no planning” pendulum? Do we abolish the role of any kind of technical leader and make all software development groups pure democracy? Do the former architects or developers in general become “last architects,” just nihilistically banging out whatever code seems interesting or gets them out the door at 5 without worrying over the future or the needs of the business?

Emergence of the Uber-Architect

This is where the Uber-Architect comes in. The Uber-Architect deals not in blueprints and orders from on high but in leadership by example in the trenches. Uber-Architecture isn’t about web services, database technologies, N-Tiers or enterprises, but about teaching and demonstrating important fundamental concepts of the craft: abstractions, design trade-offs, and implementation patterns. Uber-Architects don’t create a bunch of rules and enforce them across large organizations for consistency’s sake, like a foreman with a clipboard overseeing hundreds of fungible laborers. They pick up the hammers and nails and work alongside those workers, showing them how it’s done, building the thing together, and making those around them better until it is no longer a fungible collection of workers, but a humming, autonomous machine that’s more than the sum of its parts. They leave their mark on the group not because they’re “the architect” but because it’s the right thing to do, and it’s something of which they can be proud.

So what do teams look like when all of this comes to pass? I don’t know, exactly. But I think we’re finding out. I think that we’re going to see more and more teams with flatter structures, less worried about seniority, and more buying into the agile concept of self-organizing teams. And on those teams, there is no architect because people doing a good job of building software won’t assemble into organizational structures that are ill suited to building software. On these teams, there will only be Uber-Architects, who don’t hold a position afforded to them by 15 years in with the company, but who hold a place of respect among their peers due to ability and vision, and who create design concepts that make the lives of those around them easier and the skills of those around them sharper.

If this sounds overly idealistic, perhaps it is, but that’s because I view it as a goal and something to start reaching toward. And besides, with all of the cynical posts I make about Expert Beginners and overrated people and whatnot, some starry-eyed optimism probably balances out the cosmic scales a bit.

Beware of The Magnetars in Your Codebase

Lately, I’ve been watching a lot of “How the Universe Works” and other similar shows about astronomy. I’ve been watching them a lot, as in, I think I have some kind of problem. I want to watch them and find them fascinating and engaging and yet I also seem suddenly to be unable to fall asleep without them on.

Last night, I was indulging this strange problem when I saw what has to be the single most intense thing in the universe: a magnetar. Occasionally, when a massive star runs out of fuel in its core, it explodes as a supernova and spews matter and radiation everywhere, sending concussive shock waves hurtling out into the universe. In the aftermath, the matter that doesn’t escape collapses in on itself into an unimaginably dense thing called a “neutron star,” which is the size of Manhattan but weighs as much as the sun (for perspective, a sugar cube of neutron star material would weigh as much as all of the people on earth combined).

One particularly exotic type of neutron star is called a magnetar. It’s a neutron star with a magnetic field of absolutely mind-boggling strength and a crust made out of solid iron (but up to 10 billion times stronger than steel, thanks to the near-black-hole-like gravity of the star crushing imperfections out of the crystals that form the crust). A magnetar is so intensely magnetized that if the moon were a magnetar (and forget the gravity for a moment), it would tear the watch off of your wrist and render your credit cards useless. This thing rotates many times per second, whipping its magnetic field into a frenzy and sloshing the ultra-dense neutron goo that makes up its core into a froth until the internal pressure causes something called a “starquake,” which, if it were measured on the Richter scale, would be a 32. When these starquakes happen, the result is that the magnetar spews a torrent of radiation so powerful that it has a profound effect on the earth’s magnetic field and atmosphere from halfway across the Milky Way.

So to recap, a magnetar is a tiny thing left over from a huge event that’s not really visible or particularly noticeable from a distance. At least, it isn’t noticeable until the unimaginable destructive force roiling in its bowels is randomly unleashed, and then it pretty much annihilates anything in its close vicinity and has a profound effect universally.

[Image: a magnetar, courtesy of Wikipedia]

I was idly thinking about this concept today while looking at some code, and I realized something. How many projects do you work on where there’s some kind of scramble to get a new feature in ahead of schedule, to absorb scope creep and last-minute changes, or to slam some kind of customization into production for a big client with a minimum of testing? Whether this goes well or poorly, the result is generally spectacular.

And when the dust settles and everyone has taken their two or three weeks off, come down from the ledge and breathed a sigh of relief, what remains of the effort is often some quiet, dense, unapproachable and dangerous bit of code pulsing in the middle of your codebase. You don’t get too near it for fear that it will tear the watch off of your wrist or result in a starquake — okay, more accurately, that it will introduce some nasty regression bug — and you just kind of leave it there to rotate and pulse ominously.

Much later, when you’ve pretty well forgotten it, it erupts and unleashes a torrent of devastation into your team’s life. One day you suddenly recall (one day too late) that if you don’t log into that one SQL server box and restart that scheduled task on any March 1st not in a leap year, all 173,224 users of the Initrode account are suddenly unable to log into anything in their ERP system, and they’re planning a shipment of medical supplies to hurricane victims and abused puppies. You’ve had all of the atoms in your organization pulverized out of existence by the flare of a magnetar in your codebase.

How do you avoid this fate? I’ll give you a list of two:

  1. Do the right thing now.
  2. Push back against creating the situation in the first place.

The first one is the more politically tenable one in organizations. The business is going to do what the business is going to do, and that’s to allow sales to promise clients a cure for cancer by June 15th if they promise to pitch in personally for steak dinners for the dev team, on their honor. It can be hard to push back against it, so what you can do is ride the storm out and then make sure you carve out time to repair the damage when the dust settles. Don’t let that rogue task threaten your very existence once a year (but not in leap years). And don’t cop out by documenting it on a wiki somewhere. Do the right thing and write some code that automates whatever it is that should trigger it to happen. While you’re at it, automate some sort of reminder scheme for monitoring purposes and some fault tolerance, since this seems pretty important. You may have needed to hack something out to meet your deadline, but there’s nothing saying you have to live with that and let it spin and pulse its way to bursting anger.
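
To make that first point concrete, here is roughly what automating the rogue task might look like: a minimal sketch in Python, run daily by any scheduler (cron, Windows Task Scheduler, whatever you have). The task name, restart logic, email addresses, and local mail relay are all hypothetical placeholders; the point is that the calendar rule lives in executable code with an alert around it instead of in someone’s memory or on a wiki.

```python
import calendar
import datetime
import logging
import smtplib
from email.message import EmailMessage

logging.basicConfig(filename="initrode_task.log", level=logging.INFO)


def needs_restart(today: datetime.date) -> bool:
    """The formerly tribal-knowledge rule: March 1st of any non-leap year."""
    return today.month == 3 and today.day == 1 and not calendar.isleap(today.year)


def restart_initrode_task() -> None:
    # Stand-in for whatever the real fix entails (restarting a service,
    # kicking off a SQL Server job, etc.).
    logging.info("Restarting the Initrode scheduled task.")


def alert(subject: str, body: str) -> None:
    # The monitoring half: if the automation itself fails, a human finds out
    # now, not next March. Assumes a local mail relay on this box.
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "ops@example.com", "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    today = datetime.date.today()
    if needs_restart(today):
        try:
            restart_initrode_task()
            logging.info("Restart succeeded on %s", today)
        except Exception as exc:
            alert("Initrode task restart FAILED", f"Manual intervention needed: {exc}")
            raise
```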

The better solution, though, is to push back on the business and not let supernovae into your development process in the first place. This is hard, but it’s the right path. Instead of disarming volatile things that you’ve introduced in a pinch, avoid introducing them altogether. Believe it or not, this is a skill that actually takes practice because it involves navigating office-political terrain beyond simply explaining things to someone in rational fashion and prevailing upon their good judgment.

But really, I could boil these two points down to one single thing that logically implies both: care about the fate of the project and the codebase. If you invest yourself in it and truly care about it, you’ll find that you naturally avoid letting people introduce explosive forces in the first place. You certainly don’t allow alien, stealth deathbombs to fester in it, waiting to spew radiation at you. Metaphorical radiation, that is. Unless you code for a nuclear power company. Then, real radiation.

Kill Tech Patents with Fire And Do It Now

I’ve actually had a few spare hours lately to get ahead on blogging, so I was just planning to push a post for tomorrow, read a little and go to sleep. But then I saw an article that made me get a fresh cup of water, turn my office lamp on, and start writing this post that I’m going to push out instead. There probably won’t be any editing or illustration by the time you read this, and it might be a little rant-ish, so be forewarned.

Tonight, I read through this article on Ars Technica with the headline “Patent War Goes Nuclear.” I think the worst part about reading this for me was that my reaction wasn’t outrage, worry, disgust or really much of anything except, “yep, that makes sense.” But I’ll get back to my reaction in a bit. Let me digress here for a moment to talk about irony.

Irony is a subject about which there is so much debate that the definition has been fractured and categorized into more buckets of meaning than I can even count off the top of my head. There is literary irony, dramatic irony, verbal irony and probably more. There are various categories of era-related irony, such as Classical (Greek) irony, Romantic irony, and, most recently, whatever hipsters are and whatever they do. With all of these different kinds of irony, the only thing that the world can seem to agree on is that things in the Alanis Morissette song about “ray-e-ay-ain on your wedding day” are not actually ironic.

The problem for poor Alanis, now the object of absurd degrees of international nitpicking derision, is that there is no ultimate reversal of expectation in all of the various ‘ironic’ things that happen in her song. Things are generally considered to be ironic when there is a gap between stated expectations or purpose and outcome. When it rains on your wedding day, that just sucks — it’s not ironic. It rains a good number of days of the year, so no reasonable person would expect that it couldn’t rain on a given day. What would most likely be considered ironic is if you opted to have your wedding inside to avoid the possibility of getting wet, and a large supply line pipe burst in the floor above you during the wedding, drenching everyone in attendance.

Another pretty clear-cut example of irony is the US Patent System as it exists today when compared with common perception as to the original and ongoing purpose of such an institution. There’s a rather fascinating and somewhat compelling argument that claims the concept of intellectual property (and specifically patents) was instrumental in creating the Industrial Revolution. In other words, there was historically little motivation for the serf and merchant classes to innovate and optimize their work, since the upper classes with the means of production would simply have stolen the ideas and leveraged better economies of scale and resources to reap the benefits for themselves. But along came patents and the “democratization of invention” to put a stop to all that and to enable the Horatio Algers (or perhaps Thomas Edisons) of the world to have a good idea, march on down to the patent office, and make sure that they would be treated fairly when it came to reaping the material benefits of their own ideas.

On the other side of the coin, I’ve read arguments that offer refutations of this working hypothesis, and I’m not endorsing one side or the other, because it really doesn’t matter for my purposes here. Whether the “democratization of invention” was truly the catalyst for our modern technological age or not, the perception remains that the patent system exists to ensure that the little guy is protected and that barriers to entry are removed to create truly free markets that reward innovation. If you have the next great idea, you go find a lawyer to help you draft a patent and that’s how you make sure you’re protected from unfair treatment at the hands of evil corporate profiteers.

So where’s the irony? I’ll get to that in a minute, but first another brief digression. I want to talk now about the concept of a “defensive patent,” at least as I’ve experienced the concept. Many moons ago, I maintained a database application to manage intellectual property for a company that made manufacturing equipment. At this company, there was a fairly standard approach to patenting, which was “mention everything you’re working on to the Intellectual Property team, who will see if perhaps there’s anything we can claim patents on — and we mean everything.” The next logical question was “what if it’s already obvious or unrelated to what we’re trying to do,” to which the response was “what part of everything wasn’t clear?” The reason for this was that the goal wasn’t to patent things so that the company could make sure that nobody took its ideas but rather to build up a war-chest of stockpiled patents. A patent on something not intended for use was perfectly fine because you could trade with a competitor that was trying to use a patent to extort you. Perhaps you could buy and sell these things like securities packages in a portfolio. And, to be perfectly honest, my company was pretty reputable and honest. They were just trying to avoid getting burned — don’t hate the player, hate the game. “Defensive” patents had nothing to do with protecting innovation and everything to do with leverage for an endless series of lawyer-enriching, negative-sum games played out in court.

As I said, that was some years ago, and in the time that’s elapsed since, this paradigm seems to have progressed to the logical conclusion that I pictured back then (or perhaps I just wasn’t aware of it as much back then). Patents started as legal protection, evolved into commodities, and have now reached the point of being corporate currency, devoid of any intrinsic meaning or value. In the article that I cited, a major tech company (Nortel) went bankrupt and its competitors swooped in like buzzards to loot its corpse. For those of you who played the Diablo series of games, this reminds me of when a dead player would “pop” and everyone else in the game would scramble to pillage his equipment. Or perhaps a better metaphor would be that a nuclear power had fallen into civil war and revolution, and neighboring countries quietly stepped in to spirit away its massive arms stockpile, each trying to grab up as much as possible for fear that their neighbors were doing the same and getting ready to use it against them.

Microsoft, Apple, and some other players stepped in to form a shell company and bid against Google for this cache of patents, and Google wound up losing all the marbles to this cartel. Now, fast forward a few years and the cartel has begun shelling Google. How does all of this work exactly? It works because of the evolution of the patent that I mentioned. The patents are protecting nothing because that isn’t what they do, and they have no value as commodities because they’re packaged up into patent “mutual funds” (arsenals) that only matter in large quantities. You don’t get patents in our world to protect something you did, and you don’t get them because they have some kind of natural value the way an ear of corn does — you get them for the sole purpose of amassing them as a means to an end. And, as with any currency, the entities that have the easiest time acquiring more are the ones that already have the most.

So, there is the fundamental irony of the patent system. It’s a system that we conceive of as existing to protect the quirky genius in his or her workshop at home from some big, soulless corporation, but it’s a system that in practice makes it easier for the big, soulless corporation to smash the quirky geniuses like bugs or, at best, buy them out and use them as cannon fodder against competitors. The irony lies in the fact that a system we take to be protecting our most valuable asset — our ability to innovate — is actually killing it. The patent system erects massive barriers to entry, rewards unethical behavior, creates a huge drain on the economy and makes bureaucratic process and influence peddling table stakes for success at delivering technological products and services. This is why I had little reaction to a shell company suing Google in a looming patent Armageddon — it just seems like the inevitable outcome of this broken system.

I doubt you’ll find many people that would dispute the notion that our intellectual property system needs serious overhaul. If you google “patent troll” and flip over to news, you’ll find plenty of articles and op-eds in the last month or even the last week. The fact that abuse of the system is so rampant that there’s an endless news cycle about it tells you that there are serious problems. But I think many would prefer to solve these problems by modifying the system we have now until it works. I’m not one of them. I think we’d be better served to completely toss out the system we have now and start over, at least for tech patents (I can see a reasonable case for patents in the field of medicine, for instance). I don’t think it can be salvaged, and I think that I’d answer the question “are you crazy — wouldn’t that result in chaos and anarchy?” with the simple opinion, “it can’t possibly be worse than what we have now.”

In the end, I may be proved wrong, particularly since I doubt torching the tech IP system is what’s going to happen. I hope that I am and I hope that efforts to shut down the trolls and eliminate situations where only IP lawyers win are successful, but until I see it, I’ll remain very skeptical.

/end rant

Back to regularly scheduled techie posts next week. 🙂

Beware of Mindless Automation

Something I’ve seen a lot over the years is a tendency to locally maximize when it comes to automating processes. We’re software developers, and thus automation is what we do. But not all automation is created equally, and some of it can be fairly obtuse and misguided if we aren’t careful. And the worst part is that it’s pretty easy to fall into this trap.

For example, let’s say that you observe some people in your organization following a process. They have some Microsoft Word template that they’ve stored somewhere and they regularly open it up and fill it out with data that they pull from an internal system. They populate things like today’s date and various data points and then they do some light formatting based upon various criteria, such as putting items in red if they fall below a certain threshold. When finished, they print out the result, drop it in an envelope, and mail it to another office location of the company. At that location, they process the data and put it into the system — you don’t know too much about that system because it’s not your office location, but that’s the general gist of it.

So, what do you do if you have some spare time on your hands, some empathy for people stuck with a manual process, and a desire to make a name for yourself? Do you automate this process for them, to their many thanks and heaped praise? And, assuming you do, how do you do it? Do you write some code that pulls the necessary data from your internal system, fires up MS Word interop, and starts automatically generating the documents they’re using? Then, flush with success from that project, do you also automate the printing of the envelopes and metering of the postage?

If you do, how does that go as a function of time? I bet the users are very grateful at first, but then they come to rely on it. And, what’s more, they like the system less and less over the course of time. Every time the USPS changes the price of postage, you have to go into this system and make changes, and, what’s worse, the part that generates the documents seems to break every time there’s a new version of Word or even an update to it. And when the format of the documents that the other office is requesting changes, suddenly you’ve got a real project on your hands, since automating intricate Word form documents is about as much fun as spending the afternoon trying to cram a decade of your life onto a one-page resume. Wasn’t this supposed to be helpful? Weren’t you the hero? Does no good deed go unpunished?

Let’s go back to the point where you decided to help. Was the automation as you conceived it worth doing or was it sort of marginal? I mean, you’re probably saving a few minutes for people and some fat-fingering opportunities, but what you still have is sort of an involved, manual process. What if you had stopped to think about the process and the larger goal: getting data from one system into another? Might you not have been talking about things like “web service” or at least “file transfer” instead of things like “Word interop” and “postage?”
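
To illustrate the difference in mindset, here is a minimal sketch in Python of that second way of thinking: pull the data from the internal system and ship it to the other office as structured data over HTTP. The endpoint URL, field names, and threshold are all hypothetical stand-ins, since the real systems would dictate those; notice how the “put it in red” rule becomes an explicit data field rather than Word formatting.

```python
import json
import urllib.request

THRESHOLD = 100  # the "put it in red if it falls below this" criterion, now just data


def fetch_report_rows():
    # Stand-in for the query the users were running by hand against the internal system.
    return [
        {"item": "Widget A", "value": 142},
        {"item": "Widget B", "value": 87},
    ]


def build_payload(rows):
    # The light formatting rule becomes an explicit field instead of red text in Word.
    return {"rows": [dict(row, below_threshold=row["value"] < THRESHOLD) for row in rows]}


def send_to_other_office(payload, url="https://other-office.example.com/api/reports"):
    # One HTTP POST replaces Word interop, envelopes, and postage.
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status


if __name__ == "__main__":
    status = send_to_other_office(build_payload(fetch_report_rows()))
    print(f"Upload finished with HTTP status {status}")
```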

Here’s the rub. When your users are solving your problems, they think like users and not like software developers. As such, they come up with non-programming, user solutions. Normal computer users understand MS Word and sending things via mail (or at least email), so they come up with processes that feature those tools. You’re a programmer. By all means, automate, but don’t mindlessly automate whatever they happen to be doing. That’s an optimization tweak. Real software engineering is about using software to create simple solutions to problems. I’ve seen many people fall into this trap and have fallen into it myself. When you’re writing software, asking “why” is invariably more important than asking “how.”

Laughing All The Way to The Bankruptcy

I was in sort of a smart-ass mood the other day, and I found myself thinking of something incredibly random. Do you remember that show or special called “To Catch a Predator” in which Chris Hansen and a crew of people would set up sting operations for sickos that would try to meet up with young girls? I found myself wondering, “if that crew has LinkedIn profiles, what are they getting endorsed for?” Pederast Snaring? Child Impersonation? What awful skills to have, public service notwithstanding. How might one get much better skills there?

Well, one way is simply to make them up. That would probably be effective. I’ve been endorsed for Ruby and Objective C lately, two languages in which I’ve never written a line of code. If my skills for that can get endorsed, presumably my skills for anything can get endorsed, whether or not they exist. If I wanted to be the most interesting man in the world, what skills would I have? I started a list of things I’d like to see in my profile, in no particular order, to make people say “whoa — I want to know that dude’s story!”

  • Extreme Ice Fishing
  • Dexterity
  • Octopus Farming
  • Dark Arts
  • Phrenology
  • Bear Whisperer
  • Inverted Breathing
  • Street Pharmacy
  • Boomerang
  • NFL Quarterbacking
  • 19th Century Russian Classics Author
  • Gonzo Meditation

One could create a pretty bizarre and interesting composite and probably get plenty of endorsements for it. In fact, I might get more endorsements for these things than programming languages I don’t know because amused contacts would probably go out of their way to endorse me for Octopus Farming. And as I contemplated this, it occurred to me. LinkedIn is a joke.

I don’t mean that in the sense of “oh, it’s really gone downhill in quality,” but I mean that it is actually, seriously now a joke. I can’t remember the last time I heard the site mentioned in casual conversation where the conversation wasn’t about how stupid and funny the skill endorsement system is. And that’s really strange, given its history.

As far as social networks go, each one seems to kind of have its niche and feel. Twitter is sort of like a bar, where there’s rapid-fire, disjointed conversations and things often escalate quickly due to out of context remarks. Google+ is like an abstract art museum. There’s a lot to see, but it’s generally from the same few people, and the place is really weird and quiet. Facebook is like going to a children’s playground. People you don’t know very well bore you with pictures and stories of their children, and the whole world is watching, so nothing happens that isn’t completely vanilla and boring. LinkedIn’s niche has historically been to serve as office space that hosts professional conferences or associations. People meet up there to exchange contact info and have professional conversations.

But that’s all changed more recently with the asinine endorsements and the uptick in punishing recruiter spam. Now it’s like a professional conference brought to you by Amway and Satan where reality shows like “America’s Dumbest Celebrities marry America’s Fattest Ex-Ball Players” stream constantly. If you want to show up and talk about your craft, you have to hear about a super business idea for cleaning products and watch a few short episodes of utter crap before you get down to business. It feels like you’re at the midway point on a journey from professional conference to brothel, where there are just enough booth babes to make everyone uncomfortable.

[Image: Amway]

So the joke must be on LinkedIn, right? They haven’t figured out just how stupid the world thinks the endorsement system is, and they’ll probably be horrified when they do? Nope. They know it, and they’re laughing all the way to the bank, because here’s how it works. Recruiters can sign up for LinkedIn to recruit people for jobs, and they pay LinkedIn for this privilege based on the number of people they contact and the number of responses they receive to their overtures. So LinkedIn is incentivized to ensure the highest volume of contact between recruiters and users of their site, and what better way to do that than a system of ‘endorsement’ that is completely trumped up and phony? LinkedIn has all the motivation in the world to create a system where every person is endorsed for every skill and every recruiter emails everyone.

So while we laugh at their joke of an endorsement system and bemoan the fact that we get endless contact requests and junk mail through their site from recruiters, they’re raking it in. And people tend to give in to the sunk cost mentality of not quitting something that they’ve spent a good bit of time over the years building up. People aren’t likely to abandon all of their contacts. But here’s the trouble, strategically. People aren’t going to rage-quit LinkedIn, but their perception of it will alter as it careens toward internet brothel. They’ll stop viewing it as the professional social network and start viewing it as a spam conduit, and they’ll just stay away. Every time they get an email that Judy has endorsed them for “Time Management” and “Multi-Tasking,” they’ll just say, “ugh” and inch ever closer to creating a filter rule in their inbox to send LinkedIn emails to their junk folders. They’ll just quietly stop coming back.

And over the course of time, LinkedIn will cease to be a social network. It will instead become a stream of information from LinkedIn’s recruiters (their customers) to the rest of us non-premium users (the product). And then they’ll be no different than Career Builder, Monster, and Dice — except that those sites never pretended to be anything other than what they are. Perhaps I’m overreacting, but I just get the sense that LinkedIn has jumped the shark. It’s a silly place, and I don’t much care for going there anymore.