DaedTech

Stories about Software


Intro to Unit Testing 8: Test Suite Management and Build Integration

It’s been over a month now since my last post in this series, and for that I sort of apologize. I think I’ve been channelling all of my instructive energy into my now-finished Pluralsight course, leaving the blog largely for opinions, screeds, and a random hiring announcement. So, let’s get back on track and wrap this thing up. I have this post and another one slated and then we can call it a day.

So far, I’ve talked quite a lot about how and when (and when not) to write unit tests. I’ve offered up some techniques for helping you isolate the classes that you want to test, including the use of test doubles. And finally, I offered some advice on how to get people to leave you alone and let you write tests. So now I’d like to turn and offer some advice beyond just writing the things. You need to live with them, manage them and leverage them over the course of time.

Managing the Suite

You’ve built them. So, now what? That question probably won’t occur to you right when you’re getting started. For the first few or even few dozen classes you test, you’ll alternate between some exasperation at spending extra time doing something new and satisfaction at, well, doing something new. But then, at some point, you’ll be sitting around and notice that your test suite has like 400 tests and think, “wow, that’s a lot of code… do I really want all this?”

That feeling will hit you even harder when you go to change something under a tight deadline and your real quick change makes a test go red. You’re pretty sure the test is broken because it was testing the old way of doing things, so you really just want to comment out the test and you wonder why it’s such a pain to change the code. Why do you have to waste so much time to change one line of code?

The answer to these questions lies in practice but also effective test suite management. If you let the unit test suite become a boat anchor, it will drag you down. Your frustration will be real and reasonable, rather than just a temporary product of you being in a hurry and unfamiliar with working in a code base under test. You need to take care to prevent this from happening, and I’m going to tell you how in this section.

Name Your Tests Clearly and Be Wordy

When you’re writing a unit test, you’re looking at code. But when you’re running your test suite, you aren’t most of the time, and when you’re trying to understand why a run or a build failed, you’re never looking at code. When the test suite is failing, you don’t want to waste time figuring out why. And having to open the IDE, navigate to the test, read the code and figure out the problem is a waste of time.

Don’t give your test methods names like “Test24” or “CustomerTest” or something. Instead, give them names like “Customer_IsValid_Returns_False_When_Customer_SocialSecurityNumber_Is_Empty”. That method name may seem ridiculous, especially if you’re used to giving methods short names, but trust me, you’ll be thankful for it. When your build is failing, which of these method names would you rather see an X next to? Would you rather be saying “looks like test 24 is failing,” or would you rather be saying, “oh, I wonder why someone made it so that an empty SSN is now considered valid?” If you say the first one, you’re lying.
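To make that concrete, here is a minimal sketch of what such a test might look like, using MSTest-style attributes; the Customer class and its IsValid method are hypothetical stand-ins rather than code from this series.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical domain class, included only to make the sketch self-contained.
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string SocialSecurityNumber { get; set; }

    public bool IsValid()
    {
        return !string.IsNullOrEmpty(SocialSecurityNumber);
    }
}

[TestClass]
public class CustomerTests
{
    [TestMethod]
    public void Customer_IsValid_Returns_False_When_Customer_SocialSecurityNumber_Is_Empty()
    {
        var customer = new Customer() { SocialSecurityNumber = string.Empty };

        // When this shows up as a red X in a build report, the name alone tells the story.
        Assert.IsFalse(customer.IsValid());
    }
}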

This may seem unimportant in the scheme of things, but it’s the difference between associating frustration and confusion with your test suite and viewing it as a warning system for potentially undesirable changes. The test suite needs to be communicating clearly to you what’s wrong. Descriptive test names help do that and they help you identify whether it’s your code or the test itself that needs to be changed in the face of changing requirements.

Make Your Test Suite Fast

Ruthlessly delete and cull out slow tests. I can’t say it more plainly than that. A good test suite runs in seconds, max. If yours starts to take minutes, or God forbid, hours, then it’s rotting and becoming useless to you. Think of it this way — if it takes several minutes to run the test suite, how often are you going to do it? Every time you make a change, or just when you check in? If it takes hours, will you ever run it voluntarily?

If your test suite takes a long time to run, nobody will run it. Short feedback loops are of paramount importance to developers, and we optimize for efficiency. If the unit test suite is inefficient, we’ll find other ways to get feedback. As such, it is incredibly important to ensure that your test suite always runs quickly. Treat it as if the rest of your team were waiting for any legitimate excuse not to use the test suite, and don’t let inefficiency be that excuse.

Test Code is First Class Code

A common mistake that I see among those relatively new to testing is test code that’s something of a mess. The code will be brittle, heavily duplicated, weird, and hard to read. In short, your tests and test classes will contain code that you wouldn’t be caught dead putting into production.

Don’t do that. Treat your test code as if it were any other code. Eliminate duplication. Factor common functionality out into methods. Be descriptive with naming and with the flow of the method. Keep that code clean. I get that there’s a desire when it comes to testing to make as much of a mess as possible in the “bug bash” sense of throwing chaos at the situation and proving that your code can handle it, but the chaos needs to be controlled, and you can control it by keeping your test code clean and maintainable. If the tests are clean and easy to maintain, people won’t mind going in periodically to make an adjustment. If they’re unruly, people will get annoyed and comment them out or stop running them.
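As a small, hedged illustration of what that means in practice (continuing with the hypothetical Customer from the earlier sketch, with made-up sample data), duplicated setup gets factored into a well-named helper, exactly as it would in production code:

[TestClass]
public class CustomerValidationTests
{
    // One place to change if constructing a valid customer ever gets more involved,
    // instead of a copy-pasted object initializer in every test.
    private static Customer BuildValidCustomer()
    {
        return new Customer()
        {
            FirstName = "Jane",
            LastName = "Doe",
            SocialSecurityNumber = "123-45-6789"
        };
    }

    [TestMethod]
    public void Customer_IsValid_Returns_True_When_SocialSecurityNumber_Is_Present()
    {
        var customer = BuildValidCustomer();

        Assert.IsTrue(customer.IsValid());
    }

    [TestMethod]
    public void Customer_IsValid_Returns_False_When_SocialSecurityNumber_Is_Cleared()
    {
        var customer = BuildValidCustomer();
        customer.SocialSecurityNumber = string.Empty;

        Assert.IsFalse(customer.IsValid());
    }
}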

Have a Single Assertion per Test

This is a subtle one, but it also goes toward maintainability. If you start writing tests that have 20 asserts in them, you may feel good that you’re exercising a whole section of the code, but really you’re making things hard for yourself later. If all 20 asserts pass (or at least the first 19 do), then all of them will be executed. But if the first one fails, none of the rest get executed. This means that in test methods with lots of asserts, it’s not always clear where they’re failing, which means it’s not always clear what’s going wrong.
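A hedged sketch of the difference, again using the hypothetical Customer: in the first test, a failure on the first assert hides everything after it, while the focused version fails or passes on exactly one behavior.

[TestClass]
public class SingleAssertExamples
{
    // Grab-bag test: if the FirstName assert fails, you learn nothing about the rest.
    [TestMethod]
    public void Customer_Properties_And_Validity_Test()
    {
        var customer = new Customer() { FirstName = "Jane", LastName = "Doe", SocialSecurityNumber = "123-45-6789" };

        Assert.AreEqual("Jane", customer.FirstName);
        Assert.AreEqual("Doe", customer.LastName);
        Assert.IsTrue(customer.IsValid());
    }

    // Focused test: one behavior, one assert, one obvious reason to go red.
    [TestMethod]
    public void Customer_IsValid_Returns_True_When_SocialSecurityNumber_Is_Populated()
    {
        var customer = new Customer() { SocialSecurityNumber = "123-45-6789" };

        Assert.IsTrue(customer.IsValid());
    }
}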

In order for your test suite to be an asset, it has to be a clear indicator of what’s going wrong. Which would you find more useful in your car: a series of many different lights with helpful diagrams that lit up to indicate a problem, or one unlabeled red light that came on whenever anything at all was wrong? If you had that latter light and it could mean anything from your gas being low to you being out of wiper fluid to imminent destruction of your transmission, I bet you’d just start ignoring it after a while.

Don’t Share State Between Your Tests

There is no more surefire way to drive yourself insane at some future date than by storing some kind of application state among unit tests being executed. What I mean is if you have some test A that sets a global counter variable to 1, and then you have another test B that depends on the global counter being set to 1 in order for it to succeed, you are in for a world of hurt.
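Here is a contrived sketch of exactly that trap; the static counter is invented for illustration.

[TestClass]
public class SharedStateTests
{
    // Mutable state that survives from one test method to the next.
    private static int _globalCounter = 0;

    [TestMethod]
    public void Test_A_Sets_The_Counter_To_One()
    {
        _globalCounter = 1;

        Assert.AreEqual(1, _globalCounter);
    }

    [TestMethod]
    public void Test_B_Depends_On_Test_A_Having_Run()
    {
        // Passes only if Test_A happened to execute first. On another machine,
        // with another runner, or six months from now, it may not.
        Assert.AreEqual(1, _globalCounter);
    }
}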

The problem is that there is no guarantee that the unit test runner will execute the tests in any particular order. What’s likely to happen is that your tests get executed in a particular order whenever you run them on your machine, so everything goes fine. But when the build machine runs them they fail. Weird. So you check them on your friend Bob’s machine, and they pass there. But on Alice’s machine, they fail. If you didn’t already know why this was happening because I just told you, can you imagine how much of your hair you’d pull out? You’d probably be checking the IDE version on those machines, compiler information, OS settings, and God only knows what else. It’d be a wild goose chase.

And imagine if it worked on everyone’s machine initially and then six months later started failing occasionally on the build machine. Machine isn’t the only failing dimension — there’s also time. So please, whatever you do, do not have your unit tests depend on the execution of a previous test. This practice, more than any other, is likely to lead to a rage-quitting of unit testing as a practice where you simply take all of them out of the build.

Encourage Others and Keep Them Invested

This sounds like a strange one to round out the section, but it’s important. If you’re the only one fighting the good fight with unit tests, it becomes daunting and exasperating. Everyone else’s reaction to failing tests is annoyance and they’re waiting for excuses just to stop altogether. You wind up feeling that you’re in an adversarial relationship with the team (I speak from experience here). But if you get others to buy in, you’re not shouldering the burden alone and you have help keeping the suite healthy and helpful.

Build Integration

When you first start out unit testing, the tests will be sort of disorganized and haphazard. You’ll write a few to get the hang of it and then maybe discard them. After a bit of that, you’ll start checking them into your solution (unless you’re an incorrigible weirdo or a liar). You do that, and the suite grows and, ideally, everyone is running it locally to keep things clean and be notified of potential breaking changes.

But you have to take it beyond that at some point if you want to realize the full value of the unit tests. They can’t just be a thing everyone remembers to do locally on pain of nagging emails or because someone will buy the team donuts or some other peer-pressure-oriented demerit system. Failing unit tests have to have real (read: automated) consequences. And the best way to do this is to make it so that failing unit tests mean a failing build.

If you’re in a shop that’s not as formal, this may be difficult at first. One handicap may be that you’re reading this and saying “what do you mean by ‘the build?'” If what you do is write code and take some kind of executable out of your project’s output directory on your machine and push it to a server or to your users, you’ve got some work to do before you think about integrating unit tests. You need a build.

A build is an automated process by which your source code is turned into a production-ready, deployable package. And it’s automated in the sense that it doesn’t involve you hitting Ctrl-Shift-B or Ctrl-F6 or whatever you do manually in your IDE to build. The Build, with a capital B, is a process that checks your code out of source control, builds it, runs checks and whatever else is necessary, perhaps increments the versioning of the executables, etc., and then spits out the final product that will be pushed to a server or burned onto a DVD or whatever. If you want to read more about build tools, you can google around about TeamCity, CruiseControl, TFS, FinalBuilder, Jenkins etc. And you don’t have to use a product like that — you can create your own using shell scripts or code if you choose.

Because of all the different options when it comes to programming languages, unit test technologies and build tools, I’m not going to offer a tutorial on how to integrate unit tests into your build. To be comprehensive, I’d need to give dozens of such tutorials. But what I will say is that your integration is going to take the same basic format no matter what tools you’re using. The build is a series of steps that passes if everything goes smoothly and the deliverables are ultimately generated. If a step in the build fails, then the build itself fails. What you need to do is add a step that involves running the unit tests. With this in place, you’re creating a situation where any failing unit test means that the entire build fails.

Conceptually, this is pretty straightforward. Unit test runners can be run in command line fashion and they’ll generate a return value of some kind. So the build tool needs to examine the test runner’s output for an error code. If it finds one, it puts the brakes on the whole operation.
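To sketch that concept in code rather than point at any particular build product, here is roughly what a home-grown build step might look like; the runner executable, arguments, and messages are all made up for illustration.

using System;
using System.Diagnostics;

public static class RunUnitTestsBuildStep
{
    public static int Main()
    {
        // Hypothetical command-line test runner and test assembly.
        var testRun = Process.Start("TestRunner.exe", "MyProduct.Tests.dll");
        testRun.WaitForExit();

        // By convention, a nonzero exit code means at least one test failed,
        // so the step reports failure and the build stops here.
        if (testRun.ExitCode != 0)
        {
            Console.Error.WriteLine("Unit tests failed. Failing the build.");
            return testRun.ExitCode;
        }

        Console.WriteLine("Unit tests passed. Continuing the build.");
        return 0;
    }
}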

It may seem extreme at first to torpedo the whole build because of a failing unit test, but when you think about it, what else should possibly happen? Why would you want a process that allowed you to ship code knowing that it was defective in a way that it didn’t used to be? That’s amateur hour. And what’s more, if your team starts understanding that failed unit tests mean a failed build, they’ll be sure to run the tests before check-in so that they don’t fail. It will become a natural part of your process, and the quality of your software will be dramatically improved for it.


Static Analysis, NDepend, and a Pluralsight Course

I absolutely love statistics. Not statistics as in the school subject — I don’t particularly love that branch of mathematics with its binomial distributions and standard deviations and whatnot. I once remarked to a friend in college that statistics-the-subject seemed like the ‘science’ of taking a guess and then rigorously figuring out how wrong you were. Flippant as that assessment may have been, statistics-the-subject has hardly the elegant smoothness of calculus or the relentlessly logical pursuit of discrete math. Not that it isn’t interesting at all — to a math geek like me, it’s all good — but it just isn’t really tops on my list.

But what is fascinating to me is tabulating outcomes and gamification. I love watching various sporting events on television and keeping track of odd things. When watching a basketball game, I always track the “run” the teams are on before the announcers think to say something like “Chicago is on a 15-4 run over the last 6:33 this quarter.” I could have told you that. In football, if the quarterback is approaching a first half passing record, I’m calculating the tally mentally after every play and keeping track. Heck, I regularly watch poker on television not because of the scintillating personalities at the tables but because I just like seeing what cards come out, what hands win, and whether the game is statistically normal or aberrant. This extends all the way back to my childhood when things like my standardized test scores and my class rank were dramatically altered by me learning that someone was keeping score and ranking them.

I’m not sure what it is that drives this personality quirk of mine, but you can imagine what happened some years back when I discovered static analysis and then NDepend. I was hooked. Before I understood what the Henderson-Sellers Lack of Cohesion in Methods score was, I knew that I wanted mine to be lower than other people’s. For those of you not familiar, static analysis is a way to examine your code without actually executing it and retroactively seeing what happened. Static analysis, (over) simplified, is an activity that examines your source code and makes educated guesses about how it will behave at runtime and beyond (i.e. maintenance). NDepend is a tool that performs static analysis at a level and with an amount of detail that makes it, in my opinion, the best game in town.

After overcoming an initial pointless gamification impulse, I learned to harness it instead. I read up on every metric under the sun and started to understand what high and low scores correlated with in code bases. In other words, I studied properties of good code bases and bad code bases, as described by these metrics, and started to rely on my own extreme gamification tendencies in order to drive my work toward better code. It wasn’t just a matter of getting in the habit of limiting my methods to the absolute minimum in size or really thinking through the coupling in my code base. I started to learn when optimizing to improve one metric led to a decline in another — I learned lessons about design tradeoffs.

It was this behavior of seeking to prove myself via objective metrics that got me started, but it was the ability to ask and answer lots of questions about my code base that kept me coming back. I think that this is the real difference maker when it comes to NDepend, at least for me. I can ask questions, and then I can visualize, chart and track the answer in just about every conceivable way. I have a “Moneyball” approach to code, and NDepend is like my version of the Jonah Hill character in that movie.

Because of my high opinion of this tool and its importance in the lives of developers, I made a Pluralsight course about it. If you have a subscription and have any interest in this subject at all, I invite you to check it out. If you’re not familiar with the subject, I’d say that if your interest in programming breaks toward architecture — if you’re an architect or an aspiring architect — you should also check it out. Static analysis will give you a huge leg up on your competition for architect roles, and my course will provide an introduction for getting started. If you don’t have a Pluralsight subscription, I highly recommend trying one out and/or getting one. This isn’t just a plug for me to sell a course I’ve made, either. I was a Pluralsight subscriber and fan before I ever became an author.

If you get a chance to check it out, I hope you enjoy.


Module Boundaries and Demeter

I was doing a code review recently, and I saw something like this:

public class SomeService
{
    public void Update(Customer customer)
    {
        //Do update stuff
    }

    public void Delete(int customerId)
    {
        //Do delete stuff
    }
}

What would you say if you saw code like this? Do you see any problem in the vein of consistent abstraction or API writing? It’s subtle, but it’s there (at least as far as I’m concerned).

The problem that I had with this was the mixed abstraction. Why do you pass a Customer object to Update and an integer to Delete? That’s fairly confusing until you look at the names of the variables. The method bodies are elided because they shouldn’t matter, but to understand the reason for the mixed abstraction you’d need to examine them. You’d need to see that the Update method uses all of the fields of the customer object to construct a SQL query and that the corresponding Delete method needs only an ID for its SQL query. But if you need to examine the methods of a class to understand the API, that’s not a good abstraction.

A better abstraction would be one that had a series of methods that all had the same level of specificity. That is, you’d have some kind of “Get” method that would return a Customer or a collection of Customers and then a series of mutator methods that would take a Customer or Customers as arguments. In other words, the methods of this class would all be of the form “get me a customer” or “do something to this customer.”
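Sketched out in code, that more consistent seam might look something like this; ICustomerService is a hypothetical name of my choosing, not something from the reviewed code.

using System.Collections.Generic;

public interface ICustomerService
{
    // "Get me a customer" (or several)...
    IEnumerable<Customer> GetCustomers();

    // ..."do something to this customer."
    void Update(Customer customer);
    void Delete(Customer customer);
}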

The only problem with this code review was that I had just explained the Law of Demeter to the person whose code I was reviewing. So this code:

public void DeleteCustomer(int customerId)
{
    string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customerId;
    //Do some sql stuff...
}

was preferable to this:

public void DeleteCustomer(Customer customer)
{
    string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customer.Id;
    //Do some sql stuff...
}

The reason is that you don’t want to accept an object as a method parameter if all you do with it is use one of its properties. You’re better off just asking for that property directly rather than taking a needless dependency on the containing object. So was I a hypocrite (or perhaps just indecisive)?

Well, the short answer is “yes.” I gave a general piece of advice one week and then gave another piece of advice that contradicted it the next. I didn’t do this, however, because of caprice. I did it because pithy phrases and rules fail to capture the nuance of architectural decisions. In this case the Law of Demeter is at odds with providing a consistent abstraction. And, I value the consistent abstraction more highly, particularly across a public seam between modules.

What I mean is, if SomeService were an implementation of a public interface called ICustomerService, what you’d have is a description of some methods that manipulate Customer. How do they do it? Who knows… not your problem. Is the customer in a database? Memory? A file? A web service? Again, as consumers of the API we don’t know and don’t care. So because we don’t know where and how the customers are stored, what sense would it make if the API demanded an integer ID? I mean, what if some implementations use a long? What if Customers are identified elsewhere by SSN for deletion purposes? The only way to be consistent across module boundaries (and thus generalities) is to deal exclusively in domain object concepts.

The Law of Demeter is also known as the Principle of Least Knowledge. At its (over) simplest, it is a dot counting exercise to see if you’re taking more dependencies than is strictly necessary. This can usually be enforced by asking yourself if your methods are using any objects that they could get by without using. However, in the case of public facing APIs and module boundaries, we have to relax the standard. Sure, the SQL Server version of this method may not need to know about the Customer, but what about any scheme for deleting customers? A narrow application of the Law of Demeter would have you throw Customer away, but you’d be missing out by doing this. The real question to ask in this situation is not “what is the minimum that I need to know” but rather “what is the minimum that a general implementation of what I’m doing might need to know.”
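Continuing the hypothetical ICustomerService sketch from above, the seam keeps dealing in Customer, and the SQL-flavored implementation is free to use only the ID internally.

public class SqlCustomerService : ICustomerService
{
    public IEnumerable<Customer> GetCustomers()
    {
        //Do some sql stuff...
        return new List<Customer>();
    }

    public void Update(Customer customer)
    {
        //Do update stuff
    }

    public void Delete(Customer customer)
    {
        // Only the ID is needed here, but that detail stays behind the seam
        // instead of leaking into the API as an int parameter.
        string theSqlQuery = "DELETE FROM Customer WHERE CustomerId = " + customer.Id;
        //Do some sql stuff...
    }
}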


Code Generation Seems Like a Failure of Vision

I think that I’m probably going to take a good bit of flack for this post, but you can’t win ’em all. I’m interested in contrary opinions and arguments because my mind could be changed. Nevertheless, I’ve been unable to shake the feeling for months that code generation is just a basic and fundamental design failure. I’ve tried. I’ve thought about it in the shower and on the drive to work. I’ve thought about it while considering design approaches and even while using it (in the form of Entity Framework). And it just feels fundamentally icky. I can’t shake the feeling.

Let me start out with a small example that everyone can probably agree on. Let’s say that you’re writing some kind of GUI application with a bunch of rather similar windows. And let’s say that mostly what you do is take all of the presentation logic for the previous window, copy, paste and adjust to taste for the next window. Oh noes! We’re violating the DRY principle with all of that repetition, right?

What we should be doing instead, obviously, is writing a program that duplicates the code more quickly. That way you can crank out more windows much faster and without the periodic fat-fingering that was happening when you did it manually. Duplication problem solved, right? Er, well, no. Duplication problem automated and made worse. After all, the problem with duplicate code is a problem of maintenance more than initial push. The thing that hurts is later when something about all of that duplicated code has to be changed and you have to go find and do it everywhere. I think most reading would agree that code generation is a poor solution to the problem of copy and paste programming. The good solution is a design that eliminates repetition and duplication of knowledge.

I feel as though a lot of code generation that I see is a prohibitive micro-optimization. The problem is “I have to do a lot of repetitive coding” and code generation solves this problem by saying, “we’ll automate that coding for you.” I’d rather see it solved by saying, “let’s step back and figure out a better approach — one in which repetition is unnecessary.” The automation approach puts a band-aid on the wound and charges ahead, risking infection.

For instance, take the concept of List in C#. List is conceptually similar to an array, but it automatically resizes, thus abstracting away an annoying detail of managing collections in languages from days gone by. I’m writing a program and I think I want an IntList, which is a list of integers. That’s going along swimmingly until I realize that I need to store some averages in there that might not be round numbers, so I copy the source code of IntList to DoubleList and do a “Find-And-Replace” to swap Int for Double. Maybe later I also do that with string, and then I think, “geez — someone should write a program where you just tell it a type and it generates a list type for it.” Someone does, and then life is good. And then, later, someone comes along with the concept of generics/templates and everyone feels pretty sheepish about their “ListGenerator” programs. Why? Because someone actually solved the core problem instead of coming up with obtuse, brute-force ways to alleviate the symptoms.
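For anyone who did not live through that arc, here is a hedged before-and-after sketch; IntList and DoubleList are invented stand-ins for the hand-written or generated collection types.

// Before generics: one nearly identical class per element type, written by hand
// or cranked out by a "ListGenerator" program.
public class IntList    { /* Add(int item), resizing logic, etc. */ }
public class DoubleList { /* Add(double item), the same logic, duplicated */ }

// With generics, the language solves the core problem and the generator becomes pointless.
public class AveragesReport
{
    public static void Main()
    {
        var counts = new System.Collections.Generic.List<int>();
        var averages = new System.Collections.Generic.List<double>();

        counts.Add(3);
        averages.Add(98.6);
    }
}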

And when you pull back and think about the whole idea of code generation, it’s fairly Rube-Goldbergian. Let’s write some code that writes code. It makes me think of some stoner ‘brainstorming’ a money making idea:

(Image: a scrawled “Inventions” brainstorm list.)

I realize that’s a touch of hyperbole, but think of what code generation involves. You’re going to feed code to a compiler and then run the compiled program which will generate code that you feed to the compiler, again, that will output a program. If you were to diagram that out with a flow chart and optimize it, what would you do? Would you get rid of the part where it went to the compiler twice and just write the program in the first place? (I should note that throughout this post I’ve been talking about this circular concept rather than, say, the way ASP or PHP generate HTML or the way Java compiles to bytecode — I’m talking about generating code at the same level of abstraction.)

The most obvious example I can think of is the aforementioned Entity Framework that I use right now. This is a framework utility that uses C# in conjunction with a markup language (T4) to generate text files that happen to be C# code. It does this because you have 100 tables in your database and you don’t want to write data transfer objects for all of them. So EF uses reflection and IQueryable with its EDMX to handle the querying aspect (which saves you from the fate we had for years of writing DAOs) while using code generation to give you OOP objects to represent your data tables. But really, isn’t this just another band-aid? Aren’t we really paying the price for not having a good solution to the Impedance Mismatch Problem?

I feel a whole host of code gen solutions is also born out of the desire to be more performant. We could write something that would look at a database table and generate, on the fly, using reflection, a CRUD form at runtime for that table. The performance would be poor, but we could do it. However, confronted with that performance, people often say, “if only there were a way to automate the stuff we want but to have the details sorted out at compile time rather than runtime.” At that point the battle is already won and the war already lost, because it’s only a matter of time until someone writes a program whose output is source code.

I’m not advocating a move away from code generation, nor am I impugning anyone for using it. This post is more in the same vein as ones that I’ve written before (about not using files for source code and avoiding using casts in object oriented languages). Code generation isn’t going anywhere anytime soon, and I know that I’m not even in a position to quit my reliance on it. I just think it’s time to recognize it as an inherently flawed band-aid rather than to celebrate it as a feat of engineering ingenuity.



Notes on Job Hopping Part 4: Free Agency

I’ve been very busy of late, so I had let this series slip a bit by the wayside. But I was taking a break from recording my Pluralsight course to sort of mindlessly read tweets when I noticed this one:

(Embedded tweet.)

Juxtapositions as Outrage Factories

There is a fine line in life when it comes to juxtapositions. On one side of the line is genuine profundity, and on the other side is schmaltzy non sequitur and/or demagoguery. And I’d argue that the line is not entirely subjective, as it might seem. There is a Tupac Shakur lyric that is sort of raw and powerful that says, “they’ve got money for wars but can’t feed the poor.” If you’re a bleeding heart type, you’ll probably agree wholeheartedly. If you’re more of a pragmatist, you might say, “well, there are degrees of poverty and need, and what good is feeding people if some foreign entity is lobbing cruise missiles at them?” But you have to admit that he’s comparing apples to apples, so to speak. There is some finite pool of money, and some of that money is being spent on blowing one another up instead of feeding people that are hungry.

On the other side of this line of juxtaposition lie tiresome canards like “they can send a man to the moon but they can’t make a computer that doesn’t crash?!?” Yeah, because those are the same engineering problem. Or how about “a country can’t run a fiscal deficit because I don’t run my own budget that way at home with the groceries and the car payments!” Right, because macroeconomics is like your budget spreadsheet. And you also have the ability to print money to pay off your debts.

Back to the tweet. I think it sits right on the juxtaposition line. On the one hand, it seems to be sort of garden-variety line-employee populism (“pointy-haired bosses bad, knowledge workers good”) that ignores an important difference between athlete-coach and employee-manager — namely the inverted money and fame related power dynamic of athletes and coaches as compared to line employees and managers. LeBron James, not Erik Spoelstra, puts butts in seats and gets paid a king’s ransom, even if Spoelstra does theoretically call the shots. But it does raise an interesting question that isn’t the one that’s literally being asked (the answer to the tweet’s question being easy: managers make more money, have offices and get to boss people around).

Overvaluing Management

The interesting question that is raised is “why are line managers and other overhead personnel overvalued?” Let me say here that I’m not going to spend this post talking in much detail about that. I poured a lot of time and thought into it for the concluding part of my Expert Beginner E-Book, so I’ll summarize here by saying that there is a bit of a societal pyramid scheme that occurs when it comes to work. We collectively agree on a system where 20-somethings that are new to the workforce do all the grunt work for little pay and that, over the course of our careers, we accumulate vacation time, bigger salaries, larger desks and offices, and the ‘right’ to tell others what to do rather than doing it ourselves. We pat ourselves on the back for ‘earning’ it, but often that assessment is pretty questionable. It’s more a matter of waiting our turn.

Managers are overvalued in organizations because of a collective endorsement of the same kind of reasoning that drives social security — disproportionate dues paying now, followed by disproportionate dues receiving later. The entry level folks tolerate overvalued mid-level management either because they have no choice or because they someday want to be that overvalued mid-level management which is, of course, a fairly sweet deal once you get into the club.

But the overvaluation of management goes deeper than that and finds its real roots in the undervaluation of non-management. In other words, if the Product X Team, consisting of five knowledge workers and a line manager, delivers a spectacular and wildly profitable success, is this owed to the team or to the manager? I’d say that, in a very real way, this is comparable to sports teams in that a good manager will account for some variance and some wins here and there by managing egos, strategizing and motivating, but at the end of the day no one is going to manage a hopelessly underqualified team into serious contention. But, unlike sports teams, the spotlight in the corporate world often falls onto the manager because the manager is in a position to seize it and because it’s simply easier to give credit to the “leader” than to parcel it out in equal portions to the team.

So why do young athletes aspire to be LeBron James and not Erik Spoelstra while young academics want to be the boss rather than the inventor? Because Steve Jobs. Because Warren Buffett. Because bosses, like athletes, represent the competitive pinnacle. It’s really the same thing in the end — a desire to dominate. Athletes, CEOs and, to a lesser extent, line managers, have gotten to where they are by defeating foes in direct competition, and that’s appealing to children in an environment where peer competition is natural and amplified (grade school).

Positive Sum Makers

The overvaluation of managers (and athletes, actually) is a zero-sum outlook on the world. You become a success by competing against others and causing them to fail. There are winners and losers and the glory lies in being a winner in this game. But there is another avenue, which is that taken by who I’d call the Inventor or the Maker — the positive sum player. Makers (and I use this umbrella to describe the overwhelming majority of line-level programmers as well as engineers and other people who produce work product) shine by making the world a better place for all. They transmute their creativity and ambition into products and services that improve the standard of living and better people’s lives. There doesn’t need to be a loser in this game for the Maker to be a winner.

Make no mistake — there is certainly competition among Makers. But it is a healthy, positive sum competition. They compete against one another and themselves to invent great things, to do it quickly, and to do it well. I may want to be the best programmer on earth and that desire may drive me to some late nights hitting the books and cranking out code, but I can produce helpful things without “defeating” some other programmer in some kind of game or competition for position.

The interplay between Makers and what I’ll call Competitors has historically been an uneasy balance. In the time before widespread knowledge workers in corporations, you had the mad-scientist/inventor archetype as your Makers: the Edisons and Teslas of the world. Then with the technological growth of the 20th century, the Makers were kind of funneled into working as Organization Man where they went from being valued professionals in the 50s to eventually being Dilbert in more recent times — toiling under the inept stewardship of a pointy haired boss that also happened to be a marginally victorious Competitor.

Makers were in a difficult position, as they really just wanted to make. Working your way up the corporate ladder as Competitor requires stopping Making and engaging in zero sum gamesmanship that doesn’t interest that archetype, but not doing so meant obeying the micromanaging and often incompetent Competitors and having the fun sucked out of the act of making. Finding organizations that got this right has become so rare that places like Valve and Github are the stuff of legends. Historically, Makers could try going off on their own, but that meant giving up a lot of the Making as well, since they’d then have to worry about their own sales, marketing, accounting, etc.

But the Makers are going to have the last laugh.

“Developernomics”

For Makers that happen to be software developers, times are pretty good, and they’ve been pretty good for years. Most developers complain about how annoying it is that recruiters won’t stop calling them to try to get them to go to interviews for jobs that will pay them more money. Let me repeat that. Developers complain that companies won’t leave them alone when it comes to offering them work. The reason that this is happening is that the demand for programming is absolutely exploding as we transition into a world where the juggernaut of endless automation has finally lurched up to ramming speed and is methodically wiping out other types of jobs at an unbelievable rate. Quite simply, the days of any company not being a “software company” are drawing to a close, as described in an article entitled “Developernomics.”

Flush with opportunities, developers are job hopping at a rapidly accelerating rate. Companies are ceasing to bother with asking developers why their resumes feature so much job hopping — they’re so hard up for programmers that they don’t bother to ask. More and more, developers are seizing on any excuse or any annoyance to fly the coop and go work on something else. This might be organizational stupidity, but it also might simply be something like “I’m tired of C# and want to try out Ruby for a while.” The mobility within the job market for developers is pretty much unprecedented for a Maker position.

In fact, it’s starting to look a lot more like another type of profession that is also not zero sum: high skill jobs for which there is virtually inelastic demand. Two that come to mind are doctors and lawyers. As long as the world has sick people it needs doctors, and as long as there are disputes (and lawyers making laws requiring lawyers for things), it needs lawyers (at least until the automation juggernaut automates both of these professions — I put an upper bound of 50 years on the full automation of the medical profession making human doctors obsolete). Both doctors and lawyers have a different sort of work model than most Competitors and Makers. They get a lot of education and then they start their own practices, band together in small groups, or join a large existing practice. But whichever option they choose, their affiliations tend to be very fluid and their ability to work for themselves almost a given, if they so choose.

Let’s stop for a second now. These high skill knowledge workers are highly in demand, have a great deal of fluidity in their working relationships and association, and often work for themselves. Doesn’t that sound familiar? Doesn’t it sound like software these days where some people freelance, some people bounce around among startups, and others just job hop between larger corporations to get themselves promoted and paid quickly? And doesn’t it seem like more and more developers are working at shops that are just software development consultancies? Isn’t it starting to seem like the question shouldn’t be “is job hopping okay,” but rather be “for how much longer are we going to bother with typical corporate jobs from which to hop?”

I think the handwriting is on the wall for the future of software development. I think that we’re careening toward a future where developers working for corporations for any length of time is such an anachronism that it isn’t considered a serious possibility. Developers aren’t going to job hop at all because they won’t have traditional corporate jobs. Due to increased globalization, networking, and interconnectivity, developers have sort of a de facto guild — their association with the global network of developers. Promotion, marketing, sales, and even some other aspects of managing one’s own consultancies are collectivized to a degree as developers have a network of friends to help them with such things. They often become moot points because who needs to bother with sales and marketing when business is banging down your door already?

So to return to the sports world and metaphor that started all of this, I’d say the future of developers doesn’t involve an ebb in “job hopping” but rather the opposite: a codified establishment of extreme job hopping as the status quo. Developers are going to become free agents like athletes who drift from team to team as their title aspirations and salary negotiations dictate. Like mercenary athletes, developers are not especially interested in things like culture, domain knowledge, corporate slogans and mission statements and all of that company man, corporate identity stuff. They’re interested in challenging projects, learning, making a few bucks and heading out to the next gig. In a previous post in this series, I had said the answer to the question “should I job hop” is “probably.” In the future, I think the answer to this question for developers will be another question: “uh, as opposed to what?”