DaedTech

Stories about Software


Notes on Job Hopping: Millennials and Their Ethics

For those who have been reading my more recent posts, which have typically been about broad-level software design or architecture concerns, I should probably issue a rant alert. This is somewhat of a meandering odyssey through the subject of the current prevalence of job hopping, particularly among the so-called millennial generation.

I thought I might take a whack today at this rather under-discussed subject in the field of software development. It’s not that I think the subject is particularly taboo, especially when discussed in blog comments or discussion forums as opposed to with one’s employer. I just think the more common approach to this subject is sort of quietly to pretend that it applies to people abstractly and not to anyone participating in a given conversation. This is the same way one might approach discussing the “moral degradation of society” — it’s a thing that happens in general, but few people look immediately around themselves and start assigning blame.

So what of job hopping and the job hopper? Is the practice as career-threatening as it ever was, or is viewing it that way a throwback to a rapidly dying age in the time of Developernomics and the developer as “king”? Is jumping around a good way to move up rapidly in title and pay, or is it living on borrowed time during an intense boom cycle in the demand for software development? Are we in a bubble whose bursting could leave the job hoppers among us as the people left standing without a chair when the music stops?

Before considering those questions, however, the ethics of job hopping bears some consideration. If society tends to view job hopping as an unethical practice, then the question of whether it’s a good idea or not becomes somewhat akin to the question of whether cheating on midterms in college is a good idea or not. If you do it and get away with it, the outcome is advantageous. Whether you can live with yourself or not is another matter. But is this a good comparison? Is job hopping similar to cheating?

To answer that question, I’d like to take a rather indirect route. I think it’s going to be necessary to take a brief foray into human history to see how we’ve arrived at the point that the so-called “millennials,” the generation of people age 35 and younger or thereabouts, are the motor that drives the software development world. I’ve seen the millennials called the “me generation,” but I’ve also seen that label applied to baby boomers as well. I’d venture a guess that pretty much every generation in human history has muttered angrily about the next generation in this fashion shortly after screaming at people to leave their collective lawn. “They’re all a bunch of self-involved, always on our lawn, narcissist, blah, blah, blah, ratcha-fratcha kids these days…”

It’s as uninventive as it is emblematic of sweeping generalizations, and if this sort of tiresome rhetoric were trotted out about a gender or racial demographic rather than an age-based one, the speaker would be roundly dismissed as a knuckle-dragging crank. But beneath the vacuous stereotyping and “us versus them” generational pissing matches lie some real and interesting shifting ethical trends and philosophies. And these are the key to understanding the fascinating and subtle shifts in both generational and general outlook toward employment.

Throughout most of human history, choice (about much of anything) was the province of the rich. Even in a relatively progressive society, such as ancient Greece, democracy was all well and good for land-owning, wealthy males. But everyone else was kind of out in the cold. People hunted and farmed, worked as soldiers and artisans, and did any number of things when station in life was largely determined by pragmatism, birth, and a lack of specialization of labor. And so it went pre-Industrial Revolution. Unless you were fortunate enough to be a noble or a man of wisdom, most of your life was pretty well set in place: childhood, apprenticeship/labor, marriage, parenthood, etc.

Even with the Industrial Revolution, things got different more than they got better for the proles. The cycle of “birth-labor-marriage-labor-parenthood-labor-death” just moved indoors. Serfs graduated to wage slaves, but it didn’t afford them a lot of leisure time or social mobility. As time marched onward, things improved in fits and starts from a labor-specialization perspective, but it wasn’t until a couple of world wars took place that the stars aligned for a free-will sea change.

Politics, technology, and the unionized collective bargaining movement ushered in an interesting time of post-war boom and prosperity following World War II. A generation of people returned from wars, bought cars, moved to suburbs, and created a middle class free from the share-cropping-reminiscent, serf-like conditions that had reigned throughout human history. As they did all of this, they married young, had lots of children, settled down in regular jobs and basically did as their parents had as a matter of tradition.

And why not? Cargo cult is what we do. Millions of people don’t currently eat shellfish and certain kinds of meat because doing so thousands of years ago killed people, and religious significance was ascribed to this phenomenon. A lot of our attitudes toward human sexuality were forged in the fires of Medieval outbreaks of syphilis. Even the “early to bed, early to rise” mantra and summer breaks for children so ingrained in our cultures are just vestigial throwbacks to years gone by when most people were farmers. We establish practices that are pragmatic. Then we keep doing them just because.

But the WWII veterans gave birth to a generation that came of age during the 1960s. And, as just about every generation does, this generation began superficially to question the traditions of the last generation while continuing generally to follow them. These baby boomers staged an impressive series of concerts and protests, affected real social policy changes, and then settled back into the comfortable and traditional arrangements known to all generations. But they did so with an important difference: they were the first generation forged in the fires of awareness of first-world, modern choice.

What I mean by that is that for the entirety of human history, people’s lots in life were relatively predetermined. Things like work, marriages, and having lots of children were practical necessities. This only stopped being true for the masses during the post-WWII boom. The “greatest generation” was the first generation that had choice, but the boomers were the first generation to figure out that they had choice. But figuring things like that out doesn’t really go smoothly because of the grip that tradition holds over our instinctive brains.

So the boomers had the luxury of choice and the knowledge of it, to an extent. But the old habits died hard. The expression of that choice was alive in the 1960s and then gradually ran out of steam in the 1970s. Boomers rejected the traditions and trappings of recorded human history, but then, by the 80s, they came around. By and large, they were monogamous parents working steady jobs, in spite of the fact that this arrangement was now purely one of comfort rather than necessity. They could job hop, stay single, and have no children if they chose, and they wouldn’t be adversely affected in the way a farmer would have in any time but modernity.

But even as they were settling down and seeing the light from a traditional perspective, a kind of disillusionment set in. Life is a lot harder in most ways when you don’t have choices about your fate, but strangely easier in others. Once you’re acing the bottom levels of Maslow’s Hierarchy, it becomes a lot easier to think, “if only I had dated more,” or, “I’m fifty and I’ve given half my life to this company.” And, in the modern age of choices, the boomers had the power to do something about it. And so they did.

In their personal lives, they called it quits and left their spouses. In the working world, they embarked on a quest of deregulation and upheaval. In the middle of the 20th century, the corporation had replaced the small town as the tribal unit of collective identity, as described in The Organization Man. The concept of company loyalty and even existential consistency went out the window as mergers and acquisitions replaced blue chip stocks. The boomers became the “generation of divorce.” Grappling with tradition on one side and choice on the other, they tried to serve both masters and failed with gritty and often tragic consequences.

And so the millennials were the children of this experience. They watched their parents suffer through messy divorces in their personal lives and in their professional lives. Companies to which their parents had given their best years laid them off with a few months of severance and a pat on the butt. Or perhaps their parents were the ones doing the laying off — buying up companies, parceling them up and moving the pieces around. Whether personal or corporate, these divorces were sometimes no-fault, and sometimes all-fault. But they were all the product of heretofore unfamiliar amounts of personal choice and personal freedom. Never before in human history had so many people said, “You know what, I just figured out after 30 years that this isn’t working. So screw it, I’m out of here.”

So returning to the present, I find the notion that millennials harbor feelings of entitlement or narcissism to be preposterous on its face. Millennials don’t feel entitlement — they feel skepticism. They hesitate to commit, and when they do, they commit lightly and make contingency plans. They live with their parents longer rather than committing to the long-term obligation of a mortgage or even a lease. They wait until they’re older to marry and have children rather than wasting their time and affections on starter spouses and doomed relationships. And they job hop. They leave you before you can leave them, which, as we both know, you will sooner or later.

That generally doesn’t sit well with the older generation for the same reasons that the younger generation’s behavior never sits well with the older one. The older generation thinks, “man, I had to go through 20 years of misery before I figured out that I hated my job and your mother, so who are you to think you’re too good for that?” It was probably the same way their parents got angry at them for going to Woodstock instead of settling down and working on the General Motors assembly line right out of high school. Who were they to go out cavorting at concerts when their parents had already been raising a family after fighting in a war at their age?

So we can circle back around to the original questions by dismissing the “millennials are spoiled” canard as a reason to consider modern job hopping unethical. Generational stereotyping won’t cut it. Instead, one has to consider whether some kind of violation of an implied quid pro quo happens. Do job hoppers welch on their end of a bargain, leaving a company that would have stayed loyal to them were the tables turned? I think you’d be hard pressed to make that case. Individuals are capable of loyalty, but organizations are capable of only manufactured and empty bureaucratic loyalty, the logical outcome of which is the kind of tenure policies that organized labor outfits wield like cudgels to shield workers from their own incompetence. Organizations can only be forced into loyalty at metaphorical gunpoint.

Setting aside both the generational ad hominem and the notion that job hopping is somehow unfair to companies, I can only personally conclude that there is nothing unethical about it and that the consideration of whether or not to job hop is purely pragmatic. And really, what else could be concluded? I don’t think that much of anyone would make the case that leaving an organization to pursue a start-up or move across the country is unethical, so the difference between “job leaver” and “job hopper” becomes purely a grayscale matter of degrees.

With the ethics question in the books on my end, I’ll return next time around to discuss the practical ramifications for individuals, as well as the broader picture and what I think it means for organizations and the field of software development going forward. I’ll talk about the concept of free agency, developer cooperation arrangements, and other free-wheeling speculation about the future.


Moving Away from State: State--

In 1968, Edsger Dijkstra issued a letter entitled “Go To Statement Considered Harmful,” and the age of structured programming was born. The letter was a call to stop programmers of the time from creating ad-hoc control flow structures using goto statements and to instead use higher-level constructs, like loops and conditionals, for manipulating flow through a function (contrary, I think, to the oft-attributed position that “goto is evil”). This gave rise to structured programming because it meant that progress in a method would be more visually trackable.

But I think there’s an interesting underlying concept here that informs a lot of shifts in programming practice. Specifically, I’m referring to the idea that Dijkstra conceived of “goto” as existing at an inappropriate level of abstraction when stacked with concepts like “if” and “while” and “case.” The latter are elements of logical human reasoning while the former is a matter of procedure for a compiler. Or, to put it another way, the control flow statements tell humans how to read business logic, and the goto tells the compiler how to execute a program.

“State Considered Harmful” seems to be the new trend, ushering in the renaissance of functional programming. Functional programming itself is not a new concept. Lisp has been around for half a century, meaning that it actually predates object-oriented programming. Life always seems to be full of cycles, and this is certainly an example. But there’s more to it than people seeking new solutions in old ideas. The new frontier in faster processing and better computer performance is parallel processing. We can’t fit a whole lot more transistors on a chip, but we can fit more chips in a computer — and design schemes to split the work between them. And in order to do that successfully, it’s necessary to minimize the amount of temporarily stored information, or state.

I’ve found myself headed in that direction almost subconsciously. A lot of the handiest tools push you that way. I’ve always loved the fluent Linq methods in C#, and those generally serve as a relatively painless introduction to functional programming. You find yourself gravitating away from nested loops and local variables in favor of chained calls that express semantically what you want in only a line or two of code. But gravitating toward functional programming style goes beyond just using something like Linq, and it involves favoring chains of methods in which the output is a pure, side-effect-free function of the input.
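
To put a concrete face on that, here is the sort of before-and-after shift I mean. This is just a sketch: the Item type and the method names are invented for illustration, and it assumes the usual System.Collections.Generic and System.Linq usings.

public decimal GetInStockTotalImperative(IEnumerable<Item> items)
{
    decimal total = 0;            // local, mutable state
    foreach (var item in items)
    {
        if (item.IsInStock)
            total += item.Price;  // mutated on every iteration
    }
    return total;
}

public decimal GetInStockTotalFunctional(IEnumerable<Item> items)
{
    // No locals and no mutation -- the output is a pure function of the input.
    return items.Where(item => item.IsInStock).Sum(item => item.Price);
}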


Here are some of the stops on my own journey away from state.

  1. Elimination of global state. The first step in this journey for me was to realize how odious global state is for the maintainability of an application.
  2. No state variables for communication between methods. Next to go for me were ‘flag’ variables that kept track of things in a class between method calls. As I became more proficient in unit testing, I found this to be a huge headache as it created complicated and brittle test setup, and it was really a pointless crutch anyway.
  3. Immutable > mutable. I’ve blogged about an idea I called “pointless mutability,” but in general I’ve come to favor immutable constructs over mutable ones whenever possible for simplicity (see the sketch just after this list).
  4. State isolation — for instance, model objects and domain objects for business logic state and viewmodels/controllers for GUI state. Aside from that, a lot of applications for which I am the architect retain virtually no state information. Services and other application scaffolding types of classes simply have interfaced references to their collaborators, but really keep track of nothing between method calls.
  5. Persistence ignorance — letting the application pretend that its state is storage. For less sophisticated (CRUD-style) applications, I’ve favored scenarios in which most of the code’s state is abstracted into a lower layer of the application and implemented only externally. In other words, if things are simple, let something like a database or file system be your application’s state. Why cache and over-complicate until/unless performance is an issue?
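
To illustrate the third item in that list, here is a minimal sketch of what favoring immutability tends to look like in practice. The TemperatureReading type is invented for this example, and the snippet assumes the usual System using for DateTime.

public class TemperatureReading
{
    public int Fahrenheit { get; private set; }   // assigned once, in the constructor
    public DateTime TakenAt { get; private set; }

    public TemperatureReading(int fahrenheit, DateTime takenAt)
    {
        Fahrenheit = fahrenheit;
        TakenAt = takenAt;
    }

    // "Changing" a reading produces a new instance instead of mutating this one.
    public TemperatureReading WithFahrenheit(int newFahrenheit)
    {
        return new TemperatureReading(newFahrenheit, TakenAt);
    }
}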

And that’s where I stand now as I write object-oriented code. I am interested in diving more into functional languages, as I’ve only played with them here and there since my undergrad days. It isn’t so much that they’re the new hotness as it is that I find myself heading that way anyway. And if I’m going to do it, I might as well do it consciously and in directed fashion. If and when I do get to do more playing, you can definitely bet that I’ll post about it.


Introduction to Unit Testing Part 6: Test Doubles

In the last two posts in this series, I talked about how to test new code in your code base and then how to bring your legacy code under test. Toward the end of the last chapter in this series, I talked a bit about the concept of test doubles. The example I showed was one in which I used polymorphism to create a “dummy” class that I used in a test to circumvent otherwise untestable code. Here, I’ll dive into a lot more detail on the subject, starting out with a much simpler example than that and building to a more sophisticated way to handle the management of your test doubles.

First, a Bit of Theory

Before we get into test doubles, however, let’s stop and talk about what we’re actually doing, including theory about unit tests. So far, I’ve shown a lot of examples of unit tests and talked about what they look like and how they work (for instance, here in post two where I talk about Arrange, Act, Assert). But what I haven’t addressed, specifically, is how the test code should interact with the production code. So let’s talk about that a bit now.

By far the most common case when unit testing is that you instantiate a class under test in the “arrange” part of your unit test, and then you do whatever additional setup is necessary before calling some method on that class. Then you assert something that should have happened as a result of that method call. Let’s return to the example of prime finder from earlier and look at a simple test:

[TestMethod]
public void Returns_False_For_One()
{
    var primeFinder = new PrimeFinder(); //Arrange

    bool result = primeFinder.IsPrime(1); //Act

    Assert.IsFalse(result); //Assert
}

This should be reviewed from the perspective of “arrange, act, assert,” but let’s look specifically at the “act” line. Here is the real crux of the test; we’re writing tests about the IsPrime method and this is where the action happens. In this line of code, we give the method an input and record its output, so it’s the perfect microcosm for what I’m going to discuss about a class under test: its interactions with other objects. You see, unit testing isn’t about executing your code — you can do that with integration tests, console apps, or even just by running the application. Unit testing, at its core, is about isolating your classes and running experiments on them, as if you were a scientist in a lab. And this means controlling all of the inputs to your class — stimulus, if you will — so that you can observe what it puts out.

Controlling the inputs to the PrimeFinder class is simple because, take my word for it, there are no invocations of global/static state (which will become an important theme as we proceed). You can see by looking at the unit test that the only input to the class under test (CUT) is the integer 1. This means that the only input/stimulus that we supply to the class is a simple integer, making it quite easy to make assertions about its behavior. Generally speaking, the simpler the inputs to a class, the easier that class is to test.

There are Inputs and There are Inputs

Omitting certain edge cases I can think of (and probably some that I’m not thinking of), let’s consider a handful of relatively straightforward ways that a class might get ahold of input information. There is what I did above — passing it into a method. Another common way to give information to a class is to use constructor parameters or setter methods/properties. I’ll refer to these as “passive collaboration” from the perspective of the CUT, since it’s simply being given the things that it needs. There is also what I’ll call “semi-passive collaboration,” which is when you pass a dependency to the CUT and the CUT interacts in great detail with that dependency, mutating its state and querying it. An example of this would be “Car theCar = new Car(new Engine())”, in which performing operations on Car related to starting and driving result in rather elaborate modifications to the state of Engine. It’s still passive in the sense that you’re handing the Engine class to the car, but it’s not as passive as simply handing it an integer. In general, passive input is input that the scope instantiating the CUT controls — constructor parameters, method parameters, setters, and even things returned from methods of objects passed to the CUT (such as the Car class calling _engine.GetTemperature() in the example in this paragraph).

In contrast, there is also “active collaboration,” which is when the CUT takes responsibility for getting its own input. This is input that you cannot control when instantiating the class. An example of this is a call to some singleton or public static method in the CUT. The only way that you can reassume control is by not calling the method in which it occurs. If static/singleton calls occur in the constructor, you simply cannot test or even instantiate this class without it doing whatever the static code entails. If it retrieves values from static state, you have no control over those values (short of mocking up the application’s global state).

A second form of active collaboration is the “new” operator. This is very similar to static state in that when you create the CUT, you have no control over this kind of input to the CUT. Imagine if Car new-ed up its own Engine and queried it for temperature. There would be absolutely no way that you could have any effect on this operation in the Car class short of not instantiating it. Like static calls, object instantiation renders your CUTs a non-negotiable, “take it or leave it” proposition. You can have them with all of their instantiated objects and global state or you can write your own, buddy.
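
To make the distinction concrete, here is a hypothetical side-by-side. The class names are invented, it assumes an Engine exposing a TemperatureInFahrenheit property (much like the one shown a bit later in this post), and the singleton in the comment is likewise made up.

// Passive collaboration: whoever creates the car controls every input to it.
public class PassiveCar
{
    public int EngineTemperature { get; private set; }

    public PassiveCar(Engine engine)    // the dependency is handed in
    {
        EngineTemperature = engine.TemperatureInFahrenheit;
    }
}

// Active collaboration: the CUT grabs its own input, and you cannot interpose.
public class ActiveCar
{
    public int EngineTemperature { get; private set; }

    public ActiveCar()
    {
        var engine = new Engine();      // "new" here means no way to substitute
        EngineTemperature = engine.TemperatureInFahrenheit;
        // Just as bad would be something like:
        // EngineTemperature = EngineMonitor.Instance.CurrentTemperature;
    }
}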

Not all inputs to a class are created equal. There are a CUT’s passive inputs, in which the CUT cedes control to you. And then there are the CUT’s active inputs that it controls and on which it does not allow you to interpose in any way. As it turns out, it is substantially easier to test CUTs with exclusively passive collaboration/input and difficult or even impossible to test CUTs with active collaboration. This is simply because you cannot isolate actively collaborating CUTs.

Literals: Too Simple to Need Test Doubles

There’s still a little bit of work to do before we discuss test doubles in earnest. First, we have to talk about inputs that are too simple to require stand-ins: literals. The PrimeFinder test above is the perfect example of this. It’s performing a mathematical operation using an integer input, so what we’re interested in testing is known input-output pairs in a functional sense. As such, we just need to know what to pass in, to pass that value in, and then to assert that we get the expected return value.

In a strict sense, we could refer to this as a form of test double. After all, we’re doing a non-production exercise with the API, so the value we’re passing in is fake, in a sense. But that’s a little formal for my taste. It’s easier just to think in terms of literals almost always being too simple to require any sort of substitution of behavior.

An interesting exception to this is the null literal (of null type) or the default value of a non-nullable type. In many cases, you may actually want to be testing this as an input since null and 0 tend to be particularly interesting inputs and the source of corner cases. However, in some cases, you may be supplying what is considered the simplest form of test double: the dummy value. A dummy value is something you pass into a function to say, “I don’t care what this is and I’m just passing in something to make the compiler happy.” An example of where you might do this is passing null to a constructor of an object instance when you just want to make assertions as to what some of its property values initialize to.
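
As a purely hypothetical example of a dummy, imagine an OrderProcessor whose constructor takes a logger that it never touches during construction; a test that only cares about a default property value can just pass null to satisfy the compiler.

[TestMethod]
public void Name_Defaults_To_Empty_String()
{
    // The logger is irrelevant to this assertion, so a null dummy keeps the compiler happy.
    var processor = new OrderProcessor(null);

    Assert.AreEqual(string.Empty, processor.Name);
}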

Simple/Value Objects and Passing in Friendlies

Next up for consideration is the concept of a “test stub,” or what I’ll refer to in the general sense as a “friendly.”


Take a look at this code:

public class Car
{
    public int EngineTemperature { get; private set; }

    public Car(Engine engine)
    {
        EngineTemperature = engine.TemperatureInFahrenheit;
    }
}
public class Engine
{
    public int TemperatureInFahrenheit { get; set; }
}

Here is an incredibly simple implementation of the Car-Engine pair I described earlier. Car is passed an Engine and it queries that Engine for a local value that it exposes. Let’s say that I now want to test that behavior. I want to test that Car’s EngineTemperature property is equal to the Engine’s temperature in Fahrenheit. What do you think is a good test to write? Something like this, maybe —

[TestMethod]
public void EngineTemperature_Initializes_Value_Returned_By_Engine()
{
    const int engineTemperatureFromEngine = 200;
    Engine engine = new Engine() { TemperatureInFahrenheit = engineTemperatureFromEngine };
    var car = new Car(engine);

    Assert.AreEqual(engineTemperatureFromEngine, car.EngineTemperature);
}

Here, we’re setting up the Engine instance in such a way that we control what it provides to Car when Car uses it. We know by inspecting the code for Car that Car is going to ask Engine for its TemperatureInFahrenheit value, so we set that value to a known commodity, allowing us to compare in the Assert. To put it another way, we’re supplying input indirectly to Car by setting up Engine and telling Engine what to give to Car. It’s important to note that this is only possible because Car accepts Engine as an argument. If Car instantiated Engine in its constructor, it would not be possible to isolate Car because any test of Car’s initial value would necessarily also be a test of Engine, making the test an integration test rather than a unit test.

Creating Bonafide Mocks

That’s all well and good, but what if the Engine class were more complicated or just written differently? What if the way to get the temperature was to call a method and that method went and talked to a file or a database or something? Think of how badly the testing for this is going to go:

public class Car
{
    public int EngineTemperature { get; private set; }

    public Car(Engine engine)
    {
        EngineTemperature = engine.TemperatureInFahrenheit;
    }
}
public class Engine
{
    public int TemperatureInFahrenheit
    {
        get
        {
            var stream = new StreamReader(@"C:\whatever.txt");
            return int.Parse(stream.ReadLine());
        }
    }
}

Now, when we instantiate a Car and query its engine temperature property, suddenly file contents are being read into memory and, as I’ve already covered in this series, File I/O is a definite no-no in a unit test. So I suppose we’re hosed. As soon as Car tries to read Engine’s temperature, we’re going to explode — or we’re going to succeed, which is even worse because now you’ll have a unit test suite that depends on the machine it’s running on having the file C:\whatever.txt on it and containing an integer as its first line.

But what if we got creative the way we did at the end of the last episode of this series? Let’s make the TemperatureInFahrenheit property virtual and then declare the following class:

public class FakeEngine : Engine
{
    private int _temperature;

    public override int TemperatureInFahrenheit
    {
        get { return _temperature; }
    }

    public FakeEngine(int temperature)
    {
        _temperature = temperature;
    }
}
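
For this to compile, the TemperatureInFahrenheit property on Engine itself has to be marked virtual, as mentioned above; roughly like this, with the file I/O unchanged:

public class Engine
{
    public virtual int TemperatureInFahrenheit
    {
        get
        {
            var stream = new StreamReader(@"C:\whatever.txt");
            return int.Parse(stream.ReadLine());
        }
    }
}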

The FakeEngine class is test-friendly because it doesn’t contain any file I/O at all and it inherits from Engine, overriding the offending property. Now we can write the following unit test:

[TestMethod]
public void EngineTemperature_Initializes_Value_Returned_By_Engine()
{
    const int engineTemperatureFromEngine = 200;
    Engine engine = new FakeEngine(engineTemperatureFromEngine);
    var car = new Car(engine);

    Assert.AreEqual(engineTemperatureFromEngine, car.EngineTemperature);
}

If this seems a little weird to you, remember that our goal here is to test the Car class and not the engine class. All that the Car class knows about Engine is that it wants its TemperatureInFahrenheit property. It doesn’t (and shouldn’t) care how or where this comes from internally to Engine — file I/O, constructor parameter, secret ink, whatever. And when testing the Car class, you certainly don’t care. Another way to think of this is that you’re saying, “assuming that Engine tells Car that the engine temperature is 200, we want to assert that Car’s EngineTemperature property is 200.” In this fashion, we have isolated the Car class and are testing only its functionality.

This kind of test double and testing technique is known as a Fake. We’re creating a fake engine to stand-in for the real one. It’s not simple enough to be a dummy or a stub, since it’s a real, bona-fide different class instead of a doctored version of an existing one. I realize that the terminology for the different kinds of test doubles can be a little confusing, so here’s a helpful taxonomy of them.

Mocking Frameworks

The last step in the world of test doubles is to get to actual mock objects. If you stop and ponder the fake approach from the last section a bit, a problem might occur to you. The problem has to do with long-term maintenance of code. I remember, many moons ago when I discovered the power of polymorphism for creating fake objects, that I thought it was the greatest thing under the sun. Obviously there was at least one fake per test class with a dependency, and sometimes there were multiple dependencies. And I didn’t always stop there — I might define three or four different variants of the fake, each having a method that behaved differently for the test in question. In one fake, TemperatureInFahrenheit would return a passed in value, but in another, it would throw an exception. Oh, there were so many fakes — I was swimming in fakes for classes and fakes for interfaces.

And they were awesome… until I added a method to the interface they implemented or changed behavior in the class they inherited. And then, oh, the pain. I would have to go and change dozens of classes. And then there was also the fact that all of this faking took up a whole lot of space. My test classes were littered with nested classes of fakes. It was fun at first, but the maintenance became a drudgery. But don’t worry, because my gift to you is to spare you that pain.

What if I told you that you could implement interfaces and inherit from classes anonymously, without actually creating source code that did this? I’d be oversimplifying a bit, but once you got past that, you’d probably be pretty excited. I say this because, as you start to grasp the concept of mocking frameworks, this kind of “dynamic interface implementation/inheritance” is the easiest way to reason about what it’s doing, from a practical perspective, without getting bogged down in more complicated concepts like reflection and direct work with byte-code and other bits of black magic.

As an example of this in action, take a look at how I go about testing the Car and Engine with the difficult dependency. The first thing that I do is delete the Fake class because there’s no need for it. The next thing I do is write a unit test, using a framework called JustMock by Telerik (this is currently my preferred mocking framework for C#).

[TestMethod]
public void EngineTemperature_Initializes_Value_Returned_By_Engine()
{
    const int engineTemperatureFromEngine = 200;
            
    var engine = Mock.Create<Engine>();
    engine.Arrange(e => e.TemperatureInFahrenheit).Returns(engineTemperatureFromEngine);

    var car = new Car(engine);

    Assert.AreEqual(engineTemperatureFromEngine, car.EngineTemperature);
}

Notice that instead of instantiating an engine, I now invoke a static method on a class called Mock that takes care of creating my dynamic inheritor for me. Mock.Create<Engine>() is what creates the equivalent of FakeEngine. On the next line, I invoke an (extension) method called Arrange that creates an implementation of the property for me as well. What I’m saying, in plain English, is “take this mock engine and arrange it such that the TemperatureInFahrenheit property returns 200.” I’ve done all of this in one line of code instead of adding an entire nested class. And, best of all, I don’t need to change this mock if I decide to change some behavior in the base class or add a new method.

Truly, once you get used to the concept of mocking, you’ll never go back. It will become your best friend for the purposes of mocking out dependencies of any real complexity. But temper your enthusiasm just a bit. It isn’t a good idea to use mocking frameworks for simple dependencies like the PrimeFinder example. The lite version of JustMock that I’ve used and many others won’t even allow it, and even if they did, that’s way too much ceremony — just pass in real objects and literals, if you can reasonably.

The idea of injecting dependencies into classes (what I’ve called “passive” and “semi-passive” collaboration) is critical to mocking and unit testing. All basic mocking frameworks operate on the premise that you’re using this style of collaboration and that your classes are candidates for polymorphism (either interfaces or overridable classes). You can’t mock things like primitives and you can’t mock sealed/final classes.

There are products out there called isolation frameworks that will grant you the ability to mock pretty much everything — primitives, sealed/final classes, statics/singletons, and even the new operator. These are powerful (and often long-running, resource-intensive) tools that have their place, but that place is, in my opinion, at the edges of your code base. You can use this to mock File.Open() or new SqlConnection() or some GUI component to get the code at the edge of your application under test.

But using it to test your own application logic is a path that’s fraught with danger. It’s sort of like fixing a broken leg with morphine. Passively collaborating CUTs have seams in them that allow easy configuration of behavior changes and a clear delineation of responsibilities. Actively collaborating CUTs lack these things and are thus much more brittle and difficult to separate and modify. The fact that you can come up with a scheme allowing you to test the latter doesn’t eliminate these problems — it just potentially masks them. I will say that isolating your coupled, actively collaborating code and testing it is better than not testing it, but neither one is nearly as good as factoring toward passive collaboration.


Seeing the Value in Absolutes

The other day, I told a developer on my team that I wouldn’t write methods with more than three parameters. I said this in a context where many people would say, “don’t write code with more than three parameters in a method,” in that I am the project architect and coding decisions are mine to make. However, I feel that the way you phrase things has a powerful impact on people, and I believe code reviews that feature orders to change items in the code are creativity-killing and soul-sucking. So, as I’ve explained to people on any number of occasions, my feedback consists neither of statements like “that’s wrong” nor statements like “take that out.” I specifically and always say, “that’s not what I would do.” I’ve found that people listen to this the overwhelming majority of the time and, when they don’t, they often have a good reason I hadn’t considered. No barking of orders necessary.

But back to what I said a few days ago. I basically stated the opinion that methods should never have more than three parameters. And right after I had stated this, I was reminded of the way I’ve seen countless conversations go in person, on help sites like Stack Overflow, and in blog comments. Does this look familiar?

John: You should never have more than three parameters in a method call.
Jane: Blanket statements like that tend to be problematic. Having more than three method parameters is really, technically, more of a “code smell” than necessarily a problem. It’s often a problem, but it might not be.
John: I think it’s necessarily a problem. I can’t think of a situation where that’s desirable.
Jane: How about when someone is holding a gun to your head and telling you to write a method that takes four parameters.
John: (Rolls his eyes)
Jane: Look, there’s probably a better example. All I’m saying is you should never use absolutes, because you never know.
John: “You should never use absolutes” is totally an absolute! You’re a hypocrite!
Both: (Devolves into pointless bickering)

A lot of times during debates, particularly when you have smart and/or exacting participants, the conversation is derailed by a sort of “gotcha” game of one-upsmanship. It’s as though they are at an impasse as to the crux of the matter, so the two begin sniping at one another about tangentially-related or totally non-related minutiae until someone makes a slip, and this somehow settles the debate. Of course, it’s an exercise in futility because both sides think their opponent is the first to slip up. Jane thinks she’s won this argument because John used an absolute qualifier and she pointed out some (incredibly preposterous and contrived) counter-example, and John thinks he won with his ad hominem right before the end about Jane’s hypocrisy.

In this debate, they both lose, in my opinion. I agree with John’s premise but not his justification, and the difference matters. And Jane’s semantic nitpicking doesn’t get us to the right justification (counter or pro), either. Prescriptive matters of canon when it comes to programming are troubling for the same reason that absolutes are often troubling in our day-to-day lives. Even the most clear-cut seeming things, like “it’s morally reprehensible to kill people,” wind up having many loopholes in their application (“it’s morally reprehensible to kill people — unless, of course, it’s war, self-defense, certain kinds of revenge for really bad things, accidental, state-sanctioned execution, etc., etc.”). So for non-important stuff like the number of parameters to a method, aren’t we kind of hosed and thus stuck in a relativistic quagmire?

I’d argue not, and furthermore, I’d argue that the fact of the rules is more important than the rules themselves. It’s more important to have a restriction like “don’t have more than three parameters to a method” than it is to have that specific restriction. If it were “don’t have more than two method parameters” or “don’t have more than four method parameters,” we’d still be sitting pretty. Why, you ask? Well, a man named Barry Schwartz coined the phrase “the paradox of choice: why more is less.” Restrictions limit choice, which is merciful.

Developers are smart, and they want to solve problems — often hard problems. But, really, they want to solve directed problems efficiently. To understand what I mean, ask yourself which of these propositions is more appealing to you: (1) make a website that does anything in any programming language with any framework or (2) use F# to parse a large text file and have the running process use no more than 1 gig of memory. The first proposition makes your head hurt while the second gets your mental juices flowing as you decide whether to try to solve the problem algorithmically or to cheat and write interim results to disk.

Well, the same thing happens with a lot of the “best practice” rules that surround us in software development. Don’t make your classes too big. Don’t make your methods too big. Don’t have too many parameters. Don’t repeat your code. While they can seem like (and be, if you don’t understand the purpose behind them) cargo-cult mandates if you simply focus on the matter of relativism vs absolutes, they’re really about removing (generally bad) options so that you can be creative within the context remaining, as well as productive and happy. Developers who practice DRY and who write small classes with small methods and small method signatures don’t have to spend time thinking “how many parameters should this method have” or “is this class getting too long?” Maybe this sounds restrictive or draconian to you, but think of how many options have been removed from you by others: “does the code have to compile,” or “is the code allowed to wipe out our production data?” If you’re writing code for any sort of business purposes, the number of things you can’t do dwarfs the number of things you can.
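
As a purely hypothetical illustration of what living with a “no more than three parameters” restriction looks like day to day, the invented methods below trade a sprawling signature for a small parameter object; none of these names come from a real code base.

// Before: callers have to line up five positional arguments correctly.
public void ScheduleDelivery(int orderId, string street, string city,
                             string postalCode, DateTime requestedDate)
{
    // ...
}

// After: the related values travel together and the signature stays within the limit.
public class DeliveryRequest
{
    public int OrderId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }
    public DateTime RequestedDate { get; set; }
}

public void ScheduleDelivery(DeliveryRequest request)
{
    // ...
}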

Of course, just having rules for the sake of rules is the epitome of dumb cargo cult activity. The restrictions have to be ones that contribute overall to a better code base. And while there may be some debate about this, I doubt that anyone would really argue with statements like “favor small methods over large ones” and “favor simple signatures over complex ones.” Architects (or self-organizing teams) need to identify general goals like these and turn them into liberating restrictions that remove paralysis by analysis while keeping the code base clean. I’ve been of the opinion for a while now that one of the core goals of an architect should be providing a framework that prevents ‘wrong’ decisions so that the developers can focus on being creative and solving problems rather than avoiding pitfalls. I often see this described as “making sure people fall into the pit of success.”


Going back to the “maximum of three parameters rule,” it’s important to realize that the question isn’t “is that right 99% of the time or 100% of the time?” While Jane and John argue over that one percent, developers on their team are establishing patterns and designs predicated upon methods with 20 parameters. Who cares if there’s some API somewhere that really, truly, honestly makes it better to use four parameters in that one specific case? I mean, great — you proved that on a long enough timeline, weird aberrations happen. But you’re missing out on the productivity-harnessing power of imposing good restrictions. The developers in the group might agree, or they might be skeptical. But if they care enough to be skeptical, it probably means that they care about their craft and enjoy a challenge. So when you present it to them as a challenge (in the same way speeding up runtime or reducing memory footprint is a challenge), they’ll probably warm to it.


Designs Don’t Emerge

I read a blog post recently from Gene Hughson that made me feel a little like ranting. It wasn’t anything he said — I really like his post. It reminded me of some discussion that had followed in my post about trying too hard to please with your code. Gene had taken a nuanced stand against the canned wisdom of “YAGNI.” I vowed in the comments to make a post about YAGNI as an aphorism, and that’s still in the works, but here is something tangentially related. Now, as then, I agree with Gene that you ignore situational nuance at your peril.

But let’s talk some seriously divisive politics and philosophy first. I’m talking about the idea of creationism/intelligent design versus evolutionary theory and natural selection. The former conceives of the life in our world as the deliberate work of an intelligent being. The latter conceives of it as an ongoing process of change governed by chance and coincidence. In the context of this debate, there is either some intelligent force guiding things or there isn’t, and the debate is often framed as one of omnipotent, centralized planning versus incremental, steady improvement via dumb process and chance. The reason I bring this up isn’t to weigh in on this or turn the blog into a political soapbox. Rather, I want to point out a dichotomy that’s ingrained in our collective conversation in the USA and perhaps beyond that (though I think that the creationist side of the debate is almost exclusively an American one). There is either some kind of central master planner, or there is simply the vagaries of chance.

I think this idea works its way into a lot of discussions that talk about “emergent design” and “big up front design,” which in the same way puts forth a pretty serious false dichotomy. This is most likely due, in no small part, to the key words “design,” “emergent” and especially “evolution” — words that frame the coding discussion. It turns into a blueprint for silly strawman arguments: “Big design” proponents scoff and say things like, “oh yeah, your architecture will just figure itself out magically” while misguided practitioners of agile methodologies (perhaps “no design” proponents) accuse their opponents of living in a coding universe lacking free will — one in which every decision, however small, must be pre-made.

But I think the word “emergent,” rather than “evolution” or “design,” is the most insidious in terms of skewing the discussion. It’s insidious because detractors are going to think that agile shops view design as something that just kind of winks into existence like some kind of friendly guardian angel, and that’s the wrong idea about agile development. But it’s also insidious because of how its practitioners view it: “Don’t worry, a good design will emerge from this work-in-progress at some point because we’re SOLID and DRY and we practice YAGNI.”

Now, I’m not going for a “both extremes are wrong and the middle is the way to go” kind of argument (absent any other reasoning, this is called middle ground fallacy). The word “emergent” itself is all wrong. Good design doesn’t ’emerge’ like a welcome ray of sunshine on a cloudy day. It comes coughing, sputtering, screaming and grunting from the mud, like a drowning man being pulled from quicksand, and the effort of dragging it laboriously out leaves you exhausted.


The big-design-up-front (BDUF) types are wrong because of the essential fallacy that all contingencies can be accounted for. It works out alright for God in the evolution-creation debate context because of the whole omniscient thing. But, unfortunately, it turns out that omniscience and divinity are not core competencies for most software architects. The no-design-up-front (NDUF) people get it wrong because they fail to understand how messy and laborious an activity design really is. In a way, they both get it wrong for the same basic reason. To continue with the Judeo-Christian theme of this post, both of these types fail to understand that software projects are born with original sin.

They don’t start out beautifully and fall from grace, as the BDUF folks would have you believe, and they don’t start out beautifully and just continue that way, emerging as needed, as the NDUF folks would have you believe. They start out badly (after all, “non-functional” and “non-existent” aren’t words which describe great software) and have to be wrangled to acceptability through careful, intelligent and practiced maintenance. Good design is hard. But continuously knowing the next, feasible, incremental step toward a better design at absolutely any point in a piece of software’s life — that’s really hard. That takes deliberate practice, debate, foresight, adaptability, diligence, and a lot of reading and research. It doesn’t just kinda ’emerge.’

If you’re waiting on me to come to a conclusion where I give you a score from one through ten on the NDUF to BDUF scale (and it’s obviously five, right?), you’re going to be disappointed with this post. How much design should you do up front? Dude, I have no idea. Are you building a lunar rover? Probably a lot, then, because the Sea of Tranquility is a pretty unresponsive product owner. Are you cobbling together a minimum viable product and your hardware and business requirements may pivot at any moment? Well, probably not much. I can’t settle your design decisions and timing for you with acronyms or aphorisms. But what I can tell you is that to be a successful architect, you need to have a powerful grasp on how to take any design and make it slightly better this week, slightly better than that next week, and so on, ad infinitum. You have to do all of that while not catastrophically breaking things, keeping developers productive, and keeping stakeholders happy. And you don’t do that “up-front” or “ex post facto” — you do it always.