DaedTech

Stories about Software

What Your Job Descriptions Are Saying to Developers About Your Company

Every week I get a fairly steady stream of emails from recruiters, announcing an opportunity for some software development related job: engineer, programmer, senior developer, architect, and so on and so forth. These emails come in all shapes and sizes and, probably owing to my eclectic, polyglot background, they cover a range of different technologies and programming languages. One relatively common characteristic is that below the actual text of the email is pasted a job description. More often than not, I read this (or at least start to) out of general curiosity as to the state of the market. Some of them are reasonable and straightforward, making me think that the company in question would be a good, or at least decent, place to work. But others contain things that I’d call red flags.

Extrapolating beyond what gets sent to me via secondhand emails and generalizing to job boards and sites and other places where companies solicit talent (I can’t speak directly to these since I haven’t looked at them in quite a while), I think companies would be well served to consider what their job descriptions say to developers about them. It may not always be what’s intended. Consider some of the following characteristics of a job description and what they imply about the job.

Alphabet Soup of Technologies

“We favor and value superficial exposure and grandstanding over depth of knowledge and thoughtfulness.”

If your job description lists ten, twenty, even thirty (they get pretty ridiculous) different acronyms covering a smattering of programming languages, markup types, development methodologies, design patterns, and things that may not even actually exist, you’re sending the message that you value prospects who engage in a kind of bedpost-notching gamification when it comes to programming. If you stop and think about it, how would a developer acquire experience with all of those different technologies? Would you prefer he cram them all into a single project? Would you prefer that he pick or lobby for two to three new ones for each project he does and do a lot of projects? Would you prefer he never work on the same thing twice?

If the answer to any of these questions is “yes,” then you’re actively putting on the front porch light for people more concerned with playing with new toys than getting work done (like Fashionista). If the answer is “oh, well that’s not what we’re saying” then you’re asking the impossible. And you’re apparently targeting the sort of people who consider “I think I read a blog post about that once” a qualification for adding it to their resume. That leads to the hire of General Custer. So with this kind of ad, you’re targeting a group dynamic where people value breadth over depth of knowledge and are willing to exaggerate or even lie about either.

Emphasis on Years Spent on Particular Technologies

“We work harder, not smarter.”

Do you need someone who’s been banging out Java code for at least seven years? It has to be seven? Not six? If I’m at six and a half, should I follow the “always round up” method or the “round to even” method? I need to know so that I’ll know whether or not I’m qualified to program for you. I can round up? Great! Oh, wait. It says I need four years of XML (whatever that means) and I only have three. I feel so inadequate. Unless… can I count non-contiguous years? I’m pretty sure that I “did XML” for a while about eight years back. Yessiree–I came into the office, day after day, and did XML. In fact, I think it was the same file, even. I did nothing but create the same XML file by hand every day for months, so that’s got to count for a lot, right?

Here’s the thing. You don’t actually want people who have “done Java” or “done XML” or “done whatever” for X years. What you actually want is people who are good at Java, XML, or whatever, and you mistakenly think that a good measure (or at least indicator) of that is how many years they’ve spent doing it. The problem is that this assumes that people acquire skill at the same rate, which is demonstrably false. It also assumes that they are spending all of that time improving, which, I would argue, becomes increasingly improbable with each additional year of experience. Certainly there are people who simply like working with a technology and wouldn’t have it any other way, but if you find someone who has been banging out Java code day in and day out for a decade, it might occur to you to ask them things like, “why aren’t you a lead or manager or architect or something?” The answer might be that they love having their hands on the keyboard, but it also might be (not said, of course) “I’m just not good enough at it to achieve anything besides not being fired.”

Are those the people that you want to hire? Probably not, but are you that confident that you can cull them out in an interview when the stakes are pretty high? As an awesome saying goes, you run the risk of hiring not a Java developer with ten years experience but a Java developer with the same one year of experience ten times. If you want talent and achievement, what you should actually want is people who are good at programming and design in general. It really doesn’t even matter what language(s) the developer knows if she’s good, and it certainly doesn’t matter how many years she’s been cranking out code in it. But if you start focusing on years/tech metrics, you’re going to get candidates optimized for that–you’re going to have a department dominated by worker bees who’ve been churning out innovation-free code for decades.

Applicants will Take a Forty-Five Question Test

“However extensive your previous experience, you’ll always be junior to us.”

What you ask candidates to do resonates with them beyond the interview process and into their employment; it sets the tone for what their tenure will be like. And what do you think it says to them when you give them something reminiscent of an SAT subject test? It says, “alright young student, if you can pass this standardized aptitude test, you are deemed worthy of learning at the feet of our esteemed senior developers.” Standardized tests and the hiring process are both necessarily reductionist activities, since they involve culling a very large percentage of potential applicants down to a much smaller percentage, but there the similarities end. Standardized tests are administered by nameless teachers, graded by faceless graders, and judged by admissions officers the students will never know.

Standardized job application quizzes are administered by the internet or a hiring authority, graded by the same, and judged by the same. If the applicant doesn’t score an 80% or whatever is necessary to move on, then no big deal. But if they do wind up hiring on, it will be with people to whom they’ve submitted in a student-professor/grader/administrator/gatekeeper relationship. That person can always point out, at any time during the new hire’s tenure, that the new hire didn’t know when filling out a scantron that the C# compiler will not choke on “virtual public void MethodName.” I recognize that this could be claimed to some degree with any method of candidate evaluation, but nothing says, “you’re no different than an entry-level kid” like simulating the kinds of standardized testing that most people haven’t done in years or decades.

%Phases%SDLC%

“We’re embarrassingly waterfall and we don’t know how to fix it.”

Companies that are proud of waterfall development methodologies come right out and tout it as their way of doing things (or they call it something like “Rational Unified Process”). That’s not going to be ideal for a lot of top talent, and it might even be a deal-breaker for some of them, but at least it seems self-aware. If you swap this frank assessment for the rather meaningless term “SDLC” (short for “Software Development Lifecycle”), it says that you don’t actually want people to know what your software development methodology is. Be honest–if this is on your job description, and a candidate asks whether or not your process is agile as part of the “questions from the candidate” section, how are you going to respond? My money is on hemming, hawing, and making a comment about how you’re “in the process of getting a little more agile but we kind of *ahem*, that is, ah, we’re kind of still somewhat, er, mostly waterfall.” Why do I say that? Well, your “SDLC” says, “kinda-sorta-maybe-something-other-than-waterfall,” but your “phases” says, “really, really waterfall.”

The reason that I say this is more of a bad sign for candidates than “waterfall and proud” is that “waterfall and proud” indicates that there is purpose and belief in that process, making it somewhat more likely that things are stable and sane. In smaller shops or places that do small fixed bid types of projects, for instance, you can probably have some success with a waterfall approach. Candidates recognize that (or else they’re simply unaware that there’s an alternative to the waterfall “procrastinate-rush” way of doing things). But if you hide your waterfall behind vague acronyms, it tells candidates that you’re aware that your process is far from ideal but you haven’t done anything about it. From this, candidates can only infer that you’re lazy/sloppy when it comes to process or else that you’re not competent when it comes to process. Neither one exactly rolls out the welcome mat for top-notch talent.

If you want to describe your development methodology and it isn’t something simple like “Scrum” or “XP,” I’d suggest asking for experience with exactly what you want: probably design, implementation, and stabilizing/sustaining. Whether your approach has “phases” or “sprints” or whatever, it’s going to have those activities, so you may as well list them by name. Rightly or wrongly, SDLC has drifted from being a way to describe software in different states of development and has come instead to describe a process that’s hard to pin down and even harder to follow. You don’t want to broadcast that, true or not.

Be Prepared to Answer Some Real Brain Teasers!

“We think we’re MENSA” or “We think we’re Google” (which thinks it’s MENSA)

Applicants get it. You want to hire clever people. You want to hire people that think outside the box. Literally. And not only that, but you want to demonstrate that you think outside the box by asking non-programming or heavily algorithm-related questions when gauging whether candidates think outside of the box. It would be so in the box to ask programmers questions about programming or to have them program. You’re looking for some kind of box-thinking-outside synergy where everyone has their secret puzzle decoder ring and the cliches flow like box wine.

To you, it looks selective. To most people, it just looks gratuitously self-congratulatory. If you actually are a Google or another respected tech titan and you have the rep to back it up, applicants will prostrate themselves and tolerate this silliness for a chance to punch their ticket with your name on their resume. But if you’re Acme Inc. and you specialize in making shoelaces, you’re threatening to drive away serious talent and be left with goofballs, word-game enthusiasts, and trivia experts more interested in standing behind metaphorical velvet ropes as VIPs than doing high quality work. Unless your business model is this, you’d be better served to hire people who can write code than people who know how to measure exactly four gallons using three and five gallon jugs.

So Then What To Say?

If it seems as though I’m impossible to please, I’m really not. In fact, just having these things in your job description in no way means that what I’ve said is true about your company–it just means that this is the vibe you’re giving off. But if you want to give off a better vibe, I’d suggest a simple description of what the job’s responsibilities are and what would be necessary in order to do that job successfully. There’s no number of years or soup of technologies that does that–what does it is something like this: “we’re looking for Java web developers who are familiar with IoC containers and can meet deadlines with minimal supervision.” No gimmicks, no window-dressing on potentially unfavorable realities, and no weird, degrading exams.

So then how do you go about separating the wheat from the chaff during the interview process? Without exception, the places I’ve found that seem best at doing this are the ones that dole out actual programming assignments. It’s one of those things that is apparently so blindingly obvious that most don’t think of it; if you want to determine whether someone can complete programming assignments, give them one to complete. If you try to interview developers the way you would project managers, middle management, salespeople, etc., it’s kind of a crapshoot. A developer that’s very good at memorizing trivia about the compiler or reciting the benefits of object-oriented programming may not actually be able to program. A standard, two-hour interview with the trivia lightning round, followed by classic tradeoff-type questions, will not uncover this nearly so well as asking the person to spend five to twenty hours coding up some sample assignment.

If that’s not practical for logistical reasons or because of hiring particulars at your institution, you might at least ask for code samples to evaluate. Or perhaps you could spend a few hours pair programming with the candidate or doing a code review of open source code. Whatever you do, though, the important thing to remember is that you want to send the message to developers that you’re looking for people who are competent and demonstrate as much when asked to do so. By all means, evaluate and select based on other important criteria–personability, communication skills, organization, etc. But what you’re looking for, at the core of it, are people who are going to be efficient and competent developers. So when you roll out the welcome mat for them in the form of a job description, consider the first impression that you’re making. Ask yourself if you think it’s one that will intrigue those efficient and competent developers.

The Narrative of Mediocrity

I was defeated. Interested in getting off to a good start, I had overachieved in the course by working hard and studying diligently to make a good impression. And yet, when the first essay was returned to the class, mine had a big, fat B staring back at me, smug with the kind of curves that are refreshingly absent in a nice, crisp A. I didn’t understand how this had happened, and the fact that none of the other students had received As either was cold comfort. I’d sought to impress, but the teacher had put me in my place.

I got an A in that class. I actually don’t remember which class it was any longer because it happened in a number of them. It happened in high school, college, and graduate school. I started off a B student on subjectively-graded assignments and ‘improved’ steadily for the duration of the course until I wound up with an A. Many of my peers followed the same trajectory. It was a nice story of growth and learning. It was the perfect narrative…for the teacher.

What could be better than a fresh-faced crop of students, talented but raw, eager to learn, being humbled and improving under the teacher’s tutelage? It’s the secondary education equivalent of a Norman Rockwell painting. It gives the students humility, confidence, and a work ethic, and it makes the teacher look and feel great. Everyone wins, so really, what’s the harm in the fiction? So what if it’s a bit of a fabrication? Who cares?

Well, I did. I was a relentless perfectionist as a student and this sort of evaluation drove me nuts. I sought out explanations for early B’s in classes where this happened and found no satisfaction in the explanations. I raged against the system and eventually cynically undercut it, going out of my way to perform the same caliber of work in the before and after pictures, doggedly determined to prove conspiracy. My hypothesis was confirmed by my experiments–my grades improved even as my work did not–and my triumphant proof of conspiracy was met with collective yawns and eye-rolls by anyone who actually paused long enough to listen to me.

I learned a lesson as a child and young adult about the way the academic world worked. Upon graduation from college, I was primed to learn that the business world worked that way too.

The Career Train

There are a lot of weird symmetries, quirks, and even paradoxes in the field of macroeconomics. It’s truly a strange beast. Consider, for instance, the concept of inflation, wherein everyone gets more money and money becomes worth less, but not necessarily in completely equal proportions. We’re used to thinking of money in a zero-sum kind of sense–if I give you ten dollars, then I am ten dollars poorer and you are ten dollars richer. But through the intricacies of lending and meta-transactions surrounding money, we can conceive of a scheme where we start with ten dollars and each wind up with six dollars some time later. And so it goes in life–as time goes by, we all have more money (at least in lending-based market economies). If things get out of whack and everyone doesn’t have more money as time goes by, you have stagnation (or deflation). If things get out of whack the other way, you wind up with runaway inflation and market instability. The system works (or at least works best) when everyone gets a little more at a measured, predictable, and homogeneous pace.

The same thing seems to happen throughout our careers. We all start in the business world as complete initiates, worth only our entry level paychecks, and we all trudge along throughout our careers, gradually acquiring better salaries, titles, accessories, and office locations. Like a nice but not-too-steep interest rate, people have an expectation of dependable, steady, slight gain throughout their career. Two promotions in your twenties is pretty reasonable. Managing a team by your mid thirties. A nice office and a VP or director title in your later forties, and perhaps a C-level executive position of some kind when you’re in your fifties to sixties. On average, anyway. Some real go-getters might show their prodigious talents by moving that timeline up by five years or so, while some laggards might move it back by the same amount, topping out at some impressive but non-executive title.

Okay, so I know what you’re thinking. You want to shout “Mark Zuckerberg!” at me. Or something along those lines–some example of a disruptive entrepreneur that proves there is a different, less deterministic path. Sure, there is. People who opt out of the standard corporate narrative do so at large risk and large possible reward. Doing so means that you might be Zuckerberg or that Instagram guy, but it means that you’re a lot more likely to be working in your garage on something that goes nowhere while your friends are putting in their time in their twenties, getting to the best cubicles, offices, and corner offices a few years before you do. By not getting on the train when all your friends do, you’re going to arrive later and behind them–unless you luck out and are teleported there by the magic teleportation fairy of success.

So forget the Zuckerbergs and the people who opt out in the negative sense and never get back in. Here in corporate land, the rest of us are on a train, and there’s not a lot of variance in arrival times on trains. If you get right to the front of the train, you may get there a few minutes early, but that’s all the wiggle-room you get. The upside to this mode of transportation is that trains are comfortable, dependable, and predictable. A lot of people prefer to travel this way, and the broad sharing of cost and resources make it worth doing. It’s a sustainable, measured pace.

Everyone Meets Expectations

They don’t stop two trains on the track so that people who are serious about going fast can sprint from one to the next. It may be good for a few, but it would enrage the many and throw the system out of whack. That applies to trains, and it applies to your performance reviews. The train runs on time, and the only question is whether you’re in the front (exceeds expectations) or the back (meets some expectations). If you’re perennially in front, you’ll get that C-level corner office at fifty; if you’re perennially in back, you’ll just be the sales manager at fifty.

Seem cynical? If so, ask yourself this: why are there no office prodigies? In school, there were those kids who skipped a grade or who took Algebra with the eighth graders while their fellow seventh graders were in Pre-Algebra. There were people who took AP classes, aced their SATs, and who achieved great, improbable things. What happens to those outliers in the corporate world, if they don’t drop out and go the Zuckerberg route? Why is there no one talented enough to rocket through the corporate ranks the way there was in school? Doesn’t that seem odd? Doesn’t it seem like, by sheer odds, there should be someone who matches Zuckerberg as a twenty-something wunderkind CEO by coming up through the corporate world rather than budging back in from entrepreneur-land? Maybe just one, like, ever?

I would think so. I would think that corporate prodigies would exist, if I didn’t know better–if I didn’t know that the mechanism of corporate advancement was a train, a system designed to quite efficiently funnel everyone toward the middle. You might exceed expectations or fail to meet them at any given performance review, but on a long enough timeline, you meet expectations because everyone meets expectations. It’s the most efficient way to create a universal and comfortable narrative for everyone. That narrative is that all of everyone’s work and achievement through life has built toward something. That the corner office is the product of forty years of loyalty, dedication, and cleverness. After forty years of meeting expectations, you, too, can finally arrive.

This isn’t some kind of crazy conspiracy theory. This is transparently enforced via HR matrices. All across the nation and even the world, there are corporate policies in place saying that level six employees can’t receive two promotions before level seven employees receive one. It wouldn’t be fair to pay Suzy more than Steve since Steve has three more years of industry experience. Organizations, via a never-ending collection of superficially unrelated policies, rules, regulations, and laws, take a marathon and put it on a single-file people-mover.

Wither the Performance Review

So if I had a parallel experience with a manufactured narrative in school and the corporate world, how to explain grade-skippers and AP-takers? Simple. In school, the narrative occurs for the benefit of the teacher on the micro (single quarter or semester) level. In the work world, it occurs for everyone’s benefit for the rest of your working life.

So why do organizations bother with the awkward performance review construct? Well, in part because it’s necessary to make justifications about issues like pay, position, and promotions. If people receive titular, “career-advancing” promotions every three to four years, a review is necessary in the first year to tell them that they need to “get better at business” or something. Then in the second year, they can hear that they’re making “good strides at business,” followed in the third year by a hearty congratulations for “being great at business,” and, “really earning that promotion to worker IV.” Like a scout earning a merit badge, this manufactured narrative will be valued by the ‘earner’ because it supplies purpose to the past three years, even if the person being reviewed didn’t “get better at business” (whatever that means). But the other purpose is providing the narrative for the reviewers. If a reviewer’s reports started out “bad at business” and ‘improved’ under his tutelage, his own review narrative goes a lot better, and so on, recursively, up the chain. What a wonderful world where everyone is helping everyone get better at a very measured pace, steadily, over the course of everyone’s career.

But just as I railed against this concept in school, so do I now. I’ve never received sub-standard reviews. In general annual review parlance, mine have typically been “exceeds expectations but…” where “but” is some reason that I’m ‘not quite ready’ for a promotion or more responsibility just yet. Inevitably, this magically fixes itself.

So what if we did as Michael O. Church suggests and simply eliminated performance reviews along these lines? Poof. Gone. I don’t know about you, but I might just find a “we’re not promoting you because that’s our policy” refreshingly honest as compared to a manufactured and non-actionably vague piece of ‘constructive’ criticism. (This is not to be confused with a piece of feedback like “your code should be more modular,” or, “you should deliver features more quickly,” both of which are specific, actionable, and perfectly reasonable critiques. But neither requires some kind of silly annual ceremony where I find out if I’m voted onto Promotion Island or if I’ll have to play again next year.) I certainly don’t have an MBA, and I’m not an expert in organizational structuring and management, but it just seems to me as though we can do better than a stifling policy of funneling everyone toward the middle and manufacturing nonexistent deficiencies so that we can respond by manufacturing empty victories. I can only speak for myself, but you can keep the guaranteed trappings of ascending the corporate ladder if you just let me write my own story in which my reach exceeds my grasp.

Guerilla Guide to Developer Interviews

Over the course of my career I’ve done quite a number of technical interviews, and a pretty decent cross-section of them have ended in job offers or at least invitations to move on to the next step. That said, I am no expert and I am certainly no career coach, but I have developed some habits that seem pretty valuable for me in terms of approaching the interview process. Another important caveat here is that these are not tips to snag yourself an offer, but tips to ensure that you wind up at a company that’s as good a fit as possible. Sometimes that means declining an offer or even not getting one because you realize as you’re interviewing that it won’t be a good fit. On any of these, your mileage may vary.

So in no particular order, here are some things that you might find helpful if you’re throwing yourself out there on the market.

Avoid the Firehose

Programming jobs are becoming more and more plentiful, and, in response to that demand, and contrary to all conventional logic about markets, the supply of programmers is falling. If you work as a programmer, the several emails a week you get from recruiters stand in not-so-mute testimony to that fact. If you decide that it’s time to start looking and throw your resume up on Dice, Monster, and CareerBuilder, your voicemail will fill up, your home answering machine will stop working, and your email provider will probably throttle you or start sending everything to SPAM. You will be absolutely buried in attempts to contact you. Some of them will be for intern software tester; some of them will be for inside sales rep; some of them will be for super business opportunities with Amway; some of them won’t even be in your native language.

Once you do filter out the ones (dozens) that are complete non-starters, you’ll be left with the companies that have those sites on some kind of RSS or other digital speed dial, meaning that they do a lot of hiring. Now, there are some decent reasons that companies may do a lot of hiring, but there are a lot of not-so-decent reasons, such as high turnover, reckless growth, a breadth-over-depth approach to initial selection, etc. To put it in more relatable terms, imagine if you posted a profile on some dating site and within seconds of you posting it, someone was really excited to meet you. It may be Providence, but it also may be a bit worrisome.

The long and short of my advice here is that you shouldn’t post your resume immediately to sites like these. Flex your networking muscle a bit, apply to some appealing local companies that you’d like to work at, contact a handful of recruiters that you trust, and see what percolates. You can always hit the big boards later if no fish are biting or you start blowing through your savings, but if you’re in a position to be selective, I’d favor depth over breadth, so to speak.

Don’t Be Fake

When it comes time to do the actual interview, don’t adopt some kind of persona that you think the interviewers want to see. Be yourself. You’re looking to see whether this is going to be a fit or not, and while it makes sense to put your best foot forward, don’t put someone else’s best foot forward. If you’re a quiet, introverted thinker, don’t do your best brogrammer imitation because there’s a ping pong table in the other room and the interviewers are all 20-something males. You’re probably going to fail to fit in anyway, and even if you don’t, the cultural gulf is going to continue to exist once you start.

And above all, remember that “I don’t know” is the correct answer for questions to which you don’t know the answer. Don’t lie or try to fake it. The most likely outcome is that you look absurd and tank the interview when you could have saved yourself a bit of dignity with a simple, “I’m not familiar with that.” But even if this ruse somehow works, what’s the long-play here? Do you celebrate the snow-job you just pulled on the interviewer, even knowing that he must be an idiot (or an Expert Beginner) to have fallen for your shtick? Working for an organization that asks idiots to conduct interviews probably won’t be fun. Or perhaps the interviewer is perfectly competent and you just lucked out with a wild guess. In that case, do you want to hire on at a job where they think you’re able to handle work that you can’t? Think that’ll go well and you’ll make a good impression?

If you don’t know the answers to questions that they consider important, there’s a pretty decent chance you’d be setting yourself up for an unhappy stay even if you got the job. Be honest, be forthright, and answer to the best of your ability. If you feel confident enough to do so, you can always pivot slightly and, for instance, turn a question about the innards of a relational database to an answer about the importance of having a good DBA to help you while you’re doing your development work or something. But whatever you do, don’t fake it, guess, and pray.

Have the Right Attitude

One of the things I find personally unfortunate about the interview process is how it uniquely transports you back to waiting to hear whether or not you got into the college of your dreams. Were your SAT scores high enough? Did you play a varsity sport or join enough clubs? Did you have enough people edit your essays? Oh-gosh-oh-gee I hope they like me. Or, really, I hope I’m good enough.

Let me end the suspense for you. You are. The interview process isn’t about whether you’re good enough, no matter how many multiple choice questions you’re told to fill out or how much trivia an interviewer sends your way in rapid fire bursts of “would this compile!?” The interview process is ultimately about whether you and the company would be a good mutual fit. It isn’t just a process to help them determine if you’d be able to handle the work that they do. It’s also a chance for you to evaluate whether or not you’d like doing the work that they give you. Both parts are equally important.

So don’t look at it as you trying to prove yourself somehow. It’s more like going to a social event in an attempt to make friends than it is like hoping you’re ‘good’ enough for your favorite college. Do you want to hang out with the people you’re talking to for the next several years of your life? Do you have similar ideas to them as to what good software development entails? Do you think you’d enjoy the work? Do you like, respect, and understand the technologies they use? This attitude will give you more confidence (which will make you interview better), but it also sets the stage for the next point here.

Don’t Waste Your Questions

In nearly every interview that I’ve ever been a part of, there’s the time for the interviewer to assess your suitability as a candidate by asking you questions. Then there’s the “what questions do you have for me” section. Some people will say, “nothing — I’m good.” Those people, as any career site or recruiter will tell you, probably won’t get an offer. Others will take what I believe is fairly standard advice and use this time as an opportunity to showcase their good-question-asking ability or general sharpness. Maybe you ask impressive-sounding things like, “what’s your five-year plan,” or, “I have a passionate commitment to quality as I’m sure you do, so how do you express that?” (the “sharp question” and “question brag,” respectively).

I think it’s best to avoid either of those. You can really only ask a handful of questions before things start getting awkward or the interviewer has to go, so you need to make them count. And you’ll make them count most by asking things that you really want to know the answer to. Are you an ardent believer in TDD or agile methodologies? Ask about that! Don’t avoid it because you want it to be true and you want them to make an offer and you don’t want to offend them. Better to know now that you have fundamental disagreements with them than six months into the job when you’re miserable.

As an added bonus, your interviewer is likely to be a pretty successful, intelligent person. She’s probably got a fairly decent BS detector and would rather you ask questions to which you genuinely want to know the answers.

Forge your Questions in the Fires of Experience

So you’re going to ask real questions, but which questions to ask… My previous suggestion of “ones you want the answer to” is important, but it’s not very specific. The TDD/agile question previously mentioned is an example of one good kind of question to ask: a question which provokes an answer that interests you and gives you information about whether you’d like the job. But I’d take it further than this.

Make yourself a list of things you liked and didn’t like at previous jobs, and then start writing down questions that will help you ferret out whether the things you liked or didn’t will be true at the company where you’re interviewing. Did you like the way your last company provided you with detailed code reviews because it helped you learn? Ask what kinds of policies and programs they have in place to keep developers current and sharp. Did you not like the mess of interconnected dependencies bogging down the architecture of the code at your last stop? Ask them what they think of Singleton as a design pattern. (I kid, but only kind of.)

You can use this line of thinking to get answers to tough-to-ask questions as well. For instance, you’re not going to saunter into an interview and say, “So, how long before I can push my hours to second shift and stroll in at 2 PM?” But knowing things about a company like dress code, availability of flex hours, work-from-home policy, etc. is pretty valuable. Strategize about a way to ask about these things without asking–even during casual conversation. If you say something like “rush hour on route 123 out there seems pretty bad, how do people usually avoid it,” the next thing you hear will probably be about their flex hours policy, if the company has one.

Negative Bad, Zero-Sum Fine

Another piece of iconic advice that you hear is “don’t talk badly about your former/current employer.” I think that’s great advice to be on the safe side. I mean, if I’m interviewing you, I don’t want to hear how all of your former bosses have been idiots who don’t appreciate your special genius, nor do I want to hear juicy gossip about the people at your office. Staying upbeat makes a good impression.

That said, there is a more nuanced route you can travel if you so choose, that I think makes you a pretty strong candidate. If I’m interviewing you, I also know that your former positions aren’t all smiles and sunshine or you wouldn’t be sitting in front of me. When talking about past experience, you can go negative, but first go positive to cancel it out.

My current employer has some really great training programs, and I’ve enjoyed working with every project manager that I’ve been paired with. That’s contributed to me enjoying the culture–and feeling a sense of camaraderie, too. Of course, there were some things I might have done differently in our main code base, from an architectural perspective. I’d have liked to see a more testable approach and an IoC container, perhaps, but I realize that some things take time to change, especially in a legacy code base.

Now you’ve communicated that you recognize that the architectural approach to your code base was sub-optimal, but that you maintain a positive attitude in spite of that. Instead of the interviewer hearing, “man, those guys over there are procedural-code-writing cretins,” he hears, “some things were less than ideal, and I’d like them to improve, but I grow where I’m planted.”

Gather your Thoughts

After you’re done, stop and write down what you thought. I mean it. Walk out of the building, and in your car or on a nearby bench, plop down and write your impressions while they’re fresh in your mind. What did you like, what worries you, what questions should you follow up with, what specifics can you cite? Things will be fuzzy later, and this information is solid gold now.

Your brain is going to play weird tricks on you as time goes by and you’re considering an offer or the next round of interviews. Something that struck you as a red flag might be smoothed over in your mind as you grow increasingly tired of your job hunt. I know they said that they’re as waterfall as Niagara and proud, but I think the tone of voice and non-verbal cues might have indicated a willingness to go agile. You’ll fool yourself. You’ll talk yourself into things. That is, unless you write them down and bring them up as concerns the next time you talk with the company or a representative thereof.

Maintain Perspective

Interviewing is an inherently reductionist activity, both for you and for the company. Imagine if marriage worked like job interviews. The proposition would be put to you and your potential mates this way:

Alright, so you have about two or three cracks at this whole marriage thing before you’re too old for it, so take your time and make a good decision and all that, but do it really fast. You’re going to meet for lunch, a little Q&A, and then you’ll have just enough time to send a thank-you note before you hear thumbs up or thumbs down from your date. If it’s thumbs up, you have a few days to decide if the prenuptial agreement looks good, if you have similar opinions on when to have children and how many, yadda-yadda, and hurry up, and, “do you take this person to be your lawfully wedded, blah, blah, you may now kiss, etc., whatever, done.”

Think a few important details might get missed in that exchange? Think you might be left after an inexplicable rejection, stammering, “b-b-but I know how to cook and I really have a lot to offer… why… I just don’t get it.” It’s pretty likely. There are going to be a lot of bad decisions and the divorce rate will be pretty high.

Back to the interview process, just remember to keep your chin up. You might have interviewed for a job that had already been filled except for the detail of technically having to interview a second person. Maybe the CEO’s son got the job instead of you. Maybe you wore a gray suit and the man interviewing you hates the color gray with a burning passion. Maybe you had a lapse when talking about your WPF skills and said WCF, and someone thinks that makes you a moron. The list goes on, and it often makes no sense. It makes no sense in the way that you’ll look at a company’s website and see a weirdly blinking graphic and think it looks unprofessional and decide not to apply there. You make snap judgments, and so do they. It’s the name of the game. Don’t take it personally.

How to Keep Method Size Under Control

Do you ever open a source code file and see a method that starts at the top of your screen and kind of oozes its way to the bottom with no end in sight? When you find yourself in that situation, imagine that you’re reading a ticker tape and try to guess at where the method actually ends. Is it a foot below the monitor? Three feet? Does it plummet through the floor and into the basement, perhaps down past the water table and into the earth’s mantle?

Visualized like this, I think everyone might agree that there’s some point at which the drop is too far, though there’s likely some disagreement on where exactly this is. Personally, I used to subscribe to the “fits on a screen” heuristic and would only start looking to pull out methods if it got beyond that. But in more recent years, I think even smaller. How small? I dunno–five or six lines, max. Small enough that you’ll only ever see one try-catch or control flow statement in there. Yeah, seriously, that small. If you’re thinking it sounds kind of crazy, I get that, but give it a try for a while. I can almost guarantee that you’ll lose your patience for looking at methods that cause you to think, “wait, where was loopCounter declared again–before the second or third while loop?”

If you accept the premise that this is a good way to do things or that it might at least be worth a try, the first thing you’ll probably wonder is how to go about doing this from a practical standpoint. I’ve definitely encountered people and even whole groups who considered method sizes like this to be impractical. The first thing you have to do is let go of the notion that classes are in some kind of limited supply and you have to be careful not to use too many. Same with modules, if your project gets big enough. The reason I say this is that having small methods means that you’re going to have a lot of them. This in turn means that they’re going to need to be spread to multiple classes, and those classes will occupy more namespaces and modules. But that’s okay. If you encounter a large application that’s well designed and factored, it’s that way because the application is actually a series of small, focused components working together. Monolithic doesn’t scale well.

Getting Down to Business

If you’ve prepared yourself for the reality of needing more classes organized into more namespaces and modules, you’ve really overcome the biggest obstacle to being a small-method coder. Now it’s just a question of mechanics and practice. And this is actually important–it’s not sufficient to just say, “I’m going to write a lot of methods by stopping at the fifth line, no matter what.” I guarantee you that this is going to create a lot of weird cross-coupling, unnecessary state, and ugly things like out parameters. Nobody wants that. So it’s time to look to the art of creating abstractions.

As a brief digression, I’ve recently picked up a copy of Uncle Bob Martin’s Clean Code: A Handbook of Agile Software Craftsmanship and been tearing my way through it pretty quickly. I’d already seen most of the Clean Coder video series, which covers some similar ground, but the book is both a good review and a source of new and different information. To be blunt, if you’re ever going to invest thirty or forty bucks in getting better at your craft, this is the thing to buy. It’s opinionated, sometimes controversial, incredibly specific, and absolutely mandatory reading. It will change your outlook on writing code and make you better at what you do, even if you don’t agree with every single point in it (though I don’t find much with which to take issue, personally).

The reason I mention this book and series is that there is an entire section in the book about functions/methods, and two of its fundamental points are that (1) functions should do one thing and one thing only, and (2) that functions should have one level of abstraction. To keep those methods under control, this is a great place to start. I’d like to dive a little deeper, however, because “do one thing” and “one level of abstraction per function” are general instructions. It may seem a bit like hand-waving without examples and more concrete heuristics.

Extract Finer-Grained Details

What Uncle Bob is saying about mixed abstractions can be demonstrated in this code snippet:

public void OpenTheDoor()
{
    GrabTheDoorKnob();
    TwistTheDoorKnob();
    TightenYourBiceps();
    BendYourElbow();
    KeepYourForearmStraight();
}

Do you see what the issue is? We have a method here that describes (via sub-methods that are not pictured) how to open a door. The first two calls talk in terms of actions between you and the door, but the next three calls suddenly dive into the specifics of how to pull the door open in terms of actions taken by your muscles, joints, tendons, etc. These are two different layers of abstraction: one about a person interacting with his or her surroundings and the other detailing the mechanics of body movement. To make it consistent, we could get more detailed in the first two actions in terms of extending arms and tightening fingers. But we’re trying to keep methods small and focused, so what we really want is to do this:

public void OpenTheDoor()
{
    GrabTheDoorKnob();
    TwistTheDoorKnob();
    PullOpenTheDoor();
}

private static void PullOpenTheDoor()
{
    TightenYourBiceps();
    BendYourElbow();
    KeepYourForearmStraight();
}

Create Coarser Grained Categories

What about a different problem? Let’s say that you have a method that’s long, but it isn’t because you are mixing abstraction levels:

public void CookQuesadilla()
{
    ChopOnions();
    ShredCheese();

    GetOutThePan();
    AddOilToPan();
    TurnOnTheStove();

    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();

    RemovePanFromStove();
    ScrubPanWithBrush();
    ServeQuesadillas();
}

These items are all at the same level of abstraction, but there are an awful lot of them. In the previous example, we were able to tighten up the method by making the abstraction levels consistent, but here we’re going to actually need to add a layer of abstraction. This winds up looking a little better:

public void CookQuesadilla()
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking();
    FinishUp();
}

private static void PrepareIngredients()
{
    ChopOnions();
    ShredCheese();
}
private static void PrepareEquipment()
{
    GetOutThePan();
    AddOilToPan();
    TurnOnTheStove();
}
private static void PerformActualCooking()
{
    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();
}
private static void FinishUp()
{
    RemovePanFromStove();
    ScrubPanWithBrush();
    ServeQuesadillas();
}

In essence, we’ve created categories and put the actions from the long method into them. What we’ve really done here is create (or add to) a tree-like structure of methods. The public method is the root, and it had fourteen children. We gave it instead four children, and each of those children has between two and six children of its own. To tighten up methods, it’s perfectly viable to add “nodes” to the “tree” of your call stack. While “do one thing” is still a little elusive, this seems to be carrying us in that direction. There’s no individual method that you look at and think, “boy, that’s a lot of stuff going on.” Certainly it’s a matter of some art and taste, but this is probably a good way to think of it–organize stuff into hierarchical method categories until you look at each method and think, “I could probably memorize what that does if I needed to.”

Recognize that Control Flow Uses Up an Abstraction

So far we’ve been conceptually figuring out how to organize families of methods into well-balanced tree structures, and that’s taken us through some pretty mundane code. This code has involved none of the usual stuff that sends apps careening off the rails into bug land, such as conditionals, loops, assignment, etc. Let’s correct that. Looking at the code above, think of how you’d modify this to allow for the preparation of an arbitrary number of quesadillas. Would it be this?

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    for(int i = 0; i < numberOfQuesadillas; i++)
        PerformActualCooking();
    FinishUp();
}

Well, that makes sense, right? Just like the last version, this is something you could read conversationally while in the kitchen just as easily as you do in the code. Prep your ingredients, then prep your equipment, then for some integer index equal to zero and less than the number of quesadillas you want to cook, increment the integer by one. Each time you do that, cook the quesadilla. Oh, wait. I think we just went careening into the nerdiest kitchen narrative ever. If Gordon Ramsay were in charge, he'd have strangled you with your apron for that. Hmm... how 'bout this?

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking(numberOfQuesadillas);
    FinishUp();
}

private static void PerformActualCooking(int numberOfQuesadillas)
{
    for (int index = 0; index < numberOfQuesadillas; index++)
    {
        SprinkleOnionsAndCheeseOnTortilla();
        PutTortillaInPan();
        CookUntilFirm();
        FoldTortillaAndCookUntilBrown();
        FlipTortillaAndCookUntilBrown();
        RemoveCookedQuesadilla();
    }
}

Well, I'd say that the CookQuesadillas method looks a lot better, but do we like "PerformActualCooking?" The whole situation is an improvement, but I'm not a huge fan, personally. I'm still mixing control flow with a series of domain concepts. PerformActualCooking is still both a story about for-loops and about cooking. Let's try something else:

public void CookQuesadillas(int numberOfQuesadillas)
{
    PrepareIngredients();
    PrepareEquipment();
    PerformActualCooking(numberOfQuesadillas);
    FinishUp();
}

private static void PerformActualCooking(int numberOfQuesadillas)
{
    for (int index = 0; index < numberOfQuesadillas; index++)
        CookAQuesadilla();
}

private static void CookAQuesadilla()
{
    SprinkleOnionsAndCheeseOnTortilla();
    PutTortillaInPan();
    CookUntilFirm();
    FoldTortillaAndCookUntilBrown();
    FlipTortillaAndCookUntilBrown();
    RemoveCookedQuesadilla();
}

We've added a node to the tree that some might say is one too many, but I disagree. What I like is the fact that we have two methods that contain nothing but abstractions about the domain knowledge of cooking and we have a bridging method that brings in the detailed realities of the programming language. We're isolating things like looping, counting, conditionals, etc. from the actual problem solving and story telling that we want to do here. So when you have a method that does a few things and you think about adding some kind of control flow to it, remember that you're introducing a detail to the method that is at a lower level of abstraction and should probably have its own node in the tree.

Adrift in a Sea of Tiny Methods

If you're looking at this cooking example, it probably strikes you that there are no fewer than eighteen methods in this class, not counting any additional sub-methods or elided properties (which are really just methods in C# anyway). That's a lot for a class, and you may think that I'm encouraging you to write classes with dozens of methods. That isn't the case. So far what we've done is started to create trees of many small methods with a public method and then a ton of private methods, which is a code smell called "Iceberg Class." What's the cure for iceberg classes? Extracting classes from them. Maybe you turn the first two methods that prepare ingredients and equipment into a "Preparer" class with two public methods, "PrepareIngredients" and "PrepareEquipment." Or maybe you extract a quesadilla cooking class.
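
To make that a little more concrete, here is a minimal sketch of what that first extraction might look like. Only the "Preparer" name and its two public methods come from the paragraph above; everything else, including the "QuesadillaCook" class name, is just illustrative filler.

public class Preparer
{
    public void PrepareIngredients()
    {
        ChopOnions();
        ShredCheese();
    }

    public void PrepareEquipment()
    {
        GetOutThePan();
        AddOilToPan();
        TurnOnTheStove();
    }

    // The leaf-level actions become private details of this class.
    private void ChopOnions() { /* ... */ }
    private void ShredCheese() { /* ... */ }
    private void GetOutThePan() { /* ... */ }
    private void AddOilToPan() { /* ... */ }
    private void TurnOnTheStove() { /* ... */ }
}

public class QuesadillaCook
{
    // The original class now collaborates with the extracted one
    // instead of owning every leaf method itself.
    private readonly Preparer _preparer = new Preparer();

    public void CookQuesadillas(int numberOfQuesadillas)
    {
        _preparer.PrepareIngredients();
        _preparer.PrepareEquipment();
        PerformActualCooking(numberOfQuesadillas);
        FinishUp();
    }

    // PerformActualCooking(), CookAQuesadilla(), FinishUp(), and the rest
    // of the cooking methods stay here, exactly as before.
    private void PerformActualCooking(int numberOfQuesadillas) { /* as before */ }
    private void FinishUp() { /* as before */ }
}

Nothing about the tree of calls has changed; some of its nodes just live behind a different class's public face now.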

It's really going to vary based on your situation, but the point is that you take this opportunity to pick nodes in your growing tree of methods and sub-methods and convert them into roots by turning them into classes. And if doing this leads you to having what seems to be too many classes in your namespace? Create more namespaces. Too many of those in a module? Create more modules. Too many modules/projects in a solution? More solutions.

Here's the thing: the complexity exists no matter how many or few methods/classes/namespaces/modules/solutions you have. Slamming them all into monolithic constructs together doesn't eliminate or even hide that complexity, though many seem to take the ostrich approach and pretend that it does. Your code isn't somehow 'simpler' because you have one solution with one project that has ten classes, each with 300 methods of 7,000 lines. Sure, things look simple when you fire up the IDE, but they sure won't be simple when you try to debug. In fact, they'll be much more complicated because your functionality will be hopelessly interwoven with weird temporal couplings, ad-hoc references, and hidden dependencies.

If you create large trees of functionality, you have the luxury of making the structure of the tree the representative of the application's complexity, with each node an island of simplicity. It is in these node-methods that the business logic takes place and that getting things right is most important. And by managing your abstractions, you keep these nodes easy to reason about. If you structure the tree correctly and follow good OOP design and practice, you'll find that even the structure of the tree is not especially complicated since each node provides a good representative abstraction for its sub-tree.

Having small, readable, self-documenting methods is no pipe dream. Really, with a bit of practice, it's not even very hard. It just requires you to see code a little bit differently. See it as a series of hierarchical stories and abstractions rather than as a bunch of loops, counters, pointers, and control flow statements, and the people that maintain what you write, including yourself, will thank you for it.

Language Basics from Unit Tests

Let’s say that in a green field code base someone puts together a type that conceptually is a collection of non-integer values. For the sake of discussion, let’s call it a graph. A graph object might store a series of two-element tuples or perhaps a series of some value type like “point.” The graph might then perform operations on this data, such as IncreaseX() or IncreaseY() or Invert() or Divide()–operations that iterate through the points and do things to them. The actual mechanics of this don’t matter a whole lot. It’s the concept that’s important.
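
Just to put a rough shape to that idea (the names and internals here are hypothetical and, as noted, not really the point), such a type might look something like this:

using System.Collections.Generic;

public struct Point
{
    // The coordinates are floats, which will turn out to matter.
    public float X;
    public float Y;
}

public class Graph
{
    // Internally, the graph is just a series of points.
    private readonly List<Point> _points = new List<Point>();

    public void Add(float x, float y)
    {
        _points.Add(new Point { X = x, Y = y });
    }

    // Operations iterate through the points and do things to them.
    public void IncreaseX(float amount)
    {
        for (int index = 0; index < _points.Count; index++)
        {
            var point = _points[index];
            point.X += amount;
            _points[index] = point;
        }
    }

    // IncreaseY(), Invert(), Divide(), and friends would follow the same pattern.
}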

Now let’s say that in the graph the internal representation of the points is a floating point data type such as, well, float. I’m going to save the nuance of floating point arithmetic for a future practical math post, but suffice it to say that floats can exhibit some weird-seeming behavior when it comes to comparisons, truncation/rounding, certain kinds of casting and type representations, etc.

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Mind_Equals_Blown()
{
    float x = 0.2f;
    float y = 0.1f;
    float z = x + y;

    Assert.IsTrue(z == x + y);  //What the - why does this fail?!?
}

And let’s also say that the person responsible for authoring this graph class hasn’t read a practical math post about floating point arithmetic and is completely oblivious to these potential pitfalls.

And, finally, let’s say that this graph class becomes a mainstay of the business logic in a particular application. It’s modified, extended, and relied heavily upon without a whole lot of attention paid to its internal workings. At least until stuff mysteriously doesn’t work. But when that happens, the culprit isn’t immediately obvious, so strange work-arounds and cargo-cult, oddball solutions spring up when symptoms occur. Extension methods are written, and sometimes entirely different modules are added to the code base because the existing one is “tricky” or “not to be trusted.”

At the application level, this causes maintenance issues, a lot of heated and fruitless arguments, and voodoo approaches to code. From a user interface perspective, it causes quirky behavior. Occasionally a plotted line is displaced completely outside the graph area and rendered on some menu somewhere, or the screen goes blank for a few seconds and then the display is restored. Defects and defect reports are created and developers dispatched to track down the issue, but after a few days of fruitless efforts, some project manager quietly sets the defect’s priority from “critical” to “cosmetic” and the software is shipped. It’s embarrassing, but whatcha gonna do. Ya know, computers have a mind of their own sometimes!

Catching it Early

What if, instead of doing things the old-fashioned but all-too-common way, the authors of this code had been writing unit tests and/or practicing TDD? Well, there’s a very good chance that the issue stemming from the graph library is caught immediately as its API methods are being fleshed out from a functionality perspective. There’s a good chance that someone is writing a test and gets to the point that we were at in the code sample above, where they are utterly dumbfounded as to why 1+1 does not equal 2 in float land.

And then, good things happen. The developer in question takes to google or stack overflow, or perhaps he talks to other, more experienced developers on his team. He then gets an explanation, learns something about the language, and leaves the code in a correct state. Contrast this with the non-tested approach of “code it up, build a bad house on the bad foundation, and then ship the result because it’s too late.”
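
As a sketch of where that might land (my illustration, not something from the original example), the usual fix is to compare floats against a tolerance rather than for exact equality. MSTest’s Assert.AreEqual even has an overload that accepts a delta for exactly this purpose; the test name here is, of course, made up:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Mind_Reassembled()
{
    // Requires using System; in addition to the MSTest namespace from the earlier test.
    float x = 0.2f;
    float y = 0.1f;
    float z = x + y;

    // Compare within a small tolerance instead of demanding bit-for-bit equality.
    Assert.IsTrue(Math.Abs(z - (x + y)) < 0.0001f);

    // The delta overload expresses the same idea more directly.
    Assert.AreEqual(0.3f, z, 0.0001f);
}

Whether a tolerance, a decimal type, or some other representation is the right call depends on the domain; the point is that the test surfaces the question immediately instead of letting it leak into the GUI later.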

And what if the TDD/unit tests don’t expose this issue? Well, what they’ll do in either case is decouple the code base. So when the issue eventually does crop up via weird GUI behavior, it will be much easier to isolate. When it’s isolated, it will be much easier for the unit-test-savvy developers to write a test that exposes the defect to learn the lesson and fix the issue. It’s still a win.

The point about unit tests helping catch errors and leading to a more decoupled design is hardly controversial. But the benefits go beyond that. Unit tests provide a fast feedback loop for all points in the code base, which lends itself very well to poking and prodding things and experimenting. And that, in turn, leads to better understanding of not only the code, but also the language. If you can execute and get feedback on code extremely quickly, you’re much more likely to ask questions like, “I wonder what happens if I do x…” and then to do it and see. And that sort of experimentation, much like immersion in natural language, leads much more quickly to fluency.
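
For instance, a throwaway test like this one (a contrived experiment of my own, not from the graph example) answers an “I wonder what happens if…” question about casting in a couple of seconds:

[TestMethod]
public void Casting_A_Float_To_Int_Truncates()
{
    float almostFive = 4.99999f;

    // Casting truncates toward zero rather than rounding to the nearest integer.
    Assert.AreEqual(4, (int)almostFive);
}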