DaedTech

Stories about Software

Coding Conventions and Ralph Waldo Emerson

Convention or Improvement

I was reading (and commenting on) a blog post by Jimmy Bogard at Los Techies yesterday. The basic premise was that, since Systems Hungarian Notation has basically been rejected by the developer community at large, it is interesting that we continue to pre-pend an “I” to interfaces in C#.
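For anyone who hasn’t seen the two conventions side by side, here’s a quick, made-up illustration of what’s being compared (the type and member names are invented for the example):

// Systems Hungarian encodes the variable's type into its name, a practice
// that has largely fallen out of favor:
public class HungarianExample
{
    public void Lookup(ICustomerRepository repository)
    {
        string strSsn = "123-45-6789";
        string strName = repository.GetNameBySsn(strSsn);
        System.Console.WriteLine(strName);
    }
}

// ...and yet the "I" prefix, which similarly encodes "this is an interface"
// into the type name, remains the standard C# convention:
public interface ICustomerRepository
{
    string GetNameBySsn(string ssn);
}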

The post and some of the comments touched on the idea that while, yes, this is incongruous with eschewing Systems Hungarian, we should continue to do it because it’s what other programmers expect, and not following the convention makes your code less intuitive and readable to other C# developers. Jimmy says:

The train for picking different naming styles for interfaces and generic type parameter names left the station a long, long time ago. That ship has sailed. Picking a different naming convention that goes against years and years of prior art…

In the interest of full disclosure, I follow the naming convention and understand the point about readability, but it led me to consider a broader, more common theme that I’ve encountered in my programming career and life in general. At what point do we draw the line between doing something by convention because there is a benefit and mindlessly doing something by convention?

Emerson

Ralph Waldo Emerson famously said, “A foolish consistency is the hobgoblin of little minds.” This is often misquoted as “Consistency is the hobgoblin of little minds,” which, in my opinion, frames my question here well. The accurate quote that includes “foolish” seems to suggest that following convention makes sense, so long as one continually justifies doing so, whereas the latter misses the subtlety and seems to advocate iconoclasm for its own sake, which, one might argue, is a foolish consistency in and of itself. We all know people that mindlessly hate everything that’s popular.

Lest I venture too far into philosophical considerations and away from the technical theme of this blog, I’ll offer this up in terms of the original subject for debate. Is the fact that a lot of programmers do something a valid reason to do it yourself, or is that justification an all-purpose cudgel for beating people into following the status quo for its own sake? Is the convention argument a valid one in its own right?

Specifics

This is not the first incarnation in which I’ve seen this argument about convention. I’ve been asked to name method variables “foo” instead of “myFoo” because that’s a convention in a code base, and other people don’t prepend their variables with “my”. Ostensibly, it makes the code more readable if everyone follows the same naming scheme. This is just one of countless examples I can think of where I’ve been asked to conform to a naming scheme, but it’s an interesting one. I have a convention of my own where class fields are prepended with underscores, method parameters are camel case, and method variables are prepended with “my”.

This results in code that looks as follows:

internal class Customer
{
    // Class field: prepended with an underscore.
    private string _ssn;

    internal string Ssn { get { return _ssn; } }

    // "customer" is a method parameter: camel case, no prefix.
    internal bool Equals(Customer customer)
    {
        // Method variable: prepended with "my".
        var myOtherSocial = customer.Ssn;
        return _ssn == myOtherSocial;
    }
}

There is a method to my madness, vis-a-vis readability. When I look at code that I write, I can tell instantly whether a given variable is a class field, method parameter, or method variable. To me, this is clear and readable. What I was asked to do was rename “myOtherSocial” to “otherSocial”, thus, in my frame of reference, losing the at-a-glance information as to what is a method parameter and what is a local variable. This promotes overall readability for the many while reducing readability for the few (i.e., me).

That’s an interesting tradeoff. One could easily make the Vulcan argument that the readability needs of the many outweigh the needs of the few. But, one could also make the argument that the many should adapt to my convention, since it provides more information. Taking it at face value that my way is better (only for argument’s sake – I’m not arguing that there is a ‘right’ in a matter of personal readability and aesthetics), is the idea of convention — that a lot of people do it the other way — a valid argument against improvement?

Knowing When Conventions Can be Ignored

I don’t presume to know what the best naming scheme for local variables or interfaces is. In fact, I’d argue that it’s likely a matter of aesthetics and expectations. People who have a harder time mentally switching contexts are going to be more likely to dig in, and mile-a-minute, flighty thinkers will be more likely to get annoyed at such ‘pettiness’ and ‘foolish consistency’. Who’s right? Who knows – who cares.

Where the rubber meets the road is how consistency or lack thereof affects a development team as a whole. And, therein lies the subtlety of what to do, in my mind. If I am tasked with developing some library with a small, public-facing API for my part in a team, it makes the most sense for me to write code in a style that makes me most productive. The fact that my code might be less readable to others will only be relevant at code reviews and in the API itself. The former is a one-and-done concern and the latter can be addressed by establishing “public” conventions to which I would then adhere. Worrying about some hypothetical maintenance programmer when deciding what to name and how to case my field variables is fruitless to a degree because that hypothetical person’s readability preferences are strictly unknowable.

On the other hand, if there is a much more open, collaborative paradigm where people are constantly modifying one another’s code, then establishing conventions probably makes a lot of sense, and following the conventions for their own sake would as well. The conventions might even suck, but they’ll be consistent, and probably save time. That’s because the larger issue with the consistency/convention argument is facilitating efficiency in communication — not finding The One True Way To Do Things.

So, the “arbitrary convention” argument picks up a lot of steam as collaboration becomes more frequent and granular. I think in these situations, the “foolish consistency” becomes an asset to practical developers rather than a “hobgoblin of little minds.”

Be an Adapter Instead of an Evangelizer

All that said, the conventions and arguments still annoy me on some level. I can recognize the holistic wisdom in the approach while still being aggravated by being asked to do something because a bunch of other people are also doing it. It evokes images in my head of being forced to read “Us Weekly” because a lot of other people care how many times Matt Damon went to the gym this week. Or, more on subject, it evokes images of being asked to prepend “str” to all my string variables because other people think that (redundant in Visual Studio) information makes things more readable.

But, sometimes these things are unavoidable. One can go with it, or one can figure out a way to make all parties happy. I blogged previously about my solution to my own brushes with others’ conventions. My solution was a DXCore plugin that translated my conventions to a group’s, and vice-versa. I’m happy, the group is happy, and I learned stuff in the process.

So, I think the most important thing is to be an adapter. Read lots of code, and less will throw you off your game and cause you to clamor for conventions. Figure out ways to satisfy all parties, even if it means work-arounds or interesting solutions. Those give your brain some exercise anyway. Be open to both the convention argument and the argument for a new way of doing things. Go with the flow. The evangelizers will always be there, red-faced, arguing that camel case is stupid and only Pascal case makes sense. That’s inevitable. It’s not inevitable that you be one of them.

(Image compliments of: http://geopolicraticus.wordpress.com/2009/01/28/short-term-thinking/ )

More and More With TDD

TDD Revisited Again

Previously, I blogged about TDD with some reservations, and then again later as I re-introduced myself to the pure practice as I had learned it many moons ago. For the last month, I have been plowing ahead methodically with this experiment, and I thought I’d revisit some of my thoughts from earlier.

Reinforced Pros

First off, the pros still stand in my mind, but I’d like to add one enormous additional pro, which is that you get code right the first time. I mean, really, really right. Right as in coding for a few days without ever running the application you’re working on, and then firing it up and having it work flawlessly the first time you run it.

And, I’m not talking about plodding along with some tiny development exercise like scoring a bowling game. I mean, I recently wrote an application that would parse a text file, format and filter records, and insert the result in a database. Using a combination of Moq for my internal seams and Moles for external concerns (file and database), I exhaustively tested everything I wanted to do as I went, following the red-green-refactor process to the letter. After some hours here and there of working on this (shaking off my TDD rust being part of the time spent), I finally hit F5 and it worked. My files were parsed, my databases updated, my log file written accurately. This wasn’t a fluke, either, because I repeated the process when I wrote another utility that piggy-backed on this first one to send out emails with the results. I wrote an API for setting up custom mail messages, dependency injection for different mail clients, and wireup code for integrating. Never once did I step through the debugger trying to figure out what network credential I had missed or which message was missing which formatting. I put the finishing touches on it with my tests passing and code coverage at 100%, hit F5 and had a well-formed, perfect message in my inbox before I even looked away from my screen, expecting to be kicked into the debugger. That is a remarkable pro.
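For concreteness, here’s a minimal sketch of the kind of test I mean, using Moq to mock an internal seam. The IRecordSource and RecordFilter types are invented stand-ins for illustration, not the actual classes from the utility described above:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface IRecordSource
{
    string[] ReadLines();
}

public class RecordFilter
{
    private readonly IRecordSource _source;

    public RecordFilter(IRecordSource source)
    {
        _source = source;
    }

    // Keep only non-empty, pipe-delimited records.
    public string[] GetValidRecords()
    {
        return _source.ReadLines()
            .Where(line => !string.IsNullOrEmpty(line) && line.Contains("|"))
            .ToArray();
    }
}

[TestClass]
public class RecordFilterTests
{
    [TestMethod]
    public void GetValidRecords_Discards_Empty_Lines()
    {
        // Moq supplies the seam, so no actual file is ever touched.
        var source = new Mock<IRecordSource>();
        source.Setup(s => s.ReadLines()).Returns(new[] { "123|Smith", "" });

        var filter = new RecordFilter(source.Object);

        Assert.AreEqual(1, filter.GetValidRecords().Length);
    }
}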

I mean, think of that. If you’re a skeptic about TDD or unit testing in general, you’ve probably never experienced anything like that. If you have, that’s amazing. I don’t know anyone that doesn’t develop in this fashion that has this happen. I mean, people who are really sharp get it mostly right, and sometimes just have to tweak some things here and maybe step through a few functions a few times, but without ever running the code? I’m not buying it. And even if they did, I come out way ahead this way. A non-tested code base is going to be harder to modify. I have hundreds of passing tests now that will tell me if subsequent changes break anything.

Cons Mitigated

So, I’d like to take a look back at the cons, and revisit my opinions after a month of doggedly sticking with TDD:

  1. Larger design concepts do not seem to be emergent the way they might be in a more shoot-from-the-hip approach — figuring out when you need a broader pattern seems to require stepping out of the TDD frame of mind
  2. There is no specific element of the process that naturally pushes me away from the happy path — you wind up creating tests for exceptional/edge cases the way you would without TDD (I guess that’s not strictly a con, just a lack of pro).
  3. It becomes easier to keep plodding away at an ultimately unworkable design – it’s already hard to throw away work you’ve put in, and now you’re generating twice as much code.
  4. The cost of correcting tests that have been invalidated by changing requirements is magnified, since your first code often turns out not to be correct — I find myself refactoring tests as often as code in the early going.
  5. Has the potential to slow development when you’re new to the process — I suppose one might look at this as a time investment to be paid off when you get more fluid and used to doing it.

Regarding (1) and (3), this seems not to be the issue that I thought it would be. I had grown used to a fast flurry of trying out a class concept and realizing on the fly what refactorings needed to be done, causing the class to morph very quickly. I had gotten very proficient at ongoing refactoring without the red-green preceding it, so, in this way, I was able to ‘fake’ the TDD benefit. However, I realized a paradoxical benefit – TDD forced me to slow down at the beginning. But rather than really slowing me down on the whole, it just made me toss out fewer designs. I still refactor with it, but it isn’t necessary as often. TDD forces me to ask “what do I want out of this method/class?” rather than “what should this do and how should it relate to others?” The difference is subtle, but interesting. The TDD way, I’m looking at each method as an atom and each class as a unit, whereas the other way I’m putting the cart a bit before the horse and looking at larger interactions. I thought this was a benefit, but it resulted in more scratch-out work during prototyping than was really necessary. I was concerned that TDD would force me to throw away more code because I’d be tossing tests as well as code, but in reality, I just toss less code.

Regarding (2), this has sorted itself out as well. TDD does tend to lead you down the happy path as you’re using it to incrementally correct and refine a method’s behavior, but there’s nothing that stops you from tossing in, along with that, a thought of “how do I want this to handle an exception in a method that it calls?” This, I’m finding, is a discipline, and one that I luckily brought with me from testing after or testing during. It is not incompatible with TDD, though TDD does help me not get carried away focusing on corner cases (which can be ferreted out with tools like Pex or smoke tests later).
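Here’s a small sketch of what I mean by tossing in the exceptional path as you go (again with invented types); Moq makes it trivial to force a collaborator to throw so the test can pin down the behavior you want:

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface ITextSource
{
    string[] ReadLines();
}

public class TolerantReader
{
    private readonly ITextSource _source;

    public TolerantReader(ITextSource source)
    {
        _source = source;
    }

    // Design decision captured by the test below: swallow I/O failures
    // and return an empty result rather than letting the exception bubble up.
    public string[] ReadOrEmpty()
    {
        try
        {
            return _source.ReadLines();
        }
        catch (IOException)
        {
            return new string[0];
        }
    }
}

[TestClass]
public class TolerantReaderTests
{
    [TestMethod]
    public void ReadOrEmpty_Returns_Empty_Array_When_Source_Throws()
    {
        var source = new Mock<ITextSource>();
        source.Setup(s => s.ReadLines()).Throws(new IOException());

        var reader = new TolerantReader(source.Object);

        Assert.AreEqual(0, reader.ReadOrEmpty().Length);
    }
}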

Regarding (4), this has also proved not to be too much of an issue, though I have seen a touch of this. But, the issue really comes in when it turns out that I don’t need some class, rather than a swath of tests. TDD helps my classes be better thought out and more decoupled, and so, I’ve found that when I’m cutting things, it tends to be surgical rather than a bloodbath.

Regarding (5), this is true, but I’ve fought through the burn. However, there is another thing that goes on here that I’d point out to people considering a foray into TDD. Imagine the scenario that I described above, and that you’re the non-TDD hare and I’m the TDD tortoise. I’m plodding along, red-green-refactoring my parser when you’ve finished yours. You’re probably running it on an actual file and making changes while I’m still working methodically. By the time I’m finished with my parser and doing the formatting, you’ve got your formatting finished and are working on the database. By the time I get to the database, you’ve already got things working 90%, and are busy stepping through the debugger and cursing the OleDbCommand API.

But, that’s where things start to get interesting. Oftentimes, in my experience, this is the part of development that has you at the office late, pulling your hair out. The application is 90% done at 4:00, and you’re happy because you just need to make a minor tweak. Suddenly, it’s 8:00 and you’re hungry and cranky and cursing because no one is left in the office. That’s a brutal 4 hours, specifically because you imagined it to be 20 minutes at 4:00. But, let’s say you finally get it and go home. I’ll finish an hour or so after you and you win the first leg of the race, though I’ll do it without a lot of fanfare and frustration, since I’m never hitting snags like that with my methodical process.

But now, an interesting thing happens. We send our versions out for Beta testing, and the issues start coming back. You have your own knowledge of your code to go on as you fix defects and perform minor refactorings, while I have 100% code coverage. I quickly find issues, fix them, see all green and am ready to deliver. You quickly find issues, fix them, and then run the application a bunch of times in different scenarios trying to ferret out anything that you could possibly have broken. I’m gaining on you fast. And, I’m going to keep gaining on you as the software grows. If Beta lasts more than a short time, I’ll win. If there are multiple versions of the software, I’ll win easily. At some point, I’ll lap you as you spend way more time repairing defects you’re introducing with each change than fixing the one you set out to fix. The time investment isn’t new TDD versus seasoned TDD. It’s a little time up front for a lot of time later.

Eating Crow

I had tried TDD in the past, but I think I wasn’t ready or doing it right. Now, I’m convinced that it’s definitely the path for me. Somewhere, I saw a presentation recently that mentioned sort of an arc of unit testing. People tend to start out with test-last, then move to test-during, then test-first, and then test-driven, according to this presentation (I wish I remembered whose it was so I could link and give credit). I think my previous issue was that I had short-circuited the arc and was trying to assimilate too much all at once. If you’re a developer trying to convince others of the merits of unit tests, TDD might be a lot to spring on them up front.

I was thinking about why that might be, and something occurred to me from way back in my undergrad days. The first thing we all tend to do is want to write all of the code at once. Since we’re given small assignments to start, this works. It’s hard to turn bubble sort into a lot of different feedback passes — you write the whole thing and then debug the ‘finished’ product. As classes get harder, you try to keep this dream alive, but at some point you have the “aha” moment that you need to get shorter feedback cycles to prevent assembling the whole thing and getting it all wrong. So, you do this by making your code modular and developing components. That carries you through into the early part of your career, and you have a natural tendency to want to get code up and running as quickly as possible for feedback. Then, TDD comes along and changes the nature of the feedback altogether. You’re no longer getting the feedback from your running application, but from your unit tests of small components. So, if you’re building a house app, you don’t start with the House class and run it. Instead, you start with a “Bedroom” class, or even a “Bed” class. While your counterparts are adding rooms to their running houses, you’re getting your “Bed” class working flawlessly. And, while they’re cobbling furniture into their rooms, you’re assembling rooms from their furniture. When they’re debugging, you’re finally ready for your first “F5” feedback. But, you haven’t come full circle and turned into that freshman with bubble sort – you’ve gotten real feedback every 30 seconds or so. You’ve gotten so much feedback, that there’s little anticipation with the first real run of the component you’re working on for that week. You’re pretty sure it’s going to work and you’re going to be heading home at 5:00.
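To put a toy code example behind the analogy (Bed and Bedroom here are just the stand-ins from the paragraph above), the feedback comes from tests like this every few minutes, long before there is any “House” to hit F5 on:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Bed
{
    public bool IsOccupied { get; private set; }

    public void Occupy()
    {
        IsOccupied = true;
    }
}

public class Bedroom
{
    private readonly List<Bed> _beds = new List<Bed>();

    public void AddBed(Bed bed)
    {
        _beds.Add(bed);
    }

    public int BedCount
    {
        get { return _beds.Count; }
    }
}

[TestClass]
public class BedroomTests
{
    [TestMethod]
    public void AddBed_Increases_BedCount_To_One()
    {
        var room = new Bedroom();

        room.AddBed(new Bed());

        Assert.AreEqual(1, room.BedCount);
    }
}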

So, yeah, I guess I’m a TDD convert, and will eat some crow for my earlier posts.

Using Moles with the System Assembly

Short but sweet post tonight, and I’m mostly putting this here for reference. It’s annoyed me a few times, and each time I have to google around until I find something like this Stack Overflow post.

If you add a Moles assembly for System, unlike for other framework assemblies, building your test project will blow up spectacularly with too many compiler errors to count. I don’t know all the whys and wherefores, but I do know that it’s easily remedied as hveiras says. You simply change:

  <Assembly Name="System" />

to

  <Assembly Name="System" ReflectionOnly="true" />

in your System.Moles file that’s generated. That’s it.

Philosophically, it seems like a bit of an oversight that this would be necessary, as this never fails to fail out of the box. There may be something I’m missing, but it seems like it’d be nice if this attribute were added by default for the System Assembly. In any case, I (and you) have it here for posterity now, though probably the act of explicitly typing it out has rendered it moot for me, as documenting things usually makes me remember them and not need my documentation… which reminds me – I’m probably about due for a periodic re-reading of one of my all time favorite books.

Static Methods: Time to Hit Rock Bottom

A Tragic Story

It starts out innocently enough. You’re out at a party with some friends, and some people you want to impress are there. Everyone is having a good time, but you notice that some of the ‘cool’ kids are periodically going into a room and coming out giggling. Not wanting to be left out, you ask what’s going on, and are told that it’s none of your business. You have some concept that they’re doing something wrong — something forbidden — and you see what a mystique it creates. Later on, you ask around subtly and figure out what’s going on, and the next time you’re at a party, you know the right things to say to get an invitation to that coveted side room. You make your way in nervously, and there, you are introduced to the exhilarating euphoria of casting aside all thoughts of dependency and responsibility and writing a static method. Now, you’re one of the cool kids.

It’s no problem at first. You just write static methods on the weekends sometimes because it feels good to allow all your classes to access some service without passing an instance around. It’s not as if all of your code is procedural, by any stretch. And, yeah, sometimes you get a little weird behavior or hard to track down defects, but you’ve got it under control. Some of your less cool friends tell you that maybe you should think about modeling those behaviors with instances, but you tell them to mind their own business — they wouldn’t understand.

Static methods are just a way of blowing off steam and dealing with stress. When you’ve got project managers riding you to get things done, deadlines to meet and lots of things on your plate, it’s really gratifying to write a static method or two. They’re so easy, so efficient. It helps you get through the day.

But then, something weird starts to happen. They’re not helping you feel good anymore — they’re not cleaning up your code or making it more efficient. You need them just to get by. You have 50 classes that all depend on some static class with static variables, and they need to retrieve and set those variables in just the right order. It’s a mess, and it’s impossible to figure out why you’ve got this stupid defect that the stupid user shouldn’t care about anyway, that only occurs on Windows 7 when two specific screens are showing, the database connection is closing and a window was opened previously. If only you didn’t have all of these stupid global variables in your static classes, this wouldn’t be a problem. If only you’d listened to your friends. The only way out of this static nightmare for even a few minutes is… another static global flag.

Okay, okay, you admit it. This has gotten a little out of hand. So, you go to the project managers and tell them that the code can’t be fixed. It’s time for version 2.0. You’re going to rewrite everything with a much better design, and this time, you swear, you’re only going to use static methods in moderation. You’ll be a lot more responsible with them this time around. You’ve learned your lesson. Yessiree, this time, everything’s going to be different.

But it’s not. You need to hit rock bottom. Only then can you find your way back.

Static Methods are Like an Addiction

It never seems to fail in a code base of decent size. Poke around the portions of the code that tend to cause the most issues and are hardest to reason about, and the ‘static’ keyword will tend to abound. The worse it is, the more of them you’ll see. People know that global variables are bad, and it is becoming widely accepted that singletons tend to be problematic and abused (if one doesn’t believe that their very use counts as abuse). People generally concur that static state is problematic. And yet, it keeps on keeping on. More is introduced every day. Here are some observations that I’ve made about the use of statics in code:

  1. Most people rationalize their use.
  2. Time constraints and stress are often cited as justifications.
  3. Promises to do something about them at a nebulous future date abound.
  4. People continue using them in spite of being aware of their demonstrable, negative effects.
  5. The ‘solution’ for the problems they’ve caused is often using more of them.
  6. People get defensive when asked about them.

One blogger that I follow, Julian Birch, compares them to crack in a post entitled “Going Cold Turkey on Static Methods”:

Static methods are like crack, they’re convenient, they solve your problems quickly and you quickly become dependent upon them. You’re so dependent you don’t even know you’ve got a problem.

I hadn’t seen this particular post when I started mine, but it was nice to google around a bit and see that I’m not the only one to whom this particular metaphor had occurred.

Emerging from the Metaphor

The metaphor here is just that – a literary device intended for illustrative effect in demonstrating a point. I’m not actually suggesting that static methods will ruin your life, and I’m not attempting to trivialize the struggles that people have with actual addiction. I’m posting this because of my observation of the tendency of static methods in a code base to become a real, huge problem, that people immersed in the code base don’t see, but outside observers are immediately alarmed by.

Static state/methods in a code base generally represent a tradeoff, but a subtly different one, in my opinion, than their advocates would have you believe. They claim a tradeoff between complex instantiation logic and complex runtime logic. I claim that the real tradeoff is skipping a little extra reasoning up front in exchange for an avalanche of future problems. To me, introducing a bunch of static functionality into a code base is like saying “I’m making a conscious choice to avoid all the effort needed to plug in my sump pump, and if my basement floods with sewage, I’m fine with that.” What?!? Seriously?!?

Rationale

In a post themed around a different literary device, the pun, John Sonmez says:

I don’t like static methods. They make me cringe. In the universe of OO static methods are anti-matter.

He goes on to talk about specific issues with the concept of static methods, including:

  1. A tendency to violate the SRP since they are methods on a class that have nothing to do with the state of that class.
  2. They are global procedures (as in procedural code) that are categorically not object oriented programming.
  3. They kill testability of code bases. Misko Hevery agrees.
  4. They are not polymorphic in that they cannot inherit, be inherited, or implement interfaces.
  5. They force programmers to know more things up front.

I would add to this list:

  1. They hide dependencies and allow an illusion of simplicity in APIs to be created (they cause your API to ‘lie’; see the sketch after this list).
  2. They make runtime reasoning about the code very difficult when they encapsulate state.
  3. Simply by existing, they tend to promote global flags if there is a need to extend functionality and not break existing clients.
  4. Static classes tend naturally to balloon in size, since their methods can’t be extended.
  5. They increase the number of potential dependencies in your code base combinatorially.
  6. They cast temporal couplings in stone within your code.
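As a concrete sketch of the first couple of points (Logger and OrderProcessor are contrived examples, not anything from a real code base), the static-flavored version below has a constructor that claims it needs nothing, while in reality it depends on hidden global state that somebody must remember to initialize first:

using System;

// The static-flavored version: the API "lies" about its dependencies.
public static class Logger
{
    public static string LogFilePath; // hidden global state

    public static void Log(string message)
    {
        // Blows up at runtime unless someone, somewhere, set LogFilePath first:
        // a temporal coupling that no method signature reveals.
        System.IO.File.AppendAllText(LogFilePath, message + Environment.NewLine);
    }
}

public class OrderProcessor
{
    public OrderProcessor()
    {
        // Claims to need nothing...
    }

    public void Process(string orderId)
    {
        // ...but secretly needs Logger to have been configured.
        Logger.Log("Processing " + orderId);
    }
}

// The instance-flavored alternative: the dependency is visible and swappable.
public interface ILogger
{
    void Log(string message);
}

public class HonestOrderProcessor
{
    private readonly ILogger _logger;

    public HonestOrderProcessor(ILogger logger)
    {
        _logger = logger; // the dependency is declared up front
    }

    public void Process(string orderId)
    {
        _logger.Log("Processing " + orderId);
    }
}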

Recovery

If you’ve been reading this and not nodding along, you’ve probably experienced some indignation. Please don’t take this as an assault on the choices you’ve made as a programmer. I’m talking from experience here. Ben Franklin famously said “Experience is the best teacher, but a fool will learn from no other.” I am that fool. I’ve struggled with ugly messes that I (and others) have created via the static concept, and I’ve hit that rock bottom state.

The transition away isn’t as bad as you think. I’ve now regularly been creating entire programs and applications that have no static methods. Zip, zero. It isn’t impossible. Heck, it isn’t even hard. It just requires up front reasoning about your dependency graph, but that reasoning is a skill that sharpens with practice. Like unit testing or TDD, it’s a skill that seems a little unnatural, especially if you started in a procedural language, but it does come, and faster than you think. Before you know it, you’ll be loving your static-free existence and offering counseling to those who are walking your current path through a static nightmare.

Resumes and Alphabet Soup

I was reading a blog post by Scott Hanselman this morning. The main premise was a bemused lampooning of the ubiquitous tendency of techies to create a resume “alphabet soup” and a suggestion that Stack Overflow Careers is introducing a better alternative.

I was going to comment on this post inline, but I discovered that I had enough to say to create my own post about it. It seems as though this state of affairs for techies is, ironically, the product of living in a world created by techies. What I mean is that over the last several decades, we have made it our life’s goal to automate processes and improve efficiency. Those chickens have come home to roost in the form of automated resume searching, processing, and recommending.

Alphabet Soup Belongs in a Bowl

Having an item at the bottom of a resume that reads “XML, HTML, JSON, PHP, XSLT, HTTP, WPF, WCP, CSS… etc” is actually quite efficient toward its purpose, in the same way that sticking a bunch of keywords in a web page’s hidden metadata is efficient. It violates the spirit of the activity by virtue of the fact that it’s so good at gaming it as to be “too good”. As a job seeker, if I want to create the largest opportunity pool, it would stand to reason that I should include every three- and four-character combination of letters in existence somewhere in the text of my resume (“Strongly proficient in AAA, AAB, AAC, …ZZZZ”). And, while most of these ‘technologies’ don’t exist and I probably haven’t used most of the ones that do, this will cast a wider net for my search than not including this alphabet soup. I can always sort out the details later once my foot is in the door. Or, so the thinking probably goes (I’m not actually endorsing, in any way, resume exaggeration or the spamming of the resume machine — just using first person to illustrate a point).

We, the techies of the world, have automated the process of matching employers with employees, and now, we are forced to live in a world where people attempt to game the system in order to get the edge. So, what’s the answer? A declaration that companies should stop hiring this way? Maybe, but that seems unlikely. A declaration that people should stop creating their resumes this way because it’s silly? That seems even more unlikely because the ‘silly’ thing is not doing something that works, and the alphabet soup spam works.

I think that this programmer-created problem ought to be solved with better programming. What we’ve got here is a simple text matching algorithm. As a hiring authority, I program my engine (or have the headhunters and sites program it for me) to do a procedure like “give me all the resumes that contain XML, HTML, CSS, and WPF”. I then get a stack of matching resumes from people whose experience in those technologies may range from “expert” to “I think I read about that once on some website” and it’s up to me to figure out which resume belongs to which experience level, generally via one or more interviews.
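To make that concrete, the straight-match procedure amounts to something like this (the Resume type and the term list are invented placeholders):

using System.Collections.Generic;
using System.Linq;

public class Resume
{
    public string CandidateName { get; set; }
    public string Text { get; set; }
}

public static class ResumeSearch
{
    // A resume "matches" if every required acronym appears somewhere in its
    // text, regardless of whether the candidate is an expert or once skimmed
    // a blog post on the topic.
    public static IEnumerable<Resume> FindMatches(
        IEnumerable<Resume> resumes, IEnumerable<string> requiredTerms)
    {
        return resumes.Where(resume => requiredTerms.All(term => resume.Text.Contains(term)));
    }
}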

So, maybe the solution here is to create a search algorithm that does better work. If I were gathering requirements for such a thing, I would consider that I have two stakeholders: company and employee. These two stakeholders share a common goal, “finding a good match”, and also have some mildly conflicting goals — company wants lots of applicants and few other companies and employee wants lots of companies and few other prospective employees. It is also worth considering that the stakeholders may attempt to deceive one another in pursuit of the goal (resume padding on one end or misrepresenting the position on the other end).

With that (oversimplified) goal enumeration in mind, I see the following design goals:

  1. Accuracy for employers. The search engine returns candidates that are better than average at meeting needs.
  2. Accuracy for employees. Engine does not pair them with employers looking for something different than them, thus putting them in position to fail interviews and waste time.
  3. Easy-to-use, narrative inputs for employees. You type up a summary of your career, interests, experience, projects, etc., and that serves as input to the algorithm – you are not reduced to a series of acronyms.
  4. Easy-to-use, narrative inputs for employers. You type up a list of the job’s duties at present, anticipated duties in the future, career development path, success criteria, etc., and that serves as input to the algorithm.
  5. Opacity/Double blind. Neither side understands transparently the results from its inputs. Putting the text “XML” on a resume has an indeterminate effect on the likelihood of getting a job with an employer that wants employees with XML knowledge. This mitigates ‘gaming’ of the system (similar in concept to how search engines guard their algorithms).

Now, in between the narrative input of both sides, the magic happens and pairings are made. That is where we as the techies come in. This is an ambitious project and not an easy one, but I think it can and should happen. Prospective employees tell a story about their career and prospective employers tell a story about a hypothetical employee, and matches are created. Nowhere in this does a dumb straight-match of acronym to acronym occur, though the system would take into account needs and skills (i.e., a position primarily developing in C++ would yield candidates primarily with good C++ backgrounds).

Anyway, that’s just what occurred to me in considering the subject, and it’s clearly far too long for a blog comment. I’m spitballing this here, so comments and opinions are welcome. Also, if this already exists somewhere, please point me at it, as I’d be curious to see it.

(Alphabet soup photo is linked from this post which, by the way, I fully agree with.)