DaedTech

Stories about Software


Wasted Talent: The Tragedy of the Expert Beginner

Back in September, I announced the Expert Beginner e-book. In that same post, I promised to publish the conclusion to the series around year-end, so I’m now going to make good on that promise. If you like these posts, you should definitely give the e-book a look, though. It’s more than just the posts strung together — it shuffles the order, changes the content a touch, and smooths them into one continuous story.

But, without further ado, the conclusion to the series:

The real, deeper sadness of the Expert Beginner’s story lurks beneath the surface. The sinking of the Titanic is sharply sad because hubris and carelessness led to a loss of life, but the sinking is also sad in a deeper, more dull and aching way because human nature will cause that same sort of tragedy over and over again. The sharp sadness in the Expert Beginner saga is that careers stagnate, culminating in miserable life events like dead-end jobs or terminations. The dull ache is the endlessly mounting deficit between potential and reality, aggregated over organizations, communities, and even nations. We live in a world of “ehhh, that’s probably good enough,” or, perhaps more precisely, “if it ain’t broke, don’t fix it.”

There is no shortage of literature on the subject of “work-life balance,” nor of people seeking to split the difference between the stereotypical, ruthless executive with no time for family and the “aim low,” committed family type that pushes a mop instead of following his dream, making it so that his children can follow theirs. The juxtaposition of these archetypes is the stuff that awful romantic dramas starring Katherine Heigl or Jennifer Lopez are made of. But that isn’t what I’m talking about here. One can intellectually stagnate just as easily working eighty-hour weeks or intellectually flourish working twenty-five-hour ones.

I’m talking about the very fabric of Expert Beginnerism as I defined it earlier: a voluntary cessation of meaningful improvement. Call it coasting or plateauing if you like, but it’s the idea that the Expert Beginner opts out of improvement and into permanently resting on his (often questionable) laurels. And it’s ubiquitous in our society, in large part because it’s encouraged in subtle ways. To understand what I mean, consider institutions like fraternities and sororities, institutions granting tenure, multi-level marketing outfits, and often corporate politics with a bias toward rewarding loyalty. Besides some form of “newbie hazing,” what do these institutions have in common? Well, the idea that you put in some furious and serious effort up front (pay your dues) to reap the benefits later.

This isn’t such a crazy notion. In fact, it looks a lot like investment and saving the best for last. “Work hard now and relax later” sounds an awful lot like “save a dollar today and have two tomorrow,” or “eat all of your carrots and you can enjoy dessert.” For fear of getting too philosophical and prying into religion, this gets to the heart of the notion of Heaven and the Protestant Work Ethic: work hard and sacrifice in the here and now, and reap the benefits in the afterlife. If we aren’t wired for suffering now to earn pleasure later, we certainly embrace and inculcate it as a practice, culturally. Who is more a symbol of decadence than the procrastinator–the grasshopper who enjoys the pleasures of the here and now without preparing for the coming winter? Even as I’m citing this example, you probably summon some involuntary loathing for the grasshopper for his lack of vision and sobriety about possible dangers lurking ahead.

A lot of corporate culture creates a manufactured, distorted version of this with the so-called “corporate ladder.” Line employees get in at 8:30, leave at 5:00, dress in business-casual garb, and usually work pretty hard or else. Managers stroll in at 8:45 and sometimes cut out a little early for this reason or that. They have lunches with the corporate credit card and generally dress smartly, but if they have to rush into the office, they might be in jeans on a Thursday and that’s okay. C-level executives come and go as they please, wear what they want, and have you wear what they want. They play lots of golf.

There’s typically not a lot of illusion that those in the positions of power work harder than line employees in the sense that they’re down operating drill presses, banging out code, doing data entry, crunching numbers, etc. Instead, these types are generally believed to be the ones responsible for making the horrible decisions that no one else would want to make and never being able to sleep because they are responsible for the business 24/7. In reality, they probably whack line employees without a whole lot of worry and don’t really answer that call as often as you think. Life gets sweeter as you make your way up, and not just because you make more money or get to boss people around. The C-level executives…they put in their time working sixty-hour weeks and doing grunt work specifically to get the sweet life. They earned it through hard work and sacrifice. This is the defining narrative of corporate culture.

But there’s a bit of a catch here. When we culturally like the sound of a narrative, we tend to manufacture it even when it might not be totally realistic. For example, do we promote a programmer who pours sixty hours per week into his job for five years to manager because he would be good at managing people or because we like the “work hard, get rewarded” story? Chicken or egg? Do we reward hard work now because it creates value, or do we manufacture value by rewarding it? I’d say, in a lot of cases, it’s fairly ingrained in our culture to be the latter.

In this day and age, it’s easy to claim that my take here is paranoid. After all, the days of fat pensions and massive union graft have fallen by the wayside, and we’re in some free-market, meritocratic renaissance, right? Well, no, I’d argue. It’s just that the game has gotten more distributed and more subtle. You’ll bounce around between organizations, creating the illusion of general market merit, but in reality, there is a form of subconscious collusion. The main determining factor in your next role is your last role. Your next salary will probably be five to ten percent more than your last one. You’re on the dues-paying train, even as you bounce around and receive nominally different corporate valuations. Barring aberration, you’re working your way, year in and year out, toward an easier job with nicer perks.

But what does all of this have to do with the Expert Beginner? After all, Expert Beginners aren’t CTOs or even line managers. They’re, in a sense, just longer-tenured grunts that get to decide what source control or programming language to use. Well, Expert Beginners have the same approach, but they aim lower in the org chart and have a higher capacity for self-delusion. In a real sense, management and executive types are making an investment of hard work for future Easy Street, whereas Expert Beginners are making a more depressing and less grounded investment in initial learning and growth for future stagnation. They have a spasm of marginal competence early in their careers and coast on the basis of this indefinitely, with the reward of not having to think or learn and having people defer to them exclusively because of corporate politics. As far as rewards go, this is pretty Hotel California. They’ve put in their time, paid their dues, and now they get to reap only the meager rewards of intellectual indolence and ego-fanning.

In terms of money and notoriety, there isn’t much to speak of either. The reward they receive isn’t a Nobel Prize or a world championship in something. It’s not even a luxury yacht or a star on the Walk of Fame. We have to keep getting more modest. It’s not a six-bedroom house with a pool and a Lamborghini. It’s probably just a run-of-the-mill upper middle class life with one nice vacation per year and the prospect of retiring and taking that trip they’ve always wanted, a visit to Rome and Paris. They’ve sold their life’s work, their historical legacy, and their very existence for a Cadillac, a nice set of woods and irons, a tasteful ranch-style house somewhere warm, and a trans-Atlantic flight or two in retirement. And that–that willingness to have a low ceiling and that short-changing of one’s own potential–is the tragedy of the Expert Beginner.

Expert Beginners are not dumb people, particularly given that they tend to be knowledge workers. They are people who started out with a good bit of potential–sometimes a lot of it. They’re the bowlers who start at 100 and find themselves averaging 150 in a matter of weeks. The future looks pretty bright for them right up until they decide not to bother going any further. It’s as if Michael Jordan had decided that playing some pretty good basketball in high school was better than what most people did, or if Mozart had said, “I just wrote my first symphony, which is more symphonies than most people write, so I’ll call it a career.” Of course, most Expert Beginners don’t have such prodigious talent, but we’ll never hear about the accomplishment of the rare one that does. And we’ll never hear about the more modest potential accomplishments of the rest.

At the beginning of the saga of the Expert Beginner, I detailed how an Expert Beginner can sabotage a group and condemn it to a state of indefinite mediocrity. But writ large across a culture of “good enough,” the Tragedy of the Expert Beginner stifles accomplishments and produces dull tedium interrupted only by midlife crises. En masse in our society, Expert Beginners will instead be taking it easy and counting themselves lucky that their days of proving themselves are long past. And a shrinking tide lowers all boats.


Faking An Interface Mapping in Entity Framework

Entity Framework Backstory

In the last year or so, I’ve been using Entity Framework in the new web-based .NET solutions for which I am the architect. I’ve had some experience with different ORMs on different technology stacks, including some roll-your-own ones, but I have to say that Entity Framework and its integration with Linq in C# is pretty incredible. Configuration is minimal, and the choice of creating code, model, or database first is certainly appealing.

One gripe that I’ve had, however, is that Entity Framework wants to bleed its way up past your infrastructure layer and permeate your application. The path of least resistance, by far, is to let Entity Framework generate entities and then use them throughout the application. This is often described, collectively, as “the model,” and it impedes your ability to layer the application. The most common form of this that I’ve seen is simply to have an Entity Framework context and allow it to be accessed directly in controllers, code-behind, or view models. In a way, this is actually strangely reminiscent of Active Record, except that the in-memory joins and navigation operations are a lot more sophisticated. But the overarching point is still the same — not a lot of separation of concerns, and avoidance of domain modeling in favor of letting the database ooze its way into your code.
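To make the shape of this concrete, here’s a minimal sketch of the pattern I mean (the controller and entity names here are hypothetical):

public class CustomerController : Controller
{
    public ActionResult Index()
    {
        // The path of least resistance: new up the EF context right in the
        // controller and hand the generated entities straight to the view,
        // coupling the presentation layer to the persistence model.
        using (var context = new PlaypenDatabaseEntities())
        {
            var customers = context.Customers.ToList();
            return View(customers);
        }
    }
}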


I’m not a fan of this approach, and I very much prefer applications that are layered or otherwise separated in terms of concerns. I like there to be a presentation layer for taking care of presenting information to the user, a service layer to serve as the application’s API (for flexibility and acceptance testing), a domain layer to handle business rules, and a data access layer to manage persistence (Entity Framework actually takes care of this layer as-is). I also like the concept of “persistence ignorance,” where the rest of the application doesn’t have to concern itself with where persistent data is stored — it could be SQL Server, Oracle, a file, a web service… whatever. This renders the persistence model an ancillary implementation detail, which, in my opinion, is what it should be.

A way to accomplish this is to use the “Repository Pattern,” in which higher layers of the application are aware of a “repository,” an abstraction that makes entities available. Where they come from isn’t any concern of those layers — they’re just there when they’re needed. But in a lot of ways with Entity Framework, this is sort of pointless. After all, if you hide the EF-generated entities inside of a layer, you don’t get the nice query semantics. If you want the automation of Entity Framework and the goodness of converting Linq expressions to SQL queries, you’re stuck passing the EF entities around everywhere without abstraction. You’re stuck leaking EF throughout your code base… or are you?

Motivations

Here’s what I want. I want a service layer (and, of course, presentation layer components) that is in no way whatsoever aware of Entity Framework. In the project in question, we’re going to have infrastructure for serializing to files and calling out to web services, and we’re likely to do some database migration and make use of NoSQL technologies. It is a given that we need multiple persistence models. I also want at least two different versions of the DTOs: domain objects in the domain layer, hidden under the repositories, and model objects for binding in the presentation layer. In MVC, I’ll decorate the models with validation attributes and do other uniquely presentation layer things. In the WPF world, these things would implement INotifyPropertyChanged.

Now, it’s wildly inappropriate to do this to the things generated by Entity Framework and to have the domain layer (or any layer but the presentation layer) know about these presentation layer concepts: MVC validation, WPF GUI events, etc. So this means that some kind of mapping from EF entities to models and vice versa is a given. I also want rich domain objects in the domain layer for modeling business logic. So that means that I have two different representations of any entity in two different places, which is a classic case for polymorphism. The question then becomes “interface implementation or inheritance?” And I choose interface.

My reasoning here is certainly subject to critique, but I’m a fan of creating an assembly that contains nothing but interfaces and having all of my layer assemblies take a dependency on that. So, the service layer, for instance, knows nothing about the presentation layer, but the presentation layer also knows nothing about the service layer. Neither one has an assembly reference to the other. A glaring exception to this is DTOs (and, well, custom exceptions). I can live with the exceptions, but if I can eliminate the passing around of vacuous property bags, then why not? Favoring interfaces also helps with some of the weirdness of having classes in various layers inherit from things generated by Entity Framework, which seems conspicuously fragile. If I want to decorate properties with attributes for validation and binding, I have to count on EF to make those properties virtual and then make sure to override them in the derived classes. Interfaces it is.
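To sketch the idea (ICustomer and its members are hypothetical stand-ins), the types assembly holds nothing but the interface, and the presentation layer implements it with its own concerns layered on top:

// In the Poc.Types assembly: nothing but the interface.
public interface ICustomer
{
    int Id { get; set; }
    string Name { get; set; }
}

// In the MVC presentation layer: a model implementing that interface and
// decorated with validation attributes no other layer needs to know about.
public class CustomerModel : ICustomer
{
    public int Id { get; set; }

    [Required]
    [StringLength(50)]
    public string Name { get; set; }
}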

Get Ur Dun!

So that’s great. I’ve decided that I’m going to use ICustomer instead of Customer throughout my application, auto-generating domain objects that implement an interface. That interface will be generated automatically and used by the rest of the application, including with full-blown query expression support that gets translated into SQL. The only issue with this plan is that every Google search that I did seemed to suggest this was impossible or at least implausible: EF doesn’t support that, Erik, so give it up. Well, I’m nothing if not inappropriately stubborn when it comes to bending projects to my will. So here’s what I did to make this happen.

I have three projects: Poc.Console, Poc.Domain, and Poc.Types. In Domain, I pointed an EDMX at my database and let ‘er rip, generating the T4 for the entities and also the context. I then copied the entity T4 template to the Types assembly, where I modified it. In Types, I added an “I” to the name of the class, changed it to be an interface instead of a class, removed all constructor logic, removed all complex properties and navigation properties, and removed all visibility modifiers. In Domain, I modified the entities to get rid of complex/navigation properties and added an implementation of the interface of the same name. So at this point, every Foo entity now implements an identical IFoo interface. I made sure to leave Foo as a partial because these things will become my domain objects.
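Sticking with the hypothetical ICustomer from the earlier sketch, the modified entity template in Domain now emits something along these lines:

// Generated into Poc.Domain: scalar properties only, implementing the
// interface of the same name from Poc.Types, and left partial so that it
// can be fleshed out into a rich domain object.
public partial class Customer : ICustomer
{
    public int Id { get; set; }
    public string Name { get; set; }
}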

With this building, I wrote a quick repository POC. To do this, I installed the NuGet package for System.Linq.Dynamic, which is a really cool utility that lets you turn arbitrary strings into Linq query expressions. Here’s the repository implementation:

public class Repository<TEntity, TInterface> where TEntity : class, new() where TInterface : class
{
    private PlaypenDatabaseEntities _context;

    /// <summary>
    /// Initializes a new instance of the Repository class.
    /// </summary>
    public Repository(PlaypenDatabaseEntities context)
    {
        _context = context;
    }

    public IEnumerable<TInterface> Get(Expression<Func<TInterface, bool>> predicate = null)
    {
        IQueryable<TEntity> entities = _context.Set<TEntity>();
        if (predicate != null)
        {
            // Render the interface-typed expression as a string and strip the
            // parameter prefix so that ParseLambda will accept it.
            var predicateAsString = predicate.Body.ToString();
            var parameterName = predicate.Parameters.First().ToString();
            var parameter = Expression.Parameter(typeof(TInterface), parameterName);
            string stringForParseLambda = predicateAsString
                .Replace(parameterName + ".", string.Empty)
                .Replace("AndAlso", "&&")
                .Replace("OrElse", "||");

            // Re-parse the lambda against the concrete entity type so that
            // Entity Framework can translate it to SQL.
            var newExpression = System.Linq.Dynamic.DynamicExpression.ParseLambda<TEntity, bool>(stringForParseLambda, new[] { parameter });
            entities = entities.Where(newExpression);
        }

        foreach (var entity in entities)
            yield return entity as TInterface;
    }
}

Here’s the gist of what’s going on. I take an expression of IFoo and turn it into a string. I then figure out the parameter’s name so that I can strip it out of the string, since that’s the form that makes ParseLambda happy. Along the same lines, I also need to replace “AndAlso” and “OrElse” with “&&” and “||” respectively; the former is how the combiners render when a compiled expression is turned into a string, but ParseLambda looks for the more traditional operators. Once it’s in a pleasing form, I parse it as a lambda, but with type Foo instead of IFoo. That becomes the expression that EF will use. I then query EF and cast the results back to IFoos.
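As a usage sketch with the hypothetical Customer/ICustomer pair, a caller would write something like this:

// The caller queries purely in terms of the interface...
var repository = new Repository<Customer, ICustomer>(context);
var matches = repository.Get(c => c.Name == "Steve" && c.Id > 10);

// ...and internally, ((c.Name == "Steve") AndAlso (c.Id > 10)) becomes
// ((Name == "Steve") && (Id > 10)), which ParseLambda re-types against
// Customer so that Entity Framework can translate it to SQL.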

Now, I’ve previously blogged that casting is a failure of polymorphism. And this is like casting on steroids and some hallucinogenic drugs for good measure. I’m not saying, “I have something the compiler thinks is an IFoo but I know is a Foo,” but rather, “I have what the compiler thinks of as a non-compiled code scheme for finding IFoos, but I’m going to mash that into a non-compiled scheme for finding Foos in a database, force it to happen, and hope for the best.” I’d be pretty alarmed if not for the fact that I’m generating the interface and the implementation at the same time, and that any other implementation I define and pass in must have any and all properties that Entity Framework is going to want.

This is a proof of concept, and I haven’t lived with it yet. But I’m certainly going to try it out and possibly follow up with how it goes. If you’re like me and were searching for the Holy Grail of how to have DbSet<IFoo> or how to use interfaces instead of POCOs with EF, hopefully this helps you. If you want to see the T4 templates, drop a comment and I’ll put up a gist on GitHub.

One last thing to note is that I’ve only tried this with a handful of lambda expressions for querying, so it’s entirely possible that I’ll need to do more tweaking for some edge case scenarios. I’ve tried permutations of conjunction, disjunction, negation, and numerical expressions, but what I’ve done is hardly exhaustive.

Happy abstracting!

Edit: Below are the gists:

  • Entities.Context.tt for the context. Modified to have DbSets of IWhatever instead of Whatever.
  • Entities.tt for the actual, rich domain objects that reside beneath the repositories. These guys have to implement IWhatever.
  • DTO.tt contains the actual interfaces. I modified this T4 template not to generate navigation properties at all because I don’t want that kind of rich relationship definition to be part of an interface between layers.


Decision Points in Programming

I have a sort of personality quirk that causes me to constantly play what I describe to others as the “what-if game.” This is where I have some kind of oddball thought about altering something that we take for granted and imagine how it plays out. Lest you think that I’m engaging in fatuous self-aggrandizement, I’m not talking about some kind of fleeting stoner thought like “what if I had like eight million Doritos and also X-ray vision?!?” I mean that I actually really start to think strange things through in detail.

For example, not too long ago I was in an elevator and thought to myself, “would I ride this elevator if I knew that there was a 1 in 10,000 chance that the elevator would plummet to the bottom of the elevator shaft?” I thought that I would. I was going up a ways and the odds were in my favor. I then thought to myself that this was a rational choice but viscerally insane — why take the chance?

This led to the thought “what if elevators around the world just suddenly had those odds of that outcome for some reason, and it was intrinsic to the nature of elevators?” Meaning, nothing we could do would possibly fix it. Elevators are now synonymous with 1 in 10,000 plummets. How does the world react?

It’s a wild thing to think about, but the predictive possibilities and analysis are endless. First of all, we’ve got all of these tall buildings, so it’s not as though we’d just leave everything in them and become brownstone dwellers. Some brave souls would go up to get things from these buildings and keep playing the odds. The property value of high-rises would immediately plummet, and you’d probably invert the real-estate structure nearly overnight, with suburban/country home prices skyrocketing and swanky downtown high-rises becoming where extremely poor people and drug addicts lived (who else would routinely brave the odds?). I think the buildings would still stand because of the sheer amount of demolition effort required to knock them down and the fact that we actually develop quite a tolerance for risky things (like driving to work every day).

There’d also be odd anthropological effects. I’d imagine that a whole generation of teenage thrill seekers and death defiers would start doing elevator joy rides to prove their mettle. People would develop all kinds of cargo cult ways to stand or sit in the elevators with a mind toward simply surviving the plummets. In fact, perhaps humankind would just become really good at making the plummets survivable. Politically, I’d imagine that a huge wedge issue debate would emerge about freedom to ride elevators versus the sanctity of life or something. I could go on forever about this, but I’ll have mercy and stop now.

The point is that I take these mental trips several times per day, considering a whole variety of topics. Most of the thoughts that emerge are bizarre and beneficial only as exercises in creativity, like the elevator example, but some are genuine ideas for reboots in thinking about our craft. I find the exercise of indulging these mental divergences and quasi-daydreams to be a good way to get the subconscious brain working on perhaps more immediate problems.

So if you’re up for it, I invite you to have a go at this sort of thinking, but perhaps in a more structured sense. At times in the history of programming, decisions were made or ideas proposed that wound up having a profound effect on the industry. Imagine a world where these had gone differently:

  1. Tony Hoare introduced what he later called “the billion-dollar mistake” when he implemented the concept of the null reference. But what if there were no null?
  2. A lot of what we do to this day as programmers has its roots in decisions made for the typewriter: for example, the QWERTY keyboard and using CR/LF for end of line. What if these conventions had been different when the computer started to take off?
  3. Edsger Dijkstra famously swung the tide against the use of GOTO as a programming language construct with a seminal paper. What if it had popularly stuck around to this day and GOTO statements were still something we thought about a lot?
  4. Of the three programming paradigms (structured, object-oriented, and functional), functional is the oldest, but it lay dormant for 40 years or so before gaining serious popularity today. What would the world look like if it had been the most popular from the get-go and stayed that way?
  5. C++ really took OOP mainstream, but it did so in a language that was effectively a superset of C, a non-OOP language. This allowed for the continuation of a very procedural style of programming in an OOP language. What if that cut had just been made cleanly?
  6. What if the most popular object oriented languages didn’t have the concept of “static” and everything had to belong to an instance?
  7. What if JavaScript had been carefully planned in an enterprise-y way, instead of thrown together in 10 days?
  8. If disk space had always been as cheap as it is now and the need for stored information rather than calculation had been higher, would the RDBMS as we know it ever have become popular?

Thinking through these things might just be a random exercise in imagination. But, who knows, it may give you an oblique solution to a problem you’ve been mulling over or a different philosophical approach to some aspect of programming. Things that we do, even highly conventional or traditional ones, are always fair game for reevaluation.


Create a Windows Share on Your Raspberry Pi

If I had to guess at this blog’s readership demographic, I’d imagine that the majority of readers work mainly in the .NET space and within the Microsoft technology ecosystem in general. My background is a bit more eclectic, and, before starting with C# and WPF full time in 2010, I spent a lot of years working with C++ and Java in Linux. As such, working with a Raspberry Pi is sort of like coming home in a way, and I thought I’d offer up some Linux goodness for any of you who are mainly .NET but interested in the Pi.

One thing that you’ve probably noticed is that working with files on the Pi is a bit of a hassle. Perhaps you use FTP and something like Filezilla to send files back and forth, or maybe you’ve gotten comfortable enough with the git command line in Linux to do things that way. But wouldn’t it be handy if you could simply navigate to the Pi’s files the way you would a shared drive in the Windows world? Well, good news — that’s what Samba is for. It allows your Linux machines to “speak Windows” when it comes to file shares.

Here’s how to get it going on your Pi. This assumes that you’ve set up SSH already.

  1. SSH into your Raspberry Pi and type “sudo apt-get install samba” which will install samba.
  2. Type “y” and hit enter when the “are you sure” prompt comes up telling you how much disk space this will take.
  3. Next do a “sudo apt-get install samba-common-bin” to install a series of utilities and add-ons to the basic Samba offering that are going to make working with it way easier as you use it.
  4. Now, type “sudo nano /etc/samba/smb.conf” to edit, with elevated permissions, the newly installed samba configuration file.
  5. If you navigate to your Pi’s IP address (start, run, “\\piip”), you’ll see that it comes up but contains no folders. That’s because samba is running but you haven’t yet configured a share.
  6. Navigate to the line in the samba configuration file with the heading “[homes]” (line 244 at the time of this writing), and then find the setting underneath that says “browseable = no”. This configuration says that the home directories on the Pi are not browseable. Change it to yes, save the config file, and observe that refreshing your file explorer window now shows a folder: “homes.” Cool! But don’t click on it because that won’t work yet.
  7. Now, go back and change that setting back under homes because we’re going to set up a share a different way for the Pi. I just wanted to show you how this worked. Instead of tweaking one that they provide by default, we’re going to create our own.
  8. Add the following to your smb.conf file, somewhere near the [homes] share (see the reconstructed snippet just after this list).
  9. Here’s what this sets up. The name of the share is going to be “pi” and we’re specifying that it can be read, written, and browsed. We’re also saying that guest is okay (anyone on the network can access it) and that anyone on the network can create files and directories. Just so you know, this is an extremely permissive share that would probably give your IT/security guy a coronary.
  10. Now, go refresh your explorer window on your Windows machine, and voila!
  11. If some of the changes you make to samba don’t seem to go through, you can always do “sudo service samba restart” to stop and restart samba to make sure your configuration changes take effect. That shouldn’t have been strictly necessary for this tutorial, but it’s handy to know and always a good first troubleshooting step if changes don’t seem to go through.
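For reference, since the snippet from step 8 was originally an image, the share described in step 9 would look something like this in smb.conf (a plausible reconstruction rather than the exact original):

[pi]
   comment = Pi share
   path = /home/pi
   browseable = yes
   writable = yes
   guest ok = yes
   create mask = 0777
   directory mask = 0777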

And that’s it. You can now edit files on your Pi to your heart’s content from within Windows as well as drag-and-drop files to/from your Pi, just as you would with any Windows network share. Happy Pi-ing!



Years of Lessons Learned from Home Automation

I’ve had three variations of my home automation setup. The first incarnation was a series of Linux command line utilities and cron jobs. There was some vague intention of a GUI, but that never really materialized. The second was a very enterprise-y J2EE implementation that involved Spring, MongoDB, layered architecture, and general wrangling with Java. The current and most recent reboot involves nodes, in a nod to Service Oriented Architecture and the “Internet of Things.” As I mentioned in my last post, I’m turning a Raspberry Pi into a home-automation-controlling REST endpoint and then letting access and interaction happen in a more distributed, ad-hoc fashion.

The flow of these applications seems to reflect the trajectory of my career from entry level developer to architect — from novice and hobbyist to experienced professional and, well, still hobbyist. And I think that, to an extent, they also reflect the times and trends in technology. It’s interesting to reflect on it.

When I started out as a programmer in the working world, I was doing a lot in the Linux world with C and C++. In that world, there was no cred to writing any kind of GUI — it was all about being close to the metal, and making things work behind the scenes. GUIs were for the faint of heart. I wrote drivers and kernel space code and automated various interactions between hardware and software. This mentality was carried over into the world of hobby when I discovered home automation. X10 was the province of hobbyist electrical engineers who wrote code out of necessity, and I fell in nicely with this approach. It was all about banging away, hacking, and making things work. Architecture, planning, testing, deployment strategies, etc… who cares? Making it work was all that mattered. I was a beginner.

As my career wound on, I started doing more and different kinds of programming. I found my way into web development with Java, did things in the .NET space, worked with databases, and started to become interested in architecture, software processes, and honing my craft. With my newfound knowledge of a breadth of technologies and better software development approaches, I decided on a home automation reboot. I chose Linux and Java to keep the budget as shoestring as possible. For a server, I could use the machine I took with me to college — a 400 MHz P2 processor and 384 meg of RAM. The hardware, OS, and software were thus all free, and all I had to do was pop for the X10 modules at $10-$20 apiece. Not too shabby.

I was cost conscious, and I had a technical vision for the architecture. I knew that if I created a web application on the server that what I did would be accessible from anywhere: Windows computers, Linux computers, even cell phones (which were a lot more limited as nodes in a network 5-6 years ago when I started laying this out). Java was a good choice because it gave me a framework to integrate all of the different functionality that I could imagine. And I imagined plenty of it.

There was no shortage of gold plating. Part of this was because I was interested in learning new technologies as long as I was doing hobby work and part of this was because I hadn’t yet learned the value of limiting myself to the minimum set of features needed to get going. I had advanced technically enough to see the value in architecture and having a plan for how I’d handle future added features, but I hadn’t advanced enough to keep the system flexible without putting more in up front than I needed. A web page with a link for turning a lamp on may not need data access, domain, service, and presentation layers. And, while I had grand plans to integrate things like home inventory management, recipe tracking, a family calendar and more, those never actually materialized due to how busy I tend to be. But I was practicing my craft and teaching myself these concepts by exploring them, so I don’t look back ruefully. Lesson learned.

Now, I’m rebooting. My old P2 machine is dying slowly but surely, and I recently purchased a lake house where I want to replicate my setup. I don’t have another ancient machine, and it’s time to get more repeatable anyway. A minimal REST endpoint on a Raspberry Pi is cheap and repeatable, and it lets me build the system in my house(s) more incrementally and flexibly. If I want to use WPF to build a desktop app for controlling the thing, then great. If I want to use PHP or Java on a server, then also great. ASP MVC, whatever. Anything that can speak REST will work, and everything speaks REST.

Maybe in another three years, I’ll do the fourth reboot and think of how silly I was “back then” in 2013. But for now, I’ll take the lessons that I’ve learned in my reboots and reflect. I’ve learned that software is about solving problems for people, not just for the sake of solving the problem. A cron job that I can tweak turns my lights on and off, but it does nothing to make the system less weird and confusing for my non-technical girlfriend. I’ve learned that building more than what you need right now is a guarantee that you’ll have more complexity than you need and less benefit. I’ve learned that a system composed of isolated, modular components is better than a monolithic juggernaut that can handle everything. And, most importantly, I’ve learned that you’ve never really got it all figured out; whatever grand plan you have right now is going to need constant care and refinement.