DaedTech

Stories about Software

Top Heavy Department Growth

I’ve been somewhat remiss in answering reader questions lately, largely because I’ve chosen to focus on my upcoming book.  Nevertheless, I apologize for the lapse, and I do appreciate all the questions you folks send my way.  I’ll try to compensate today with this post about organizations engaging in top-heavy department growth.

I’ll paraphrase this reader question because the specificity of the titles and information involved could make it sensitive if I didn’t take a couple of liberties.

I read your article about architect title over-specialization.  I’m a software developer with senior-level experience.

Recently, my company has created “levels” above me.  I used to have only a dev manager above me, but now the organization has brought in both new team leads under the dev manager and architects under a different manager.  Both take precedence over the existing developers.  These people now have the authority to tell us what to do, and they get to choose what they want to work on, leaving us with the leftovers.

I feel as if I’m being promoted downhill.  Can you please advise?

How Companies Expand

If you’re up for it, I’ll offer a good bit of background reading to flesh out the terms.  If not, I’ll furnish minimal definitions here for reference.  A while back, I wrote a post describing the company hierarchy.  That post contains excerpts from my upcoming book, which you can pre-order and read on Leanpub.

Picture the average company as a pyramid.  At the top, in executive roles, you have opportunistic individuals who define (and violate) the rules and culture of the company.  Then, in the middle, sit the idealists, who guzzle the company Kool-Aid and ask for more.  Finally, at the bottom toil the pragmatists, who roll their eyes at the company but put up with it for lack of better options.

Significantly, pyramids retain their stability by maintaining their shape.  Thus the most stabilizing growth pattern involves rewarding (over-promoting) loyal pragmatists and hiring a bunch of grunts beneath them.  If you think of an existing pyramid that needs to get larger, you wouldn’t heap stuff on top.  Instead, you’d build from the bottom.  You’d pull some senior developers, make them architects or team leads to reward them for hanging around, and hire a bunch of new grunts to report to them.

Read More

Using NDepend to Avoid Technical Debt

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.

The term “technical debt” has become ubiquitous in the programming world.  In the most general sense, it reflects the idea that you’re doing something the easy way in the moment, but that you’re going to pay for it, with interest, in the long run.  Conceived this way, to avoid technical debt would mean to avoid taking out these “time loans” in general.

There’s a subtle bit of friction, however, when using the (admittedly very helpful) concept of technical debt to communicate with business stakeholders.  For them, carrying debt is generally standard operating procedure and often a tool, and it doesn’t have quite the same connotation.  When developers talk about incurring technical debt, it’s overwhelmingly in the context of “we’re doing something ugly and dirty to get this thing shipped, and man, are we going to pay for it later.”  That’s a far cry from the “I’m going to finance a fleet of trucks so that we can expand our delivery operation regionally” sort of borrowing that an accountant or executive might understand.  Taking on technical debt is colloquially more akin to borrowing money from a guy who breaks thumbs.

The reason for this slight dissonance between the usages is that technical debt in codebases is a lot more likely to be incurred unwittingly (or improvidently).  Why that happens could make up the subject of an entire post, but suffice it to say that developers are often shielded from business decisions and consequences.  That makes it harder for them to weigh all the factors of such a tradeoff, a job that often falls to people with titles like “business analyst” or “project manager.”

In light of this, let’s talk about avoiding the “we break thumbs” variety of tech debt, and how NDepend can help.  This sort of tech debt takes the form of “things you realize probably aren’t great, but you might not realize how long-term damaging they are.”
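As a purely hypothetical illustration of that category (the class names and the 7% tax rate below are invented for the example), consider the same business rule copy-pasted into two places.  It feels harmless, even expedient, on the day you write it, but every future change to the rule now carries interest in the form of a codebase-wide hunt and the defects that slip through when one copy gets missed.

// Hypothetical example: the same hard-coded business rule duplicated in two classes.
// Cheap today; every future change to the rule costs a hunt through the codebase,
// plus whatever bugs slip through when one of the copies gets missed.
public class OrderService
{
    public decimal QuoteTotal(decimal subtotal)
    {
        return subtotal * 1.07m; // 7% tax, hard-coded
    }
}

public class InvoicePrinter
{
    public string FormatTotal(decimal subtotal)
    {
        return (subtotal * 1.07m).ToString("C"); // same 7% rate, duplicated
    }
}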

Read More

Rewrite or Refactor?

Editorial Note: I originally wrote this post for the NDepend blog.  You can find the original here, at their site.  While you’re there, take a look at some of the other posts and announcements.  

I’ve trod this path before in various incarnations, and I’ll do it again today.  After all, I can think of few topics in software development that draw as much debate as this one.  “We’ve got this app, and we want to know if we should refactor it or rewrite it.”

For what it’s worth, I answer this question for a living.  And I don’t mean that in the general sense that anyone in software must ponder the question.  I mean that CIOs, dev managers and boards of directors literally pay me to help them figure out whether to rewrite, retire, refactor, or rework an application.  I go in, gather evidence, mine the data and state my case about the recommended fate for the app.

Because of this vocation and because of my writing, people often ask my opinion on this topic.  Today, I yet again answer such a question.  “How do I know when to rewrite an app instead of just refactoring it?”  I’ll answer.  Sort of.  But, before I do, let’s briefly revisit some of my past opinions.

Getting the Terminology Right

Right now, you’re envisioning a binary decision about the fate of an application.  It’s old, tired, clunky, and perhaps embarrassing.  Should you scrap it, write it off, and start over?  Or should you power through, molding it into something cleaner, more modern, and more adaptable?  Fine.  But let’s first consider that the latter option does not constitute a “refactoring.”

A while back, I wrote a post called “Refactoring is a Development Technique, Not a Project.”  You can read the argument in its entirety, but I’ll summarize briefly here.  To “refactor” code is to restructure it without altering external behavior.  For instance, to take a large method and extract some of its code into another method.  But when you use “refactor” as an alternative to “rewrite,” you mean something else.
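If it helps to see that distinction in code, here’s a minimal, hypothetical sketch of refactoring in that sense: an “extract method” applied to an invented UserService.  The external behavior stays identical; only the internal structure changes.

using System;

// Hypothetical types, invented purely for illustration.
public record User(string Email, string Password);

public interface IUserRepository
{
    void Save(User user);
}

public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository) => _repository = repository;

    // Before: validation and persistence jumbled together in one method.
    public void RegisterUserBefore(string email, string password)
    {
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
            throw new ArgumentException("Invalid email.", nameof(email));
        if (password == null || password.Length < 8)
            throw new ArgumentException("Password too short.", nameof(password));

        _repository.Save(new User(email, password));
    }

    // After: identical external behavior, with validation extracted into its own method.
    public void RegisterUserAfter(string email, string password)
    {
        ValidateCredentials(email, password);
        _repository.Save(new User(email, password));
    }

    private static void ValidateCredentials(string email, string password)
    {
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
            throw new ArgumentException("Invalid email.", nameof(email));
        if (password == null || password.Length < 8)
            throw new ArgumentException("Password too short.", nameof(password));
    }
}

Nobody would call that a project.  It’s the sort of restructuring you do continuously, as part of writing code.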

Let’s say that you have some kind of creaky old Webforms application with giant hunks of gnarly code and logic binding the GUI right to the database.  Worse yet, you’ve taken a dependency on some defunct payment processing library that prevents you from updating beyond .NET 2.0.  When you look at this and say, “should I refactor or rewrite,” you’re not saying “should I move code around inside this application or rewrite it?”  Rather, you’re saying, “should I give this thing a face lift or rewrite it?”
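By contrast, here’s a hypothetical sketch (the page, controls, and table are all invented for illustration) of the kind of Webforms code-behind I’m talking about, with the GUI bound right to the database.  No amount of extracting methods changes the fact that the click handler owns a connection string and a SQL statement; untangling that means changing how the application interacts with the database.

// Hypothetical WebForms code-behind: the click handler talks straight to the database.
using System;
using System.Data.SqlClient;

public partial class InvoicePage : System.Web.UI.Page
{
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        // Connection string and SQL embedded directly in the GUI layer.
        using (var connection = new SqlConnection("Server=PRODDB01;Database=Billing;Trusted_Connection=True;"))
        using (var command = new SqlCommand(
            "INSERT INTO Invoices (CustomerName, Amount) VALUES (@name, @amount)", connection))
        {
            // CustomerNameTextBox, AmountTextBox, and ResultLabel would come from the
            // (invented) .aspx markup's designer file.
            command.Parameters.AddWithValue("@name", CustomerNameTextBox.Text);
            command.Parameters.AddWithValue("@amount", decimal.Parse(AmountTextBox.Text));
            connection.Open();
            command.ExecuteNonQuery();
        }

        ResultLabel.Text = "Saved.";
    }
}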

So let’s chase some precision in terms here.  Refactoring happens on an ongoing and relatively minor basis.  If you undertake something that constitutes a project, you’re changing the software: altering the way it interacts with the database, swapping out a dependency, updating your code to a new version of the framework, etc.  From here forward, let’s call that a reworking of the application.

Read More

How to Deliver Software Projects on Time

Editorial Note: I originally wrote this post for the NDepend blog.  You can check out the original here, at their site.  While you’re there, download NDepend and give it a try.

Someone asked me recently, almost in passing, about the keys to delivering software projects on time.  In this particular instance, it was actually a question of how to deliver .NET projects on time, but I see nothing particularly unique to any one tech stack or ecosystem.  In any case, the question piqued my interest, since I’m more frequently called in as a consultant to address issues of quality and capability than to address slipped deadlines.

To understand how to deliver projects on time (or, conversely, the mechanics of failing to deliver on time) requires a quick bit of term deconstruction.  The concept of “on time” consists of two familiar concerns from software parlance: scope and delivery date.  Specifically, for something to be “on time,” there has to be an expectation of what will be delivered and when it will be delivered.

How We Get This Wrong

Given that timeliness of delivery is such an apparently simple concept, we sure do find a lot of ways to get it wrong.  I’m sure that no one reading has to think long and hard to recall a software project that failed to deliver on time.  Slipped deadlines abound in our line of work.

The so-called “waterfall” approach to software delivery has increasingly fallen out of favor of late.  This is a methodology that attempts to resolve all unknowns at once through extensive up-front planning and estimation.  “The project will be delivered in exactly 19 months, for 9.4 million dollars, with all of the scope outlined in the requirements documents, and with a minimum standard of quality set forth in the contract.”  This approach runs afoul of a concept sometimes called “the iron triangle of software development,” which holds that the more you fix one concern (scope, cost, delivery date), the more the others will wind up varying: a kind of Heisenberg uncertainty principle of software.  The waterfall approach of just planning harder and harder until you get all of them right thus becomes something of a fool’s errand.

Let’s consider the concept of “on time,” then, in a vacuum.  This features only two concerns: scope and delivery date.  Cost (and quality, if we add that to the mix as a possible variant and have an “iron rectangle”) fails to enter into the discussion.  That tends to lead organizations with deep pockets to respond to lateness in a predictable way: by throwing resources at it.  This approach runs afoul of yet another aphorism in software, known as Brooks’ Law: adding manpower to a late software project makes it later.
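For a quick piece of intuition behind Brooks’ Law, consider communication overhead: a team of n people has n(n-1)/2 potential pairwise communication channels, so coordination cost grows much faster than headcount.  A back-of-the-napkin sketch, in C#, just to make the arithmetic concrete:

using System;

static class TeamMath
{
    // Pairwise communication channels for a team of n people: n * (n - 1) / 2.
    static int Channels(int teamSize) => teamSize * (teamSize - 1) / 2;

    static void Main()
    {
        Console.WriteLine(Channels(5));   // 10 channels for a team of 5
        Console.WriteLine(Channels(10));  // 45 channels for a team of 10
        // Doubling the team roughly doubles raw capacity, at best, while the
        // coordination overhead more than quadruples.
    }
}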

If we accept both Brooks’ Law and the Iron Triangle as established wisdom, our prospects for hitting long range dates with any reliability start to seem fairly bleak.  We must do one of two things, with neither one being particularly attractive.  Either we have to plan to dramatically over-spend from day 1 (instead of when the project is already late) or we must pad our delivery date estimate to such an extent that we can realistically hit it (really, just surreptitiously altering delivery instead of cost, but without seeming to).

Read More

Software Architect as a Developer Pension Plan

I’m pretty sure that I’m going to get myself in trouble with this one.  Before I get started and the gnashing of teeth and stamping of feet commence, let me offer an introductory disclaimer here.  What I am about to say offers no commentary on people with the title, “software architect” (a title that I’ve had myself, by the way).  Rather, I offer commentary on the absurd state of software development in the corporate world.

The title “software architect” is silly (mostly because of the parallel to building construction) and the role shouldn’t exist.  Most of the people that hold this title, on the other hand, are smart, competent folks that know how to produce software and have the battle scars to prove it.  We’ve arrived at this paradoxical state of affairs because of two essential truths about the world: the corporation hasn’t changed much in the last century and we software developers have done an utterly terrible job capitalizing on the death grip we have on the world’s economy.

A Question of Dignity

I’m not going to offer thoughts on how to correct that here.  I’m doing that in my upcoming book.  Today, I’m going to answer a question I heard posed to the Freelancer’s Show Podcast.  Paraphrased from memory, the question was as follows.

I work for a small web development firm.  I was in a meeting where a guy said that he’d worked for major players in Silicon Valley.  He then said that what web and mobile engineers offer is a commodity service and that he wanted us to serve as architects, leaving the less-skilled work to be done by offshore firms.  How does one deal with this attitude?  It’s a frustrating and demeaning debate to have with clients.

This question features a lot that we could unpack.  But I want to zero in on the idea of breaking software work into two categories: skilled work and unskilled work.  This inherently quixotic concept has mesmerized business people into poor software decisions for decades.  And it shows no signs of letting up.

Against this backdrop, the “major player’s” attitude makes sense.  Like the overwhelming majority of the business world, he believes the canard about dividing work this way.  His view of the unskilled part as a commodity that can be done offshore smacks of business wisdom.  Save the higher-waged, smart people for the smart-people work, and pay cheap dullards to do the brainless aspects of software development.

Of course, the podcast listener objects.  He objects to the notion that part of what he does fits into the “cheap commodity” category.  It “demeans” him and his craft.  He understands the complexities of building sites and apps, but his client views these things as simple and best delegated to unskilled grunts.

Why the Obsession with Splitting Software Work?

It bears asking why this thinking seems so persistent in the business world.  And at the risk of oversimplifying for the sake of a relatively compact blog post, I’ll sum it up with a name: Taylor.  Frederick Taylor advanced something simultaneously groundbreaking and mildly repulsive called Scientific Management.  In short, he applied scientific method principles to the workplace of the early 1900s in order to realize efficiency gains.

At first, this sounds like the Lean Startup.  It sounds even better when you factor in that Taylor favored more humanizing methods of getting better work out of people than whacking them and demanding that they work harder.  But then you factor in Taylor’s view of the line-level worker, and you can see the repulsive side:

The labor should include rest breaks so that the worker has time to recover from fatigue. Now one of the very first requirements for a man who is fit to handle pig iron as a regular occupation is that he shall be so stupid and so phlegmatic that he more nearly resembles in his mental make-up the ox than any other type. The man who is mentally alert and intelligent is for this very reason entirely unsuited to what would, for him, be the grinding monotony of work of this character. Therefore the workman who is best suited to handling pig iron is unable to understand the real science of doing this class of work.

Basically, you can split industry into two camps of people: managers who think and imbeciles who labor.  Against this backdrop, the humanizing angle becomes… actually sorta dehumanizing.  In Taylor’s view, the grunts shouldn’t be whipped like horses, not because it’s dehumanizing, but because it’s not effective.  Better ways exist to coax performance out of the beasts.  Feed them carrots instead of hitting them with sticks.

Depressingly, the enterprise of today looks a lot like the enterprise of 100 years ago: efficiency-obsessed and convinced that middle management exists to assemble humans into bio-machines that needn’t think for themselves.  Never mind that this made sense for assembling cars and manufacturing textiles, but not so much for knowledge-work projects.  Like the original cargo cultists, modern corporations are still out there waving sticks around in the air and hoping food will drop out of the sky.

Read More