DaedTech

Stories about Software


Linq Order By When You Have Property Name

Without reflection, we go blindly on our way, creating more unintended consequences, and failing to achieve anything useful.
–Margaret J. Wheatley

Ordering By a Column Name

Quick tip today in case anyone runs into this. Frequently you have some strongly typed object and you want to order by some property on that object. No problem — Linq’s IEnumerable<T>.OrderBy() to the rescue. But what about when you don’t have a strongly typed object at runtime and you only have the property’s name?

In a little project I’m working on at the moment, this came up. In this project, I’m parsing SQL queries (a subset of SQL, anyway) and translating them into web service requests for Autotask. All of the Autotask web service’s entities are children of a base class simply called Entity. Entities have ids in common, but little else. So the situation is that I’m going to get a query of the form “SELECT * FROM Account ORDER BY AccountName” (i.e. just a string), and I’m going to have to pull a series of strongly typed objects out of the API and figure out how to sort them by “AccountName” at runtime. The tricky part is that I don’t know at compile time what object type I’ll be getting back, much less which property on that type I’ll be using to sort. So something like entities.OrderBy(e => e.AccountName) is obviously right out.

So what we need is a way of mapping the string to a property and then matching that property to a strongly typed value on the object that can be used for ordering.

private static IEnumerable<T> OrderBy<T>(IEnumerable<T> entities, string propertyName)
{
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    // Use the runtime type of an actual element, not typeof(T), so that
    // properties of the derived entity type are visible.
    var propertyInfo = entities.First().GetType().GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}

This method first checks a couple of preconditions: that an actual value is supplied for the property name (obviously) and that any entities exist for sorting. This last one might seem a little strange, but it makes sense when you think about it. The reason it makes sense, if you’ll recall my post on type variance, is that the generic type of the enumerable is strictly a compile time designation. As such, this method is going to be compiled with T as Entity, meaning it operates on IEnumerable<Entity> rather than, say, IEnumerable<Account> or any other derivative.

Now, if you did this:

private static IEnumerable<T> OrderBy<T>(IEnumerable<T> entities, string propertyName)
{
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    // typeof(T) resolves at compile time, so for a sequence typed as the
    // base class this looks for the property on Entity, not the derived type.
    var propertyInfo = typeof(T).GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}

…you would have a problem. Since T is going to be compiled as Entity, you’re going to be looking for properties of the derived class using the type information associated with the base class, which will fail, causing the returned propertyInfo to be null and then a null reference exception on the next line. Since we have no way of knowing at compile time what sort of entity we’re going to have, we have to check at run time. And, in order to do that, we need an actual instance of an entity. If we just have an empty enumerable, this is strictly unknowable.
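To make that concrete, here’s a quick sketch (assuming, per the example query, an Account entity that derives from Entity and exposes an AccountName property):

// Declared as the base type, so the compiler binds T to Entity:
IEnumerable<Entity> entities = new List<Entity> { new Account() };

// The runtime type of the first element is Account, which has AccountName:
var found = entities.First().GetType().GetProperty("AccountName");   // non-null

// The compile-time type is Entity, which does not:
var missing = typeof(Entity).GetProperty("AccountName");             // null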

My solution here is a private static method because I have no use for it (yet) in any other scope or class. But, if you were so inclined, you could create an extension method pretty easily:

public static IEnumerable<T> OrderBy<T>(this IEnumerable<T> entities, string propertyName)
{
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    var propertyInfo = entities.First().GetType().GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}

If you were going to do this, though, I’d suggest making the method a tad more robust, as you might get a variety of interesting edge cases thrown at it.
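For instance, a minimally hardened version might look something like this (the guard clauses and exception choices are just ones I’d consider reasonable, not gospel):

public static IEnumerable<T> OrderBy<T>(this IEnumerable<T> entities, string propertyName)
{
    if (entities == null)
        throw new ArgumentNullException("entities");
    if (!entities.Any() || string.IsNullOrEmpty(propertyName))
        return entities;

    // Guard against a property name that doesn't exist on the runtime type.
    var propertyInfo = entities.First().GetType().GetProperty(propertyName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    if (propertyInfo == null)
        throw new ArgumentException("No public instance property named " + propertyName, "propertyName");

    return entities.OrderBy(e => propertyInfo.GetValue(e, null));
}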


A Metaphor to Help You Suck at Writing Software

“No plan survives contact with the enemy” –Helmuth von Moltke the Elder

Bureaucracy 101

Let’s set the scene for a moment. You’re a workaday developer in a workman kind of shop. A “waterfall” shop. (For back story on why I put quotes around waterfall, see this post). There is a great show of force when it comes to building software. Grand plans are constructed. Requirements gathering happens in a sliding sort of way where there is one document for vague requirements, another document for more specific requirements, a third document for even more specific requirements than that, and repeat for a few more documents. Then, there is the spec, the functional spec, the design spec, and the design document. In fact, there are probably several design documents.

There aren’t just the typical “waterfall” phases of requirements->design->code->test->toss over the wall, but sub-phases and, when the organism grows large enough, sub-sub-phases. There are project managers and business managers and many other kinds of managers. There are things called change requests and those have their own phases and documents. Requirements gathering is different from requirements elaboration. Design sub-phases include high-level, mid-level and low-level. If you carefully follow the process, most likely published somewhere as a mural-sized state machine or possibly a Gantt chart unsurpassed in its perfect hierarchical beauty, you will achieve the BUFD nirvana of having the actual writing of the code require absolutely no brain power. Everything will be so perfectly planned that a trained ape could write your software. That trained ape is you, workaday developer. Brilliant business stakeholder minds are hard at work perfecting the process of planning software in such fine grained detail that you need not trouble yourself with much thinking or problem solving.

Dude, wait a minute. Wat?!? That doesn’t sound desirable at all! You wake up in the middle of the night one night, sit bolt upright and are suddenly fundamentally unsure that this is really the best approach to building a thing with software. Concerned, you approach some kind of senior business project program manager and ask him about the meaning of developer life in your organization. He nods knowingly, understandingly and puts one arm on your shoulders, extending the other out in broad, professorial arc to help you share his vision. “You see my friend,” he says, “writing software is like building a skyscraper…” And the ‘wisdom’ starts to flow. Well, something starts to flow, at any rate.

Let’s Build a Software Skyscraper

Like a skyscraper, you can’t just start building software without planning and a lot of upfront legwork. A skyscraper can’t simply be assembled by building floors, rooms, walls, etc. independently and then slapping them all together, perhaps interchangeably. Everything is necessarily interdependent and tightly coupled. Just like your software. In the skyscraper, you simply can’t build the 20th floor before the 19th floor is built, and you certainly can’t interchange those ‘parts,’ much like in your software you can’t have a GUI without a database and you can’t just go swapping persistence models once you have a GUI. In both cases, every decision at every point ripples throughout the project and necessarily affects every future decision. Rooms and floors are set in stone in both location and order of construction, just as your classes and modules in a software project have to be built in a certain order and can never be swapped out from then on.

But the similarities don’t end with the fact that both endeavors involve an inseparable web of complete interdependence. It extends to holistic approaches and cost as well. Since software, like a skyscraper, is so lumbering in nature and so permanent once built, the concept of prototyping it is prima facie absurd. Furthermore, in software and skyscrapers, you can’t have a stripped-down but fully functional version to start with — it’s all or nothing, baby. Because of this it’s important to make all decisions up-front and immediately even when you might later have more information that would lead to a better-informed decision. There’s no deferral of decisions that can be made — you need to lock your architecture up right from the get-go and live with the consequences forever, whatever and however horrible they might turn out to be.

And once your software is constructed, your customers had better be happy with it, because boy-oh-boy is it expensive, cumbersome and painful to change anything about it. Like replacing the fortieth floor of a skyscraper, refactoring your software requires months of business stoppage and a Herculean effort to get the new stuff in place. It soars over the budget set forth and slams through and past the target date, showering passersby with falling debris all the while.

To put it succinctly in list form:

  1. There is only one sequence in which to build software and very little opportunity for deviation and working in parallel.
  2. Software is not supposed to be modular or swappable — a place for everything and everything in its place.
  3. The concept of prototyping is nonsensical — you get one shot and one shot only.
  4. It is impossible to defer important decisions until more information is available. Pick things like database or markup language early and live with them forever.
  5. Changing anything after construction is exorbitantly expensive and quite possibly dangerous.

Or, to condense even further, this metaphor helps you build software that is brittle and utterly cross-coupled beyond repair. This metaphor is the perfect guide for anyone who wants to write crappy software.

Let’s Build an Agile Building

Once you take the building construction metaphor to its logical conclusion, it seems fairly silly (as a lot of metaphors will if you lean too heavily on them in their weak spots). What’s the source of the disconnect here? To clarify a bit, let’s work backward into the building metaphor starting with good software instead of using it to build bad software.

A year or so ago, I went to a talk given by “Uncle” Bob Martin on software professionalism. If I could find a link to the text of what he said, I would offer it (and please comment if you have one), but lacking that, I’ll paraphrase. Bob invited the audience to consider a proposition where they were contracting to have a house built and maintained with a particular contractor. The way this worked was you would give the contractor $100 and he would build you anything you wanted in a day. So, you could say “I want a two bedroom ranch house with a deck and a hot-tub and 1.5 bathrooms,” plop down your $100 and come back tomorrow to find the house built to your specification. If it turned out that you didn’t like something about it or your needs changed, the same deal applied. Want another wing? Want to turn the half bath into a full bath? Want a patio instead of a deck? Make your checklist, call the contractor, give him $100, and the next day your wish would be your house.

From there, Bob invited audience members to weigh two different approaches to house-planning: try-it-and-see versus waterfall’s “big design up front.” In this world, would you hire expert architects to form plans and carpenters to flesh them out? Would you spend weeks or months in a “planning phase”? Or would you plop down $100 and say, “well, screw it — I’ll just try it and change it if I don’t like it”? This was a rather dramatic moment in the talk, as the listener realized, just before Bob brought it home, that given a choice between agile “try it and see” and waterfall “design everything up front,” nobody sane would choose the latter. The “waterfall” approach to houses (and skyscrapers) is used because a better approach isn’t possible, not because it’s a good approach when there are alternatives.

Whither the Software-Construction Canard?

Given the push toward Agile software development in recent years and the questionable parallels of the metaphor in the first place, why does it persist? There is no shortage of people who think this metaphor is absurd, or at least misguided:

  1. Jason Haley, “It’s not like Building a House”
  2. Terence Parr, “Why writing software is not like engineering”
  3. James Shore, “That Damned Construction Analogy”
  4. A whole series of people on Stack Overflow
  5. Nathaniel T. Schutta, “Why Software Development IS Like Building a House” (Don’t let the title fool you – give this one a detailed read)
  6. Thomas Guest, “Why Software Development isn’t Like Construction”

If you google things like “software construction analogy” you will find literally dozens of posts like these.

So why the persistence? Well, if you read the last article, by Thomas Guest, you’ll notice a reference to Steve McConnell’s iconic book “Code Complete.” That book has an early chapter that explores a variety of metaphors for software development, and it offers this one up. In my first daedtech post, I endorsed the metaphor but thought we could do better. I stand by that endorsement not because it’s a good metaphor for how software should be developed but because it’s a good metaphor for how it is developed. As in our hypothetical shop from the first section of the post, many places do use this approach to write (often bad) software. But the presence of the metaphor in McConnell’s book, and for years and years before that, highlights one of the main reasons for its persistence: inertia. It’s been around a long time.

But I think there’s another, more subtle reason it sticks around. Hard as it was to find posts in favor of the software-construction pairing, the ones I did find share an interesting trait. Take a look at this post, for instance. As “PikeWake” gets down to explaining the metaphor, the first thing that he does is talk about project managers and architects (well, the first thing is the software itself, but right after that come the movers and shakers). Somewhere below that, the low-skill grunts who actually write the software get a nod as well. Think about that for a moment. In this analogy, the most important people in the software process are the ones with corner offices, direct reports and spreadsheets, and the people who actually write the software are fungible drones paid to perform repetitive action rather than knowledge work. Is it any wonder that ‘supervisors’ and other vestiges of the pre-Agile, command-and-control era love this metaphor? It might not make for good software, but it sure makes for good justification of roles. It’s comfortable in a world where companies like github are canning the traditional, hierarchical model, valuing the producers over the supervisors, and succeeding.

Perhaps that’s a bit cynical, but I definitely think there’s more than a little truth there. If you stripped out all of the word documents, Gantt charts, status meetings and other typical corporate overhead and embraced a world where developers could self-organize, prioritize and adapt, what would people with a lot of tenure but not a lot of desire or skill at programming do? If there were no actual need for supervision, what would happen? These can be unsettling, game changing questions, so it’s easier to cast developers as low-skill drones that would be adrift without clever supervisors planning everything for them than to dispense with the illusion and realize that developers are highly skilled, generally very intelligent knowledge workers quite capable of optimizing processes in which they participate.

In the end, it’s simple. If you want comfort food for the mid-level management set and mediocrity, then invite someone in to sweep his arm professorially and spin a feel-good tale about how building software is like building houses and managing people is like a father nurturing his children. If you want to be effective, leave the construction metaphor in the 1980s where it belongs.


How to Disable Controls During Postback in ASP.NET

The other day, I was working on a page in a webforms app where a postback, triggered by a button click, kicked off a bit of processing that would run for 10-20 seconds. While this is going on, it makes sense to disable the clicked button and, for that matter, the other controls too. Since the processing occurs on the server, the only way to achieve this effect is to disable the buttons and other controls on the client side, using javascript. The following is the series of steps leading up to getting this right. If you just want to see what worked, you can skip to the end.

The first thing I did was find a bit of jquery that would disable things on the page. I put this into the user control in question:
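Something to this effect (a sketch rather than the verbatim original; any jQuery that disables all of the page’s inputs will do):

<script type="text/javascript">
    // Disable every input control on the page until the response arrives.
    function disableOnPostback() {
        $(":input").attr("disabled", true);
    }
</script>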


From there, I found that the way to distinguish between a server-side click handler (“OnClick” property) and a client-side one was to use OnClientClick, like so:
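<%-- A sketch of the markup (ID and text are illustrative): --%>
<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click" OnClientClick="disableOnPostback()" />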


Here we have some standard button boilerplate, the server side event handler “SearchButton_Click” and the new OnClientClick that triggers javascript invocation of our jquery implementation. I was pretty pumped about this and ready to have my search button disable all client side controls until the server returned a response. I fired it up, clicked the search button, and absolutely nothing happened. Not only was nothing disabled, but there was no postback. After some googling around, someone recommended adding “return true;” after the disableOnPostback() call. Apparently any intervening client side handler not returning true is assumed to return false, which stops the postback. So here is the new attempt:
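<%-- The same sketch, now explicitly returning true: --%>
<asp:Button ID="SearchButton" runat="server" Text="Search"
    OnClick="SearchButton_Click" OnClientClick="disableOnPostback(); return true;" />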


This had no discernible effect, and after some searching, I found that the meat of the issue here is that disabling the button apparently also disables its ability to trigger a postback. We need to tell the button to fire the postback regardless, which apparently can be accomplished with UseSubmitBehavior=false as a property.
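<%-- The same sketch again, adding UseSubmitBehavior="false": --%>
<asp:Button ID="SearchButton" runat="server" Text="Search" UseSubmitBehavior="false"
    OnClick="SearchButton_Click" OnClientClick="disableOnPostback(); return true;" />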


I tried this and, finally, something different! The only problem was that it was a partial success. The disabling of controls finally worked, but the postback never happened. On a hunch, I took out the return true and arrived at my final answer:
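<%-- The final sketch: UseSubmitBehavior="false" and no return statement: --%>
<asp:Button ID="SearchButton" runat="server" Text="Search" UseSubmitBehavior="false"
    OnClick="SearchButton_Click" OnClientClick="disableOnPostback()" />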


This, combined with the jquery at the top of the page, is what finally worked. So if you have a button that triggers a postback with a lengthy operation and you want to disable all controls until the operation completes and returns a response, this should do the trick. I am not yet an expert in under-the-covers webforms particulars, so the theory is still a little hazy on my end, but hopefully this helps anyone in a similar position. Also, if you are an expert in this stuff, please feel free to weigh in on the theory at play here.

One final thing that I’ll mention is that I did find something called Postback Ritalin during my searches. This seems to offer a control that takes care of this for you, but I didn’t really want to introduce any third party dependencies, so I didn’t try anything with it myself.



Discoverability Instead of Training and Manuals

Documentation and Training as Failures

Some time back, I was listening to someone explain the finer points of various code that he had written when he lamented the lack of documentation and training available for prospective users of this code. I thought to myself rather blithely and flippantly, “why – just write the code so that documenting it and training people to use it aren’t necessary.” I attributed this to being in a peevish mood or something, but reflecting on this later, I thought earnestly, “snarky Erik is actually right about this.”

Think about the way software development generally goes, especially if you’re developing code to serve as a framework or utility for teammates and other developers. You start off with clean code and good intentions and you hammer away at making some functional software. Often things go well, but here and there you hit snags and you do a bit of duct-taping and work-around-ing (working around?), vowing to return later to straighten things out. Sometimes you do just that, but other times you realize that time and budget are finite resources for the effort and you reconcile yourself to shipping something that’s not quite perfect.

But you don’t just ship something imperfect, because you’re diligent and responsible. What do you do instead? You go into those nasty areas of the code and you write inline comments, possibly containing apologies. You make sure that the XML/Java doc comments above the methods/classes are quite thorough as well and, for good measure, you probably even write up some kind of manual or Word document, perhaps with a Visio diagram. Where the code is clear, you let it speak for itself, and where it’s less than clear, you document.

We could put this another, perhaps blunter way: “we generally try to write clean code, and we document when we fail to do so.” We might reasonably think of documentation as something that we do when our work and intentions fail to speak for themselves. This seems a bit iconoclastic in the face of conventional methods of communicating and processing information. I grew up as a programmer reading through the “man pages” to understand all manner of *Nix command line utilities, system calls, etc. I learned the nitty-gritty of how concepts like semaphores, IPC, and threading worked in this fashion, so it seems a bit blasphemous, even to me, to accuse the authors of these APIs of failing to be clear or, really, failing in any way.

And yet, here we are. To be clear, I don’t think that writing code for which clients need to read manuals is a failure of design or of correctness or of a project or utility on the whole. But I do think it’s a failure to write self-documenting code. And I think that for decades, we’ve had a culture in which this wasn’t viewed as a failure of any kind. What are we chided to do when we get a new appliance or gadget? Well, read the manual. There’s even an iconic acronym of exasperation for people who don’t do so prior to asking questions: RTFM. In the interest of keeping the blog’s PG rating, I won’t say here what it stands for. In this culture, the engineering particulars and internal mechanisms of things have been viewed as unknowable mysteries, and the means by which communication is offered and understanding reached has been the large and often formidable manual, with dozens of pages of appendices, notes, and works cited. But is that really the best way to do things in all cases? Aren’t there times when it might be a lot better to make something that screams how it should be used instead of wasting precious time?

[Image: lifejacket instructions. Courtesy of “AlMare” via Wikimedia Commons]

A Changing Culture

An interesting thing has happened in recent years, spurred on largely by Apple, initially, and now I’d say by the mobile computing movement in general, since Google and Microsoft have followed suit in their designs. Apple made it cool to toss the manual and assume that it is the responsibility of the maker of the thing, rather than the user, to ensure that understanding is reached. In the development world, champions of clean, self-documenting code have existed prior to whatever Apple might have been doing in the popular market, but the concept certainly got a large, public boost from Apple and its marketing cachet and those who subsequently got on board with the movement.

Look at the current state of applications being written. This fall, I had the privilege of attending That Conference, Dotnet Rocks Edition, and seeing Lwin Maung speak about mobile concepts and the then soon-to-be-released Windows 8 and its app ecosystem. One of the themes of the talk was how apps informed you of how to use them in intuitive ways. You didn’t read a manual to know that the news app had additional content — it told you by leaving the next story link halfway off the side of the screen, practically begging you to paw at it and scroll to the side. The idea of windows with lots of headers at the top, from which you drill hierarchically into the application, is gone, replaced instead by visual cues that are borderline impossible to screw up.

As this becomes popular in terms of user experience, I submit that it should also become popular with software development. If you find yourself writing some method with the signature DoStuff(bool, bool, int, bool, string, bool) you’ll probably (hopefully) think “man, I better document this because no one will ever figure it out.” But I ask you to take it a step further. If you have the time to document it, then why not spend that time fixing it instead of explaining yourself through documentation? Rename DoStuff to describe exactly what stuff it does, make the parameters significantly fewer, get rid of the Booleans, and make it something that’s pretty much impossible to misunderstand, like string.RemoveCharactersFromTheEnd(6). I bet you don’t need multiple appendices or even a manual to figure out what that does.
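As a quick sketch of what that kind of rename buys you (the implementation here is just one plausible guess at the hypothetical RemoveCharactersFromTheEnd):

// Before: the call site is a riddle.
// target.DoStuff(true, false, 6, true, null, false);

// After: the signature documents itself.
public static class StringExtensions
{
    public static string RemoveCharactersFromTheEnd(this string text, int numberOfCharacters)
    {
        // Keep everything except the last numberOfCharacters characters,
        // bottoming out at the empty string instead of throwing.
        var charactersToKeep = Math.Max(0, text.Length - numberOfCharacters);
        return text.Substring(0, charactersToKeep);
    }
}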

Please note that I’m not suggesting that we toss out all previous ways of doing things or stop documenting altogether. Documentation certainly has a time and a place, and not all products or APIs lend themselves to being completely discoverable. What I am suggesting is that we change our culture as developers from “RTFM!!!!” to “could I have made that clearer?” We’ve come a long way as the discipline of programming matures, and we have more and more stakeholders who are less and less technical depending on us for more and more things. Communication is increasingly important, and communication on clear, broadly understandable terms at that. You’re no longer writing methods consumed by a handful of fellow geeks using your code to put together a BBS about how to program in COBOL. You’re no longer writing code where each byte of memory and disk space is precious, so it’s far better to be verbose in voluminous manuals than in method or variable names. You’re (for the most part) no longer writing code where optimizing a few cycles trumps readability. You’re writing code in a time when terms like “agile” and “maintainable” reign supreme, there’s no real cost to self-describing code, and the broader populace expects its technology to be discoverable. It’s a great time to be a developer — embrace it.


Scoping And Accessibility Quirks in C#

As I mentioned recently, I’ve taken to using an inheritance scheme in my approach to unit testing. Because of the mechanics of this scheme, making a class under test internal this morning brought to light two relatively obscure properties of scoping and visibility in C# that you might not be aware of:

  1. Internal can be “less visible” than protected.
  2. Private isn’t always private.

Let me explain by showing the situation in which I found myself. As part of an open source project I’m working on at the moment to allow SQL-like querying of Autotask data through its API, I’ve been writing a set of tests on a class called “SqlQuery” in which I take a SQL statement and parse out the parts I’m interested in:

[TestClass]
public class SqlQueryTest
{
    protected SqlQuery Target { get; set; }

    [TestInitialize]
    public void BeforeEachTest()
    {
        Target = new SqlQuery("SELECT id FROM Account");
    }

    [TestClass]
    public class Columns : SqlQueryTest
    {
        [TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
        public void Contains_One_Element_For_One_Selected_Column()
        {
            Assert.AreEqual(1, Target.Columns.Count());
        }
...

Up until now, the class under test, SqlQuery, had been public, but I realized that this is an abstraction that only matters in the lower layer assembly rather than at the GUI level, so I added an InternalsVisibleTo for the test project to the properties of the assembly under test. With that in place, I downgraded the SqlQuery class to internal and was momentarily surprised by a compiler error of “Inconsistent accessibility: property type ‘AutotaskQueryService.SqlQuery’ is less accessible than property ‘AutotaskQueryServiceTest.SqlQueryTest.Target’”.


On its face, this seems crazy — “internal” is less accessible than “protected”? But when you think about it, this actually makes sense. “Internal” means “nobody outside of this assembly can see it” and protected means “nobody except for this class and its inheritors can see it.” So what happens if I create a third assembly and declare a class in it that inherits from SqlQueryTest? This class has no visibility to the assembly under test and its internals, but it would have visibility to Target. Hence the strange-seeming but quite correct compiler error. One way to get rid of this error is to make SqlQueryTest internal, and that actually compiled and all tests ran, but I don’t like that solution in the event that I want tests in that class and not just its nested children. I decided on another option: making Target private.
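To make that hypothetical third assembly concrete, here’s a sketch (SneakyTest is an invented class living in a project that references the test assembly but not the service assembly):

// Hypothetical third assembly: it can see SqlQueryTest, but not the
// internals of the assembly under test.
public class SneakyTest : SqlQueryTest
{
    public void Poke()
    {
        // 'protected' would grant an inheritor like this one access to
        // Target, but Target's type, SqlQuery, is internal to an assembly
        // this project cannot see. The compiler flags that contradiction
        // at Target's declaration rather than letting things get this far.
        var query = Target;
    }
}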

If you look at the code snippet above, are you now thinking “but that won’t compile!” After all “Columns” inherits from SqlQueryTest and uses Target and I’ve now just made Target private, so Columns will lose access to it. Well, no, as it turns out. The private scoping in a class means that only the things between the {} of the class can see it. Our nested class here happens to be one of those things. So the scoping trumps the hierarchy in this instance. This can easily be confirmed by changing Target to static and removing the inheritance relationship, which also compiles. The nested class, even when not deriving from the outer class, can access private static members of the outer class.
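Here’s a stripped-down illustration of that last point, with no inheritance in play at all:

public class Outer
{
    private static int _secret = 42;

    public class Nested
    {
        // Compiles: Nested lives between Outer's braces, so Outer's
        // private members are in scope, inheritance or no inheritance.
        public int RevealSecret()
        {
            return _secret;
        }
    }
}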

In the end, my solution here is simple. I make the Target private and move on. But I thought I’d take the opportunity to point out these interesting facets of C# that you probably don’t run across very often.