DaedTech

Stories about Software

New Theme for DaedTech

As you may have noticed, if you’re not reading this via RSS, I’ve changed the look and feel of the site a bit. I wanted to keep the overall theme of the site while getting a little more modern with the UX, and this is the result. The content portion doesn’t expand to the full width of your screen at higher resolutions, things are a little more tiled, and the blog home no longer displays posts in full with all of their text, pictures, etc.

I accomplished this by installing the Catch Box theme and then modifying the CSS to get it to look mostly similar to DaedTech as it existed before. I also made modifications to some of the theme’s PHP pages. I looked at a pretty good cross-section of posts and pages to make sure I ironed out all of the kinks, but no doubt some of you will find ones I missed in the first 45 minutes the theme is live, so please feel free to let me know if anything looks weird or wrong to you, either via email or comments.

My hope is that the theme choice and modifications help to improve the site’s performance. I’m also planning some additional modifications along those lines to speed things up over the next few weeks, time allowing.

Cheers!

Introduction to Web API: Yes, It’s That Easy

REST Web Services Are Fundamental

In my career, I’ve sort of drifted like a wraith among various technology stacks and platforms, working on web sites, desktop apps, drivers, or even OS/kernel level stuff. Anything that you might work on has its enthusiasts, its peculiar culture, and its best practices and habits. It’s interesting to bop around a little and get some cross-pollination so that you can see concepts that are truly transcendent and worth knowing. In this list of concepts, I might include Boolean logic, the DRY principle, a knowledge of data structures, the publish/subscribe pattern, resource contention, etc. I think that no matter what sort of programmer you are, you should be at least aware of these things as they’re table stakes for reasoning well about computer automation at any level.

Add REST services to that list. That may seem weird when compared with the fundamental concepts I’ve described above, but I think it’s just as fundamental. At its core, REST embodies a universal way of saying, “here’s a thing, and here’s what you can do to that thing.” When considered this way, it’s not so different from “DRY,” or “data structures,” or “publish/subscribe” (“only define something once,” “here are different ways to organize things,” and, “here’s how things can do one way communication,” respectively). REST is a powerful reasoning concept that’s likely to be at the core of our increasing connectedness and our growing “internet of things.”

So even if you write kernel code or WinForms desktop apps or COBOL or something, I think it’s worth pausing, digressing a little, and taking a very shallow dive into how this thing works. It’s worth doing once, quickly, so you at least understand what’s possible. Seriously. Spend three minutes doing this right now. If you stumbled across this on Google while looking for an introduction to Web API, skim no further, because here’s how you can create your very own REST endpoint with almost no work.

Getting Started with Web API

Prerequisites for this exercise are as follows:

  1. Visual Studio (I’m using 2012 Professional)
  2. Fiddler

With just these two tools, you’re going to create a REST web service, run it in a server, make a valid request, and receive a valid response. This is going to be possible and stupid-easy by virtue of a framework called Web API.

  1. Fire up Visual Studio and click “New->Project”.
  2. Select “Web” under Visual C# and then choose “ASP.NET MVC 4 Web Application”.
    (Screenshot: MVC project selection in the New Project dialog)
  3. Now, choose “Web API” as the template and click “OK” to create the project.
    (Screenshot: choosing the Web API project template)
  4. You will see a default controller file created containing this class:
    (see the controller sketch just after this list)
  5. Hit F5 to start IIS Express and your web service. It will launch a browser window that takes you to the default page and explains a few things to you about REST. Copy the URL, which will be http://localhost:12345, where 12345 is whatever local port the server happens to be running on.
  6. Now launch Fiddler and paste the copied URL into the URL bar next to the dropdown showing “GET”, then add api/values after it. Go to the request header section, add “Content-Type: application/json”, and press Execute (near the top right).
    (Screenshot: the request composed in Fiddler)
  7. A 200 result will appear in the results panel at the left. Double-click it and you’ll see a little tree view with a JSON node and “value1” and “value2” under it as children. Click on the “Raw” view to see the actual text of the response.
    (Screenshot: the response viewed in Fiddler)
  8. What you’re looking at is the data returned by the “Get()” method in your controller as a raw HTTP response. If you switch to “TextView”, you’ll see just the JSON ["value1","value2"], which is what this thing will look like to most JSON-savvy parsing tools.
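
For reference, here is roughly what that default controller looks like in the MVC 4 Web API project template (reproduced from memory rather than from the original gist, with the project namespace omitted, so treat it as a close approximation):

    using System.Collections.Generic;
    using System.Web.Http;

    public class ValuesController : ApiController
    {
        // GET api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        public string Get(int id)
        {
            return "value";
        }

        // POST api/values
        public void Post([FromBody]string value)
        {
        }

        // PUT api/values/5
        public void Put(int id, [FromBody]string value)
        {
        }

        // DELETE api/values/5
        public void Delete(int id)
        {
        }
    }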

So what’s all of the fuss about? Why am I so enamored with this concept?

Well, think about what you’ve done here without actually touching a line of code. Imagine that I deployed this thing to https://daedtech.com/api/values. If you want to know what values I had available for you, all you’d need to do is send a GET request to that URL, and you’d get a bare-bones response that you could easily parse. It wouldn’t be too much of a stretch for me to get rid of those hard-coded values and read them from a file, or from a database, or the National Weather Service, or from Twitter hashtag “HowToGetOutOfAConversation,” or anything at all. Without writing any code at all, I’ve defined a universally accessible “what”–a thing–that you can access.
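
To make that a bit more concrete, here is a minimal sketch of a client doing exactly that with .NET’s HttpClient (available in .NET 4.5). The port is a placeholder; point it at wherever your copy of the service is actually running.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;

    class ValuesClient
    {
        static void Main()
        {
            using (var client = new HttpClient())
            {
                // Ask for JSON in the response.
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // Substitute whatever port IIS Express assigned to your project.
                var response = client.GetAsync("http://localhost:12345/api/values").Result;
                response.EnsureSuccessStatusCode();

                // Prints the raw JSON, e.g. ["value1","value2"]
                Console.WriteLine(response.Content.ReadAsStringAsync().Result);
            }
        }
    }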

We generally think of URLs in the context of places where we go to get HTML, but they’re rapidly evolving into more than that. They’re where we go to get a thing. And increasingly, the thing that they get is dictated by the parameters of the request and the HTTP verb supplied (GET, POST, etc.–I won’t get too detailed here). This is incredibly powerful because we’re eliminating the question “where” and decoupling “how” from “what” on a global scale. There’s only one possible place on earth you can go to get the DaedTech values collection, and it’s obviously at daedtech.com/api/values. You want a particular value with id 12? Well, clearly that’s at daedtech.com/api/values/12–just send over a GET if you want it. (I should note you’ll just 404 if you actually try these URLs.)

So take Web API for a test drive and kick the tires. Let the powerful simplicity of it wash over you a bit, and then let your mind run wild with the architectural possibilities of building endpoints that can talk to each other without caring about web server, OS, programming language, platform, device-type, protocol setup and handshaking, or really anything but the simple, stateless HTTP protocol. No matter what kind of programming you do at what level, I imagine that you’re going to need information from the internet at some point in the next ten years–you ought to learn the basic mechanics of getting it.

Characterization Tests

The Origins of Legacy Code

I’ve been reading the Michael Feathers book, Working Effectively with Legacy Code and enjoying it immensely. It’s pushing ten years old, but it stands the test of time quite well–probably much better than some of the systems it uses as examples. And there is a lot of wisdom to take from it.

When Michael describes “legacy code,” he isn’t using the definition as you’re probably accustomed to seeing. I’d hazard a guess that your definition would be something along the lines of “code written by departed developers” or maybe just “old, bad code.” But Michael defines legacy code as any code that isn’t covered by automated regression tests (read: unit tests). So it’s entirely possible and common for developers to be writing code that’s legacy code as soon as it’s checked in.

I like this definition a lot, and not, as some might suspect, out of any purism. I’m not equating “legacy” with “bad,” nor embracing the definition as a backhanded way to say that people who don’t develop the way that I do write bad code. The reason I like the “test-less” definition of “legacy code” is that it brings to the fore the largest association that I have with legacy code, which is fear of changing it.

Think about what runs through your head when you’re tasked with making changes to some densely packed, crusty old system. It’s probably a sense of honest-to-goodness unease or demotivation as you realize the odds of getting things right are low and the odds of headaches and blame are high. The code is a rat’s nest of dependencies and weird workarounds, and you know that touching it will be painful.

Now consider another situation that’s different but with similar results. You have some assignment that you’ve worked on for weeks or months. It’s complicated, the customer isn’t sure what he wants, there have been lots of hiccups and setbacks, and there’s budget and deadline pressure. At the bitter end, after a few all-nighters, a bit of scope reduction, and some concessions, you somehow finally get all of the key features working for the most part. You check in the code for shipping, thinking, “I have no idea how this is working, but thank God it is, and I am never touching that again!” You’ve written code that was legacy code from the get-go.

Legacy code isn’t just the bad code that the team before you wrote, or some crusty old stuff from three language versions ago, or some internal homegrown VBA and Excel written by Steve, who’s actually an accountant. Legacy code is any code that you don’t want to touch because it’s fragile.

Getting Things Under Control

In his book, Michael Feathers lays out a lot of excellent strategies for taming out-of-control legacy code. I highly recommend giving it a read. But he coins a term and technique that I’d like to mention today. It’s something that I think programmers should be aware of because it helps lower the barriers to getting started with unit testing. And that term is “characterization tests.”

Characterization tests are the “I’m Okay, You’re Okay,” Rorschach approach to documenting code. There are no wrong answers–just documenting the way things exist. So if you have a method called AddTwoNumbers(int, int) and it returns 12 when you feed it 1 and 1, you don’t say “that’s wrong.” Instead you write a test that documents that return value and you move on, seeing and documenting what it does with other inputs.
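
In code, a characterization test for that hypothetical method might look something like this (NUnit syntax; LegacyCalculator and its goofy arithmetic are stand-ins I made up for whatever your system actually contains):

    using NUnit.Framework;

    // Stand-in for some legacy class you don't control; imagine this arithmetic
    // buried somewhere in a far messier system.
    public class LegacyCalculator
    {
        public int AddTwoNumbers(int first, int second)
        {
            return first + second + 10;
        }
    }

    [TestFixture]
    public class LegacyCalculatorCharacterizationTests
    {
        [Test]
        public void AddTwoNumbers_Returns_12_When_Given_1_And_1()
        {
            var calculator = new LegacyCalculator();

            var result = calculator.AddTwoNumbers(1, 1);

            // Not "correct" -- just what the code actually does today.
            Assert.AreEqual(12, result);
        }
    }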

Sound crazy? Well, it’s really not. It’s not crazy because things like this actually happen in real life. When code goes live, people work their processes around it, however much its behavior may be goofy or unintended. Once code is in the wild, “right” and “wrong” cease to matter, and the requirements as they existed some time in the past are forgotten. There only is “what is.” Sounds very zen, and perhaps it is.

One of the most common objections when it comes to unit testing comes from developers who work on legacy systems where code is hard to test and no tests exist. They’ll say that they’d do things differently if starting from scratch (which usually turns out not to be true), but that there’s just no tackling it now. And this is a valid objection–it can be very hard to get anything under test. But characterization tests at least remove one barrier to testing: the need for extensive experience writing proper unit tests.

With characterization tests, it’s really easy. Just write a unit test that gets in the vicinity of what you want to document, finagle it until it doesn’t throw runtime exceptions, assert something–anything–and watch the test fail. When it fails, make note of what the expected and actual were, and just change the expected to the actual. The test will now pass, and you can move on. Change some method parameters or variables in other classes or even globals–whatever you have access to and can change without collapsing the system.
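
As a rough sketch of that workflow (the class and method here are made up for illustration):

    [Test]
    public void ProcessOrder_Characterization()
    {
        var processor = new LegacyOrderProcessor();   // hypothetical legacy class

        var status = processor.ProcessOrder(42);

        // First guess -- running this fails with something like "Expected: 0 But was: 7".
        // Assert.AreEqual(0, status);

        // Copy the actual into the expected; the test now passes and documents reality.
        Assert.AreEqual(7, status);
    }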

Through this poking, prodding, and documenting, you’ll start getting a rudimentary picture of what the system does. You’ll also start getting the hang of the characterization test approach (and perhaps unit testing for real as an added bonus). But most importantly, you’ll finally have the beginnings of an automated safety net. There’s no right and wrong per se, but you will start to be able to see when your changes to the system are making it behave differently in ways you didn’t expect. In legacy, different is dangerous, so it’s invaluable to have this notification system in place.

Characterization tests aren’t going to save the day, and they probably aren’t going to be especially easy to write. At times (global state, external dependencies, etc.) they may even be impossible. But if you can get some in place here and there, you can start taking the fear out of interacting with legacy code.

Don’t Take My Advice

A Brief History of My Teeth

When it comes to teeth, I hit the genetic jackpot. I’m pretty sure that, instead of enamel, my teeth are coated in some sort of thin layer of stone. I’ve never had a cavity, and my teeth in general seem impervious to the variety of maladies that afflict other people’s teeth. So this morning, when I was at the dentist, the visit proceeded unremarkably. The hygienist scraped, prodded, and fluoridated my teeth with a quiet efficiency, leaving me free to do little but listen to the sounds of the dentist’s office.

Those sounds are about as far from interesting as sounds get: the occasional hum of the HVAC, people a few rooms over exchanging obligatory inanities about the weather, coughs, etc. But for a brief period of time, the hygienist in the next station over took to scolding her patient like a child (he wasn’t). She ominously assured him that his lack of flossing would catch up to him someday, though I quietly wondered which day that would be, since he was pushing retirement age. I empathized with the poor man because I too had been seen to by that hygienist during previous visits.

The first time I encountered her a couple of years ago, she asked me if I drank much soda, to which I replied that I obviously did since my employer stocked the fridge with it for free. I don’t drink coffee, so if I want to partake in the world’s most socially acceptable upper to combat grogginess, it’s Diet Pepsi or Mountain Dew for me. She tsked at me, told me a story of some teenager whose teeth fell out because he had a soda fountain in his basement, and then handed me a brochure with pictures of people’s mouths apparently ravaged by soda to such a state that I imagine the only possible remedy was a complete head transplant. My 31-year streak of complete imperviousness to cavities in spite of drinking soda was now suddenly going to end as my teeth fell inexorably from my head. That is, unless I repented my evil ways and started drinking water and occasionally tea.

I agreed to heed her advice in the way that one generally does when confronted with some sort of pushy nutjob on a mission–I pretended to humor her so that she would leave me alone. When I got home, I threw out the Pamphlet of Dental Doom and didn’t think about her for the next six months until I came back to the dentist. The first thing that she asked when I came in was whether I was still drinking a lot of soda or not. I was so dumbfounded that I didn’t have the presence of mind to lie my way out of this confrontation. I couldn’t decide whether it’d be creepier if she remembered me from six months ago and remembered that I drank a lot of soda or if she had made herself some kind of note in my file. Either way, I was in trouble. The previous time she had been looking to save the sinful, but now she was just pissed. Like the man I silently commiserated with this morning, she proceeded to spin spiteful tales of the world of pain coming my way.

Freud might have some interesting ideas about a person that likes to stick her fingers in other people’s mouths while trying to scare them, and I don’t doubt that there’s a whole host of ground for her to cover on the proverbial therapist couch, but I’ll speak to the low hanging fruit. She didn’t like that I ignored her advice. She is an expert and I am an amateur. She used her expertise to make a recommendation to me, and I promptly blew it off, which is a subtle slap in the face should an expert choose to view it that way.

It Isn’t Personal

It’s pretty natural to feel slighted when you offer expert advice and the recipient ignores it. This is especially true when the advice was solicited. I recall going along once with my girlfriend to help her pick out a computer and feeling irrationally irritated when we got to the store and she saw one that she liked immediately, losing complete interest in my interpretation of the finer points of processor architectures and motherboard wiring schemes. I can also recall it happening at times professionally when people solicit advice about programming, architecture, testing, tooling, etc. I’m happy–thrilled–to provide this advice. How dare anyone not take it.

But while it’s often hard to reason your way out of your feelings, it’s a matter of good discipline to reason past irrational reactions. As such, I strive not to take it personally when my advice, even when solicited, goes unheeded. It’s not personal, and I would argue it’s probably a good sign that the members of your team may be nurturing budding self-sufficiency.

Let’s consider three possible cases of what happens when a tech lead or person with more experience offers advice. In the first case, the recipient attempts to heed the advice but fails due to incompetence. In the second, the recipient takes the advice and succeeds with it. In the third and most interesting case, the recipient rejects the advice and does something else instead. Notice that I don’t subdivide this case into success and failure. I honestly don’t think it matters in the long term.

In the first case, you’re either dealing with someone temporarily out of their depth or generally incompetent, who might be considered an outlier on the lower end of the spectrum. The broad middle is populated with the second case people who take marching orders well enough and are content to do just that. The third group also consists of outliers, but often high-achieving ones. Why do I say that? Well, because this group is seeking data points rather than instructions. They want to know how an expert would handle the situation, not so that they can copy the expert necessarily, but to get an idea. Members of this group generally want to blaze their own trails, though they may at times behave like the second group for expediency.

But this third group consists of tomorrow’s experts. It doesn’t matter if they succeed or fail in the moment because, hey, success is success, but you can be very sure they’ll learn from any failures and won’t repeat their mistakes. They’re learning lessons by fire and experimentation that the middle-of-the-roaders learn as cargo cult practice. And they’re not dismissing your advice to offend you but rather to learn from it–they earnestly want to understand and assimilate your expertise, just on their own terms.

So when this happens to you as a senior team member/architect/lead/etc., try to fight the urge to be miffed or offended and exercise some patience. Give them some rope and see what they do–they can always be reined in later if they aren’t doing well, but it’s hard to let the rope out if you’ve extinguished their experimental and creative spirit. The last thing you want to be is some woman in a dentist’s office, getting irrationally angry that adults aren’t properly scared of the Cavity Creeps.

Subtle Naming and Declaration Violations of DRY

It’s pretty likely that, as a software developer that reads blogs about software, you’ve heard of the DRY Principle. In case you haven’t, DRY stands for “Don’t Repeat Yourself,” and the gist of it is that there should only be one source of a given piece of information in a system. You’re most likely to hear about this principle in the context of copy-and-paste programming or other, similar rookie programming mistakes, but it also crops up in more subtle ways. I’d like to address a few of those today, and, more specifically, I’m going to talk about violations of DRY in naming things while coding.

Type Name in the Instance Name

Since the toppling of the evil Hungarian regime from the world of code notation, people rarely do things like naming arrays of integers “intptr_myArray,” but more subdued forms of this practice still exist. You often see them appear in server-templated markup. For instance, how many codebases contain text box tags with names/ids like “CustomerTextBox?” In my experience, tons.

What’s wrong with this? Well, the same thing that’s wrong with declaring an integer by saying “int customerCountInteger = 6.” In static typing schemes, the compiler can do a fine job of keeping track of data types, and the IDE can do a fine job, at any point, of showing you what type something is. Neither of these things nor anyone maintaining your code needs any help in identifying what the type of the thing in question is. So, you’ve included redundant information to no benefit.

If it comes time to change the data type, things get even worse. The best case scenario is that maintainers do twice the work, diligently changing the type and name of the variable. The worst case scenario is that they change the type only and the name of the variable now actively lies about what it points to. Save your maintenance programmers headaches and try to avoid this sort of thing. If you’re having trouble telling at a glance what the datatype of something is, download a plugin or a productivity tool for your IDE or even write one yourself. There are plenty of options out there without baking redundant and eventually misleading information into your code.
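
A contrived before-and-after to illustrate (the markup lines in the comments assume a hypothetical ASP.NET Web Forms page):

    public class NamingExamples
    {
        public void Demonstrate()
        {
            // Redundant: the type is baked into the name and will lie if the type ever changes.
            int customerCountInteger = 6;

            // The compiler, the IDE, and your maintainers already know it's an int,
            // so the name is free to describe intent instead.
            int customerCount = 6;

            // The markup equivalent:
            //   <asp:TextBox ID="CustomerTextBox" runat="server" />   repeats "TextBox"
            //   <asp:TextBox ID="CustomerName" runat="server" />      says what it's for
        }
    }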

Inheritance Structure Baked Into Type Names

In situations where object inheritance is used, it can be tempting to name types according to where they appear in the hierarchy. For instance, you might define a base class named BaseFoo, then a child of that named SpecificFoo, and a child of that named EvenMoreSpecificFoo. So EvenMoreSpecificFoo : SpecificFoo : BaseFoo. But what happens if during a refactor cycle you decide to break the inheritance hierarchy or rework things a bit? Well, there’s a good chance you’re in the same awkward position as the variable renaming in the last section.

Generally you’ll want inheritance schemes to express “is a” relationships. For instance, you might have Sedan : Car : Vehicle as your grandchild : child : parent relationship. Notice that what you don’t have is SedanCarVehicle : CarVehicle : Vehicle. Why would you? Everyone understands these objects and how they relate to one another. If you find yourself needing to remind yourself and maintainers of that relationship, there’s a good chance that you’d be better off using interfaces and composition rather than inheritance.
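
For instance, a hierarchy like the one above can leave the parentage out of the names entirely:

    // The "is a" relationships speak for themselves.
    public class Vehicle { }
    public class Car : Vehicle { }
    public class Sedan : Car { }

    // What you don't want:
    // public class CarVehicle : Vehicle { }
    // public class SedanCarVehicle : CarVehicle { }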

Obviously there are some exceptions to this concept. A SixCylinderEngine class might reasonably inherit from Engine and you might have a LoggingFooRetrievalService that does nothing but wrap the FooRetrievalService methods with calls to a logger. But it’s definitely worth maintaining an awareness as to whether you’re giving these things the names that you are because those are the best names and/or the extra coupling is appropriate or whether you’re codifying the relationship into the names to demystify your design.

Explicit Typing in C#

This one may spark a bit of outrage, but there’s no denying that the availability of the var keyword creates a situation where having the code “Foo foo = new Foo()” isn’t DRY. If you practice TDD and find yourself doing a lot of directed or exploratory refactoring, explicit typing becomes a real drag. If I want to generalize some type reference to an interface reference, I have to do it and then track down the compiler errors for its declarations. With implicit typing, I can just generalize and keep going.
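
A minimal sketch of the redundancy in question, with Foo standing in for any old type:

    using System.Collections.Generic;

    public class Foo { }

    public class DeclarationExamples
    {
        public void Demonstrate()
        {
            // Explicit typing repeats the type on both sides of the assignment.
            Foo foo = new Foo();
            List<Foo> foos = new List<Foo>();

            // Implicit typing states it once; if these later become an interface
            // or a different collection type, the declarations ride along for free.
            var alsoFoo = new Foo();
            var moreFoos = new List<Foo>();
        }
    }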

I do recognize that this is a matter of opinion when it comes to readability and that some developers are haunted by the Variant type in VB6 or reminded of dynamic typing in JavaScript, but there’s really no arguing that this is technically a needless redundancy. For the readability concern, my advice would be to focus on writing code where you don’t need the crutch of specific type reminders inline. For the bad memories of other languages, I’d suggest trying to be more idiomatic with the languages that you use.

Including Namespace in Declarations

A thing I’ve seen done from time to time is fully qualifying types as return values, parameters, or locals. This usually seems to occur when some automating framework or IDE goody does it for speculative disambiguation in scoping. (In other words, it doesn’t know what namespaces you’ll have, so it fully qualifies the type during code generation to minimize the chance of potential namespace collisions.) What’s wrong with that? After all, you’re preemptively avoiding naming problems and making your dependencies very obvious (one might say beating readers over the head with them).

Well, one (such as me) might argue that you could avoid namespace collisions just as easily with good naming conventions and organization and without a DRY violation in your code. If you’re fully scoping all of your types every time you use them, you’re repeating that information at every point of use in a file when a single include/using/import statement at the top would suffice. What happens if you have some very oft-used type in your codebase and you decide to move it up a level of namespace? A lot more pain if you’ve needlessly repeated that information all over the place. Perhaps enough extra pain to make you live with a bad organization rather than trying to fix it.
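
Here is a contrived illustration; the MyCompany.Billing namespace and Invoice type are made up for the example:

    namespace MyCompany.Billing
    {
        public class Invoice { }
    }

    // Fully qualified at every use: the namespace is repeated throughout the file,
    // and moving Invoice means touching every one of these.
    namespace MyCompany.Reporting.Verbose
    {
        public class InvoiceService
        {
            public MyCompany.Billing.Invoice GetInvoice(int id)
            {
                return new MyCompany.Billing.Invoice();
            }
        }
    }

    // Stated once with a using directive: one line to change if Invoice ever moves.
    namespace MyCompany.Reporting.Concise
    {
        using MyCompany.Billing;

        public class InvoiceService
        {
            public Invoice GetInvoice(int id)
            {
                return new Invoice();
            }
        }
    }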

Does It Matter?

These all probably seem fairly nit-picky, and I wouldn’t really dispute it for any given instance of one or even for the practices themselves across a codebase. But practices like these are death by a thousand cuts to the maintainability of a code base. The more you work on fast feedback loops, tight development cycles, and getting yourself in the flow of programming, the more that you notice little things like these serving as the record skip in the music of your programming.

When NCrunch has just gone green, I’m entering the refactor portion of red-green-refactor, and I decide to change the type of a variable or the relationship between two types, you know what I don’t want to do? Stop my thought process related to reasoning about code, start wondering if the names of things are in sync with their types, and then do awkward find-alls in the solution to check and make sure I’m keeping duplicate information consistent. I don’t want to do that because it’s an unwelcome and needless context shift. It wastes my time and makes me less efficient.

You don’t go fast by typing fast. You don’t go fast, as Uncle Bob often points out, by making a mess (i.e. deciding not to write tests and clean code). You really don’t go fast by duplicating things. You go fast by eliminating all of the noise in all forms that stands between you and managing the domain concepts, business logic, and dependencies in your application. Redundant variable name changes, type swapping, and namespace declaring are all voices that contribute to that noise that you want to eliminate.