It was a weird problem, because I’d been hitting that code base from three machines, each with the same version of Visual Studio installed. One of them never had any problems, and the other two always bombed out with this message, forcing me to go to the command line and commit from there, which worked. I have no idea what’s different among these machines or why it would work on one but not the others. Weird.
A long way down in the reply chain, Buck Hodges provided the answer that worked:
@stevehebert@daedtech so, adding the .ide extension to your .gitignore should address the problem on the machines where you are hitting it
Sure enough, I went in and added this to my .gitignore file, and it did the trick. This never would have occurred to me, however, as a possible solution. That whole directory and all of the files in it were created by the IDE/source control apparatus anyway. It’s not like I went in there and created those files, and it’s not like I was using them for anything. I actually would have assumed they would already be in that auto-generated .gitignore file. I mean, this is sort of like the IDE saying to me, “I can’t check in your code because this file I created without your knowledge is being used… by me.” (When troubleshooting, I investigated which process was using the file, and the only one was Visual Studio.)
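For anyone who lands here with the same problem, the fix really is that small. A minimal sketch of the addition, assuming (as in my case) that the offending IDE-generated files carry the .ide extension:

```
# Visual Studio's Git integration generates these and holds them open,
# which blocks commits from the IDE. Don't track them.
*.ide
```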
Nevertheless, I was grateful for the fast response and the fix that worked, so I won’t look a gift horse in the mouth.
In the last post in this series, I covered the essentials of unit testing without diving into the topic in any real detail. The idea was to offer a sort of guerrilla guide to what unit testing is for those who don’t know but feel they should. Continuing down that path and generating material for my upcoming presentation, I’m going to pick up where that introduction left off.
In the previous post, I talked about what unit testing is and isn’t, what its actual purpose is, and what some best practices are. This is like explaining the Grand Canyon to someone who isn’t familiar with it. You now know enough to say that it’s a big hole in the earth that provides some spectacular views and that you can hike down into it, seeing incredible shifts in flora and fauna as you go. You could probably convince someone you’ve been there in a casual conversation, even without having seen it. But you won’t really get it until you’re standing there, speechless. With unit testing, you won’t really get it until you do it and get some benefit out of it.
So, Let’s Write that Test
Let’s say that we have a class called PrimeFinder. (Anyone who has watched my Pluralsight course will probably recognize that I’m recycling the example class from there.) This class’s job is to determine whether or not numbers are prime, and it looks like this:
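(What follows is a representative sketch rather than the verbatim code from the course; the shape of it is what matters.)

```csharp
using System;
using System.Linq;

public class PrimeFinder
{
    public bool IsPrime(int candidate)
    {
        // Compact to a fault: trial division by every integer from 2
        // up to the square root of the candidate.
        return candidate >= 2 && Enumerable
            .Range(2, Math.Max(0, (int)Math.Sqrt(candidate) - 1))
            .All(divisor => candidate % divisor != 0);
    }
}
```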
Wow, that’s pretty dense-looking code. If we take the method at face value, it should tell us whether a number is prime or not. Do we believe the method? Do we have any way of knowing that it works reliably, apart from running an entire application, finding the part that uses it, and poking at it to see if anything blows up? Probably not, if this is your code and you needed my last post to understand what a unit test was. But this is a pretty simple method in a pretty simple class. Doesn’t it seem like there ought to be a way to make sure it works?
I know what you’re thinking. You have a scratchpad, and you copy and paste code into it when you want to experiment and see how things work. Fine and good, but that’s a throwaway effort that means nothing. It might even mislead you once your production code starts changing. And checking might not even be possible if you have a lot of dependencies that come along for the ride.
But never fear. We can write a unit test. Now, you aren’t going to write a unit test just anywhere. In Visual Studio, what you want to do is create a unit test project and have it refer to your code. So if the PrimeFinder class is in a project called DaedTech.Production, you would create a new unit test project called DaedTech.Production.Test and add a project reference to DaedTech.Production. (In Java, the convention isn’t quite as cut-and-dried, but I’m going to stick with .NET since that’s my audience for my talk.) You want to keep your tests out of your production code so that you can deploy without also deploying a bunch of unit test code.
Once the test class is in place, you write something like this, keeping in mind the “Arrange, Act, Assert” paradigm described in my previous post:
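(A minimal sketch in MSTest; the test name and layout are illustrative.)

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PrimeFinderTests
{
    [TestMethod]
    public void IsPrime_Returns_False_For_1()
    {
        var primeFinder = new PrimeFinder();    // Arrange

        bool result = primeFinder.IsPrime(1);   // Act

        Assert.IsFalse(result);                 // Assert
    }
}
```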
The TestMethod attribute is something that I described in my last post. This tells the test runner that the method should be executed as a unit test. The rest of the method is pretty straightforward. The arranging is just declaring an instance of the class under test (commonly abbreviated CUT). Sometimes this will be multiple statements if your CUTs are more complex and require state manipulation prior to what you’re testing. The acting is where we test to see what the method returns when we pass it a value of 1. The asserting is the Assert.IsFalse() line where we instruct the unit test runner that a value of false for result means the test should pass, but true means that it should fail since 1 is not prime.
Now, we can run this unit test and see what happens. If it passes, that means that it’s working correctly, at least for the case of 1. Maybe once we’re convinced of that, we can write a series of unit tests for a smattering of other cases in order to convince ourselves that this code works. And here’s the best part: when you’re done exploring the code with your unit tests to see what it does and convince yourself that it works (or perhaps you find a bug during your testing and fix the code), you can check the unit tests into source control and run them whenever you want to make sure the class is still working.
Why would you do that? Well, it might be that you or someone else later starts playing around with the implementation of IsPrime(). Maybe you want to make it faster. Maybe you realize it doesn’t handle negative numbers properly and aim to correct it. Maybe you realize that the method is written in a way that’s clear as mud and you want to refactor toward readability. Whatever the case may be, you now have a safety net. No matter what happens, 1 will never be prime, so the unit test above will be good for as long as your production code is around–and longer. With this test, you’ve not only verified that your production code works now; you’ve also set the stage for making sure it works later.
Resist the Urge to Write Kitchen Sink Tests
When I talked about a smattering of tests, I bet you had an idea. I bet your idea was this:
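(Sketched here with illustrative numbers, continuing in the same test class as before.)

```csharp
// The "kitchen sink" approach: many cases funneled through one loop.
[TestMethod]
public void Test_A_Bunch_Of_Primes()
{
    var primeFinder = new PrimeFinder();
    var primes = new[] { 2, 3, 5, 7, 11, 13, 17 };

    foreach (int prime in primes)
        Assert.IsTrue(primeFinder.IsPrime(prime));
}
```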
After all, it’s wasteful and inefficient to write a method for each case that you want to test when you could write a loop and iterate more succinctly. It’ll run faster, and it’s more concise from a coding perspective. It has every advantage that you’ve learned about in your programming career. This must be good. Right?
Well, not so much, counterintuitive as that may seem. In the first place, when you’re running a bunch of unit tests, you’re generally going to see their results in a unit test runner grid that looks something like a spreadsheet. Or perhaps you’ll see them in a report. If, when you’re looking at that, you see a failure next to “IsPrime_Returns_False_For_12”, then you immediately know, at a glance, that something went wrong for the case of 12. If, instead, you see a failure for “Test_A_Bunch_Of_Primes”, you have no idea what happened without further investigation. Another problem with the looping approach is that you’re masking potential failures. In the method above, what information do you get if the method is wrong for both 2 and 17? Well, you just know that it failed for something. So you step through in the debugger, see that it failed for 2, fix that, and move on. But then you wind up right back there because there were actually two failures, though only one was being reported.
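The alternative is one small test per case, so that a failure pinpoints itself. For instance (again with illustrative names and numbers):

```csharp
[TestMethod]
public void IsPrime_Returns_True_For_2()
{
    var primeFinder = new PrimeFinder();

    Assert.IsTrue(primeFinder.IsPrime(2));
}

[TestMethod]
public void IsPrime_Returns_True_For_17()
{
    var primeFinder = new PrimeFinder();

    Assert.IsTrue(primeFinder.IsPrime(17));
}
```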
Unit test code is different from regular code in that you’re valuing clarity and the capture of intent and requirements over brevity and compactness. As you get comfortable with unit tests, you’ll start to give them titles that describe correct functionality of the system and you’ll use them as kind of a checklist for getting code right. It’s like your to-do list. If every box is checked, you feel pretty good. And you can put checkboxes next to statements like “Throws_Exception_When_Passed_A_Null_Value” but not next to “Test_Null”.
There are some very common things that new unit testers tend to do for a while before things click. Naming test methods things like “Test_01” and having dozens of asserts in them is very high on the list. This comes from heavily procedural thinking. You’ll need to break out of that to realize the benefit of unit testing, which is inherently modular because it requires a complete breakdown into components to work. If nothing else, remember that it’s, “Arrange, Act, Assert,” not, “Arrange, Act, Assert, Act, Assert, Assert, Assert, Act, Act, Assert, Assert, Act, Assert, etc.”
The gist of this installment is that unit tests can be used to explore a system, understand what it does, and then guard it to make sure the system continues to work as expected even when you or others are in it making changes later. This helps prevent unexpected side effects during later modification (i.e., regressions). We’ve also covered that unit tests are generally small, simple, and focused, following the “Arrange, Act, Assert” pattern. No one unit test is expected to cover much ground–that’s why you build a large suite of them.
In my career, I’ve sort of drifted like a wraith among various technology stacks and platforms, working on web sites, desktop apps, drivers, or even OS/kernel level stuff. Anything that you might work on has its enthusiasts, its peculiar culture, and its best practices and habits. It’s interesting to bop around a little and get some cross-pollination so that you can see concepts that are truly transcendent and worth knowing. In this list of concepts, I might include Boolean logic, the DRY principle, a knowledge of data structures, the publish/subscribe pattern, resource contention, etc. I think that no matter what sort of programmer you are, you should be at least aware of these things as they’re table stakes for reasoning well about computer automation at any level.
Add REST services to that list. That may seem weird when compared with the fundamental concepts I’ve described above, but I think it’s just as fundamental. At its core, REST embodies a universal way of saying, “here’s a thing, and here’s what you can do to that thing.” When considered this way, it’s not so different from “DRY,” or “data structures,” or “publish/subscribe” (“only define something once,” “here are different ways to organize things,” and “here’s how things can do one-way communication,” respectively). REST is a powerful reasoning concept that’s likely to be at the core of our increasing connectedness and our growing “internet of things.”
So even if you write kernel code or Winforms desktop apps or COBOL or something, I think it’s worth pausing, digressing a little, and taking a very shallow dive into how this thing works. It’s worth doing once, quickly, so you at least understand what’s possible. Seriously. Spend three minutes doing this right now. If you stumbled across this on Google while looking for an introduction to Web API, skim no further, because here’s how you can create your very own REST endpoint with almost no work.
With just two tools, Visual Studio and Fiddler, you’re going to create a REST web service, run it in a server, make a valid request, and receive a valid response. This is going to be possible and stupid-easy by virtue of a framework called Web API.
Fire up Visual Studio and click “New->Project”.
Select “Web” under Visual C# and then choose “ASP.NET MVC 4 Web Application”.
Now, choose “Web API” as the template and click “OK” to create the project.
You will see a default controller file created containing this class:
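It looks essentially like this (reproduced from the stock MVC 4 Web API template, give or take namespace and usings):

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class ValuesController : ApiController
{
    // GET api/values
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    // GET api/values/5
    public string Get(int id)
    {
        return "value";
    }

    // POST api/values
    public void Post([FromBody]string value)
    {
    }

    // PUT api/values/5
    public void Put(int id, [FromBody]string value)
    {
    }

    // DELETE api/values/5
    public void Delete(int id)
    {
    }
}
```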
Hit F5 to start IIS Express and your web service. It will launch a browser window that takes you to the default page and explains a few things to you about REST. Copy the URL, which will be http://localhost:12345, where 12345 is the local port that the server is running on.
Now launch Fiddler and paste the copied URL into the URL bar next to the dropdown showing “GET”, and add api/values after it. Go to the request header section, add “Content-Type: application/json”, and press Execute (near the top right).
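The composed request amounts to something like this (with 12345 standing in for whatever port IIS Express picked for you):

```
GET http://localhost:12345/api/values HTTP/1.1
Host: localhost:12345
Content-Type: application/json
```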
A 200 result will appear in the results panel at the left. Double-click it and you’ll see a little tree view with a JSON node and “value1” and “value2” under it as children. Click on the “Raw” view to see the actual text of the response.
What you’re looking at is the data returned by the “Get()” method in your controller as a raw HTTP response. If you switch to “TextView”, you’ll see just the JSON ["value1","value2"], which is what this thing will look like to most JSON-savvy parsing tools.
So what’s all of the fuss about? Why am I so enamored with this concept?
Well, think about what you’ve done here without actually touching a line of code. Imagine that I deployed this thing to https://www.daedtech.com/api/values. If you want to know what values I had available for you, all you’d need to do is send a GET request to that URL, and you’d get a bare-bones response that you could easily parse. It wouldn’t be too much of a stretch for me to get rid of those hard-coded values and read them from a file, or from a database, or the National Weather Service, or from Twitter hashtag “HowToGetOutOfAConversation,” or anything at all. Without writing any code at all, I’ve defined a universally accessible “what”–a thing–that you can access.
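To make that concrete, here’s a sketch of the file-backed version; the file name and App_Data location are assumptions for illustration:

```csharp
// Hypothetical tweak to the controller: read the values from a text
// file (one value per line) instead of hard-coding them.
public IEnumerable<string> Get()
{
    string path = System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data/values.txt");
    return System.IO.File.ReadAllLines(path);
}
```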
We generally think of URLs in the context of places where we go to get HTML, but they’re rapidly evolving into more than that. They’re where we go to get a thing. And increasingly, the thing that they get is dictated by the parameters of the request and the HTTP verb supplied (GET, POST, etc.–I won’t get too detailed here). This is incredibly powerful because we’re eliminating the question “where” and decoupling “how” from “what” on a global scale. There’s only one possible place on earth you can go to get the Daedtech values collection, and it’s obviously at daedtech.com/api/values. You want a particular value with id 12? Well, clearly that’s at daedtech.com/api/values/12–just send over a GET if you want it. (I should note you’ll just 404 if you actually try these URLs.)
So take Web API for a test drive and kick the tires. Let the powerful simplicity of it wash over you a bit, and then let your mind run wild with the architectural possibilities of building endpoints that can talk to each other without caring about web server, OS, programming language, platform, device-type, protocol setup and handshaking, or really anything but the simple, stateless HTTP protocol. No matter what kind of programming you do at what level, I imagine that you’re going to need information from the internet at some point in the next ten years–you ought to learn the basic mechanics of getting it.
I was setting up to give a presentation the other day when it occurred to me that Nuget was the perfect tool for my needs. For a bit of background, I consider it of the utmost importance to tell a story while you present. Things I’m not fond of in presentations include lots of slides without any demonstration and snapshots of code bases in various stages of done. I prefer a presentation where you start with nothing (or with whatever your audience has) and you get to some endpoint, by hook or by crook. And Nuget is an excellent candidate for whichever of “hook” and “crook” means “cheating.” You can get to where you’re going by altering your solution on the fly as you go, but without all of the umming and hawing of “okay, now I’ll right click and add an assembly reference and, oh, darnit, what was that fully qualified path again, I think–oops, no not that.”
In general, I’ve decided to set up an internal Nuget feed for some easier a la carte development, so this all dovetails pretty nicely. It made me stop and think that, given how versatile and helpful a tool this is, I should probably document it for anyone just getting started. I’m going to take you through creating the simplest imaginable Nuget package, but with the caveat that this one adds a file to your solution (as opposed to other tutorials you’ll see, which often involve marshaling massive amounts of assembly references or something). So, here are the steps (on Windows 7 with VS2012):
Go to CodePlex and download the Package Explorer tool.
Install Package Explorer, launch it, and choose “Create a New Package”.
In the top left, click the “Edit Metadata” icon.
Fill out the metadata, ensuring that you fill things in for the bold (required) fields.
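For what it’s worth, that metadata form is just a friendly face on the package’s nuspec manifest. Filled out, it amounts to something like this (the id and values here are made up for illustration):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>DaedTech.ReadmeDemo</id>
    <version>1.0.0</version>
    <authors>YourNameHere</authors>
    <description>A trivial content-only package that adds a readme to the target solution.</description>
  </metadata>
</package>
```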
Click the green check mark to exit the metadata screen and then right-click in the “Package Contents” pane. Select “Add Content Folder.” The “Content” folder holds a structure of files that mimics what Nuget will put into your solution.
Now, right click on the “Content” folder and select “Add New File.” Name the file “Readme.txt” and add a line of text to it saying “Hello.” Click the save icon and then the “back” arrow next to it.
Now go to the file menu at the top and select “Save,” which will prompt you for a file location. Choose the desktop and keep the default naming of the file with the “nupkg” extension. Close the package explorer.
Create a new solution in Visual Studio into which we’re going to import the Nuget package.
Go to Tools->Library Package Manager->Manage Packages for Solution to launch the Nuget GUI, and click “Settings” in the bottom right, because we’re going to add the desktop as a Nuget feed.
In this screen, click the plus icon, and then give the package source a name and enter the file path for the desktop by way of configuration.
Click “OK” and you should see your new Nuget feed listed underneath the official package source. If you click it, you should see your test package there.
Click “Install” and check out what happens.
Observe that source (or text or whatever) files are installed to your solution as if they were MSI installs. You can install them, uninstall them, and update them all using a GUI. Source code is thus turned into a deliverable that you can consume without manual propagation of the file or some kind of project or library include of some “CommonUtils” assembly somewhere. This is a very elegant solution.
It shouldn’t take a ton of extrapolation to see how full of win this is in general, especially if you create a lot of new projects, as in a consulting shop or some kind of generalized support department. Rather than a tribal-knowledge mess, you can create an internal Nuget feed and set up a sort of code bazaar where people publish things they think are useful; tag them with helpful descriptors; and document them, allowing coworkers to consume the packages. People are responsible for maintaining and helping with packages that they’ve written. Have two people do the same thing? No worries–let the free market sort it out in terms of which version is more popular. While that may initially seem like a waste, the group is leveraging competition to improve design (which I consider to be an intriguing subject, unlike the commonly embraced anti-pattern, “design by committee,” in which people leverage cooperation to worsen design).
But even absent any broader concerns, why not create a little Nuget feed for yourself, stored in your Dropbox or on your laptop or whatever? At the very least, it’s a handy way to make note of potentially useful things you’ve written, polish them a little, and save them for later. Then, when you need them, they’re a click away instead of a much more imposing “oh, gosh, what was that one thing I did that one time where I had a class kinda-sorta like this one…” away.
I’m not particularly interested in marketing principles in the commercial sense of the word (though I find the psychology of argumentation and persuasion to be fascinating), so please excuse any failed parallels in advance. Today, I want to talk about the concept of mind share, but to apply it to the life of a workaday developer.
For those not familiar with the concept, mind share is the awareness that a consumer has of a particular product. For instance, if I say “smart phone”, the first things that pop into your head are probably “iPhone”, “Android”, and “Blackberry”, perhaps in exactly that order. If that’s the case, iPhone has a larger mind share from your perspective as a consumer than Blackberry or Android.
Another concept that comes into play is what the Wikipedia article on mind share refers to as the “evoked set”. This is the set of items that you’ll think of at all without some kind of research or prompting. In our example above, you didn’t think of Windows Mobile, and now that you read the name, you probably think, “oh yeah, them.” If that’s the case, your evoked set is the first three, and Windows Mobile is out in the cold.
But let’s come back to this later.
A Modest Proposal
The other day, I happened to overhear the substance of a code review. The code was a relatively minor set of changes, and so the suggested fixes and changes were also relatively minor and unremarkable, with one exception that was interesting because of its newness to me. The reviewer requested that the developer use the Visual Studio utilities “Sort Usings” and “Organize Usings”. For those not familiar with .NET, this is the Java equivalent of sorting your package imports or the C++ equivalent of sorting/organizing #includes. The only difference is that in C#/.NET, this is functionally useless from the compiled-code perspective. That is, C# took a lesson from its counterparts and had its compiler take care of this housekeeping. From a developer’s point of view, it potentially has ramifications only in terms of additional intellisense overhead.
Still, this struck me on the surface as a good practice, albeit one I had never really considered. I suppose that unused using statements are a form of dead code, having intellisense perform better is a mild plus, and sorting them is probably… nice, I guess… for those who ever inspect the using statements. I am not one of those people — I never write them because of Ctrl-Period, and I never look at them. I used to remove them because of the CodeRush issues list for a file, but I tend to turn that off since it tends to unceremoniously remove the Linq extension methods and leave me with non-compiling code (fingers crossed for a fix in a future version).
Back to the story: the reviewer then went on to state that this would be required to ‘pass’ any future code reviews that he did. In spite of the apparently tiny benefit conferred by this practice, something about this proclamation seemed a little off and problematic to me. But it slipped out of my mind in favor of more pressing matters until I was going through the process of promoting some code in a different scenario the other day, and suddenly the unfocused, nagging issue leaped into full view for me.
Anatomy of a Code Promotion
Generally speaking, a developer’s task is a simple one: implement features, fix any defects. So, if you’re given a task to implement, you implement it, take a moment to pat yourself on the back and move on. And, that’s what I did during my epiphany. Except, er no, wait.
I was using Rational ClearCase, so what I actually had to do was finish the change and then check in my code. From there, to promote it, I had to open ClearCase Explorer, find my view, right-click, and say “Deliver from Stream to Default”. From there, I had to launch ClearCase Project Explorer, find the integration stream, and click “Make a Baseline”. The policy is to name the baseline the default appended with an underscore and my login name. After that, I had to recommend the baseline. Ugh (and double ugh for ClearCase as source control). Suddenly my life as a developer is not so simple. That’s no longer a one-step process, but some number of steps greater than one, depending on our standards for granularity.
But wait, crap. I didn’t run all the unit tests to make sure the build wasn’t broken (actually, I tend to be fanatical about that, but I’m making a rhetorical point). I also didn’t run StyleCop to make sure I was conforming to the set of coding standards we have, nor did I run my other static analysis tools to check for simple mistakes. Alright, so time to do all that and re-deliver.
But wait. ClearCase forces a rebase operation prior to code delivery (the equivalent of an SVN update). And it’s generally good practice to run all of your tests and analysis tools prior to and after a rebase, to make sure that you know whether you are responsible for any broken tests, standards violations, etc., or whether you inherited them. Man, this is getting intense.
So alright, the promotion process is: check your code for correctness, run all tests, run all static analysis. Then rebase and do all of that again. Then follow that whole rigmarole about delivery and making baselines. My goodness — I haven’t even considered that I might have forgotten to add a file, so I should probably grab a clean copy of everything from source control and rebuild and, if anything breaks, re-deliver. And I haven’t even mentioned the possibility of handling merge conflicts.
Oh, and I now need to sort and organize my using statements. That seemed like a decent idea a few paragraphs ago, but now…
(I realize that there are optimizations that could be made to this particular process — different source control, continuous integration, etc. Point is, just about every process has some warts and, even if it doesn’t, managing concurrent changes and standards in a group environment requires more thought than we realize as we get accustomed to the process.)
In the face of all of this stuff, the mind share metaphor begs consideration. If fixing our defects and implementing our features isn’t the iPhone, we’re in big trouble. From there, running unit tests and static analysis tools probably ought to be Android and Blackberry, but they may get pushed out a bit in favor of the particulars of wrangling the source control system and resolving merge conflicts, depending on the source control system and merge tool.
As we add more things, we have two options. We can either reduce the mind share of existing things in our evoked set, or we can spend time and energy expanding our evoked set. So, if we want to hold our efficiency of feature implementation constant, we’re going to have to leave some things out of our mind share (and then perhaps be reminded of them at code reviews or with exasperated emails from team members with different evoked sets than ours, which we trade for exasperated emails of our own at things missing from their evoked sets). Alternatively, if we want to expand our mind share, it’s going to come at the cost of a steep learning curve for all newer members and decreased efficiency across the board as we go through our rote checklist prior to each delivery.
Getting It Right
I don’t care for either of these options. So, as the number of sticky notes and strings around our fingers needed to promote code grows, I have two suggestions:
1. Don’t sweat the small stuff.
2. Automate as much as possible.
In the case of “organize and sort usings”, I’d offer item (1). Something that provides no benefit to the end product and questionable benefit to the development environment ought not to occupy our mind share as developers. But in case I am just flat-out wrong in my assessment of the benefits and detriments, I’d offer option (2). Given that this is already implemented in Visual Studio, a small plugin running on the build machine could ensure that the using statements in all checked-in code are always optimized, without adding to the maze of things developers have to remember.
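To give a sense of how small such a transform could be, here’s a toy sketch. It assumes the using directives form one contiguous block at the top of the file and ignores wrinkles like namespaces and conditional compilation:

```csharp
using System;
using System.IO;
using System.Linq;

class UsingSorter
{
    static void Main(string[] args)
    {
        // args[0]: path to a .cs file whose leading using block gets sorted.
        var lines = File.ReadAllLines(args[0]).ToList();

        // Count the contiguous run of using directives at the top of the file.
        int usingCount = lines.TakeWhile(l => l.TrimStart().StartsWith("using ")).Count();

        var sortedUsings = lines.Take(usingCount).OrderBy(l => l.Trim(), StringComparer.Ordinal);
        File.WriteAllLines(args[0], sortedUsings.Concat(lines.Skip(usingCount)));
    }
}
```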
And, to expand on this, I’d suggest that we move as many things as possible into the (2) camp in general, if we value them. Things like coding standards, static analysis, and best practices do matter, so why not enforce them with automatic, gated check-ins or code transforms on the build machine? That ensures they’re always right, without forcing up-front memorization and, more importantly, without distraction from the most important problem: “implement features and fix defects.” The closer to 100% of our mind share that iPhone occupies, the better for all project stakeholders.