DaedTech

Stories about Software

Betrayed by Your Test Runner

The Halcyon Days of Yore

I was writing some code for Apex this morning and I had a strange sense of deja vu. Without going into painful details, Salesforce is a cloud-based CRM solution and Apex is its proprietary programming language that developers can use to customize their applications. The language is sort of reminiscent of a stripped-down hybrid of Java and C#. The IDE you use to develop this code is Eclipse, armed with an Apex plugin.

The deja vu that I experienced transported me back to my college days in a 200-level computer systems course where the projects assigned to us were the kind of deal where profs/TAs wrote 95% of the code and we filled in the other 5%. I am always grateful to my alma mater for this, since concepts like integration and working on large systems are often what’s most lacking in university CS education. In this particular class, I was writing C code in pico and using a makefile to handle compiling and linking the code on a remote server. This generally took a while because of network latency, server busyness, it being 13 years ago, a lot of files to link, etc. The end result was that I would generally write a lot of code, run make, and then get up and stretch my legs or get a drink or something, returning later to see what had happened.

This is what developing in Apex reminds me of. But there’s an interesting characteristic of Apex, which is that you have to write unit tests, they have to pass, and they have to cover something like 70% of your code before you’re allowed to run it on their hardware in production. How awesome is that? Don’t you sometimes wish C# or Java enforced that on your coworkers that steal in like ninjas and break half the things in your code base with their checkins? I was pumped when I got this assignment and set about doing TDD, which I’ve done the whole time. I don’t actually know what the minimum coverage is because I’ve been at 100% the entire time.

A Mixed Blessing?

One of the first things that I thought while spacing out and waiting for compile was how much it reminded me of my undergrad days. The second thing I thought of, ruefully, was how much better served I would have been back then to know about unit tests or TDD. I bet that could have saved me some maddening debugging sessions. But then again, would I have been better off doing TDD then? And, more interestingly, am I better off doing it now?

Anyone who follows this blog will probably think I’ve flipped my lid and done a sudden 180 on the subject, but that’s not really the case. Consider the following properties of Apex development:

  1. Sometimes when you save, the IDE hangs because files have to go back to the server.
  2. Depending on server load, compile may take a fraction of a second or up to a minute.
  3. It is possible for the source you’re looking at to get out of sync with the feedback from the compiling/testing.
  4. Tests in a class often take minutes to run.
  5. Your whole test suite often takes many, many minutes to run.
  6. Presumably due to server load balancing, network latency and other such factors, feedback time appears entirely non-deterministic.
  7. It’s normal for me to need to close Eclipse via task manager and try again later.

Effective TDD has a goal of producing clean code that clearly meets requirements at the unit level, but it demands certain things of the developer and the development environment.  It is not effective when the feedback loop is extremely slow (or worse, inaccurate) since TDD, by its nature, requires near constant execution of unit tests and for those unit tests to be dependable.

Absent that basic requirement, the TDD practitioner is faced with a conundrum.  Do you stick to the practice where you have red (wait 2 minutes), green (what was I doing again, oh yeah, wait 3 minutes), refactor (oops, I was reading reddit and forgot what I was doing)?  Or do you give yourself larger chunks of time without feedback so that you aren’t interrupted and thrown out of the flow as often?

My advice would be to add “none of the above” to the survey and figure out how to make the feedback loop tighter.  Perhaps, in this case, one might investigate a way to compile/test offline, alter the design, or to optimize somehow.  Perhaps one might even consider a different technology.  I’d rather switch techs than switch away from TDD, myself.  But in the end, if none of these things proves tenable, you might be stuck taking an approach more like one from 20+ years ago: spend a big chunk of time writing code, run it, write down everything that went wrong, and try again.  I’ll call this RDD — restriction driven development.  I’d say it’s to be avoided at all costs.

I give force.com an A for effort and concept in demanding quality from developers, but I’d definitely have to dock them for the implementation since they create a feedback loop that actively discourages the same.  I’ve got my fingers crossed that as they expand and improve the platform, this will be fixed.

Practical Programmer Math: From Booleans to Bits to Bytes

The “Why This Should Matter to You” Story

A few scenarios today:

  1. You don’t understand why the answer to “how many bytes are in a kilobyte” isn’t 1000 (at least, not exactly).
  2. You see if((x & mask) > 7) and wonder why it compiles despite the author apparently forgetting an ‘&’.
  3. You feel a little ashamed because in spite of the fact that your non-programmer friends say that you “think in ones and zeroes”, you don’t actually understand how your code becomes ones and zeroes.

Everything here is symptomatic of a lack of understanding of the way data is represented at the most fine-grained level in computer science and in programming.
It’s possible to understand some programming concepts without understanding the mechanics of how they work internally. After all, you can understand what a list.sort() method does without understanding the sort algorithm being used.

So the general story is that your understanding of the data and data structures that go into programming breaks down at some level. You may understand that a string is a series of characters, but perhaps not that characters are actually integers or that integers are actually bytes or that bytes are actually series of bits or that bits are simply Booleans. And that breakdown in understanding somewhere in the chain leads to situations where you don’t fully understand what your peers are talking about.

Math Background

In the last two posts in this series, I’ve gone through some basic but important points about Boolean algebra, which one might describe as the basic mathematics of truth values. In this system there are two fundamental values to which constants, variables and expressions might evaluate: true and false. Boolean algebra is, in a very real sense, the absolute fundamental building block for everything that we as programmers do, but it may not be entirely obvious as to how.

To get there, let’s leave behind the syntax of Boolean algebra and pick up the syntax of machine language. So forget about “true” and “false” and consider instead 1 and 0, respectively. 1 is true, 0 is false. And further let’s get rid of the conjunction, disjunction and negation operators and consider, in their stead, the more familiar & | ~ operators. (You’ll notice that these are single operators and not the double operators with which you are likely familiar and this is not a typo — there is a method to my madness.) All of the rules from the previous posts still apply so, for instance, the identity law of A | 0 = A applies.

So far we’ve just swapped syntactical operators and constants. But let’s now consider an interesting property of basic arithmetic (not Boolean arithmetic) which is that written numbers are really a sequence of answers to a pre-arranged sequence of questions.

Wat

Okay, let me clarify with an example. Let’s pick a three digit number like 481. There’s nothing particularly special about this number, but let’s think of it in terms of a Q&A session. It goes like this “how many hundreds are in this number?” followed by the answer “four.” “How many tens are in this number?” “Eight.” “And how many ones?” “One.” This is really all there is to numerical representation of natural numbers (discounting the concept of zero, negative numbers, decimals, etc).

In all numbers that we’re comfortable with in normal society, the question always involves powers of 10: 1, 10, 100, etc. And so the answer is always a number 0 through 9. But what if we asked the question in a different way? What if we did it in terms of 7? Or 34? Or… 2? The answers, respectively, would always fall between 0 and 6, 0 and 33… and 0 and 1. That last one is most interesting to us here. The answer will always be between 0 and 1 or, put another way, the answer will always be 0 or 1. Interesting.

Let’s look at an example. The number 10110, using powers of 2, is asking the question, “how many 16’s, 8’s, 4’s, 2’s and 1’s are there” to which the answers come as “1, 0, 1, 1, 0” respectively. We have 1 sixteen, no eights, 1 four, 1 two and no ones for a total of 22. 10110 is the binary representation of 22. That’s probably not new to you regardless of your CS/math/etc background if you’re a programmer. But what might be new to you is just how fundamental this concept is, because it allows translation between physical and “soft”-ware concepts. How so? Because you know what’s actually pretty hard to represent and measure physically? 0 through 9. But you know what isn’t? Booleans. On or off. There or not. Current flowing or nothing. chad punched or not. Magnetic surface raised or flat. Ion charged or neutral. You get the idea.

We can build computers where information is stored as a series of Booleans representing not “true or false”, but “yes or no” and we can still apply all of the same principles, rules, equivalences and concepts, albeit with slightly different semantics. We can take these “yes or no” values and assemble them into integers using our binary question schemes. We can then assemble those integers into ASCII characters, strings, basic programs, complex programs, multi-media files, and you know the rest.

The math behind all of programming is simple. The basis for the whole thing is simply gigantic sequences of Boolean values assembled into ever-larger pieces. The marriage of Boolean logic and counting that is unique to binary encoding means that you know how to assemble the Boolean sequences into meaningful information and also how to manipulate information and store it back, performing operations using identity, domination, negation, and other equivalences (we will get to some interesting optimizations using bit-shifting and other techniques in future posts).

The only missing piece here is how we get from these individual “1 or 0” values, called “bits” to larger values. Well, most of this is arbitrary. For instance, the definition of “byte” is simply “8 consecutive bits,” meaning that a byte can represent one of 256 possible integer values. These different values can also be assigned (again arbitrarily) to non-integral values such as characters. This is the basis for ASCII, a near-universal standard for representing text using integers (and thus binary/bits). This is the mechanism by which the building blocks are assembled.

How it Helps You

In a (hyphenated) word: background-knowledge. For instance, with all of this in mind, it’s relatively easy to get to solutions for the mysteries at the beginning of the post. First up is the weirdness surrounding byte counting using KB, MB, GB, etc. Is a kilobyte a thousand bytes the way a kilogram is a thousand grams? Well, it could be, if you define it the way we’re used to seeing those terms defined. But that isn’t actually what winds up happening. What happens instead is the result of an interesting quirk of powers of two. As it turns out, 2 to the 10th is 1,024, which is roughly a ‘kilo’ of bytes. 2 to the 20th is 1,048,576, which is roughly a ‘mega’ (1 million) of bytes. The same kind of “fuzzy” counting happens with Giga, Tera, etc., all the way up the chain with 2 to the 30th, 40th, etc. So why reuse a prefix that means something else — something otherwise very precise? Personally, I think the adopters of this convention were being cute, and I don’t care for it at all (*dons flame retardant suit), but them’s the breaks. Apparently, I’m not the first to feel this way and others have tried and are trying to do something about it.

What about all that bit mask stuff? Well, that’s now pretty simple to sort out as well. One additional piece of perspective you need is the common convention of adopting “base-16” (hexadecimal) for representing bytes. The reason is that a byte is 8 bits and each 4 bits represents one of 16 possible values. So with base-16, a byte can be written with two values. These take the form of something like 4E, because you get the 0-9 of base-10 that you’re used to and then another 6 values representing 10 through 15, which are written as the characters A through F. It’s much easier to remember a byte as two values than it is to remember 8 bits. I mention all of this because “FF” is identical in value to “11111111” and “F0” is identical to “11110000”.

You don’t see bit masking as much in managed languages, though you might if you get your hands dirty down in the transport layer doing things like socket management. But those coming from a C background are quite familiar with it. If you get a message over the wire, it may come in as a stream of bytes. But when things like network latency and accuracy are issues (or perhaps in embedded environments when space is at a premium) there is a tendency to squeeze every last modicum of value out of the space available. In a web application, representing a number 1 through 10 with data type “int” (which in managed languages will be a 32-bit/4-byte integer) is not really an issue, but in the environments I’ve mentioned, it’s digital blasphemy to waste that space. You only need 4 bits to represent that — the other 28 are wasted.

But the smallest unit typically dealt with in programming convention is the byte. So what to do? Well, enter bit masking. Let’s say I have two speakers on some embedded device that can be set to volume level 1-10 and I want to write a protocol for querying volume level for both. What I would do is define a message that was 1 byte long and have the first 4 bits of the byte represent the volume of one speaker and the second 4 represent the volume of the other. If I wanted just the value of speaker 2, I would want a way to preserve the first 4 bits of the byte I got while clobbering the second 4. Well, enter the bit mask. If I perform the operation int myValue = 0x0F & incomingSpeakerByte, I get what I’m looking for. Why? Well, let’s consider this.

The value 0x0F can be rewritten in binary as 00001111, and the value of the speakers is whatever is coming back. Let’s say that speaker 2 is set to volume level 9, which can be rewritten as 1001. And let’s say that the other speaker is set to volume level 4, which is 0100. That means that both speakers together are represented with the byte 01001001. If I set a local variable (where space is not at a premium) to the byte I’m getting over the wire, it will be equal to 73, and that is nonsense in our 1-10 volume domain. But the bit mask operation performs 00001111 & 01001001 to get the final value of 9 (00001001) correctly. The reason this works is that the “&” operation aligns the two sequences of bits and performs AND on them one by one, recording the value. This is called “bitwise and.” If you think back to the domination and identity laws from the last post, we’re using domination (x & 0 = 0) to clobber the first 4 bits and identity (x & 1 = x) to preserve the second 4, which are the only four that interest us. (Storing the other nibble in a local int is a bit more involved, taking advantage of the aforementioned “bit-shifting,” and beyond the scope of this post.) The long and short of it is that when you see bit masking operations performed, you now know what’s going on — the developer is using the same byte to store multiple values and using this technique to extract the ones he or she needs at the moment.

As to thinking in ones and zeroes, you might not be quite there yet, but this is a step in the right direction. It helps you to understand the fundamental building blocks of software based data and how we get from the physical world to the “soft” representation that we deal with. It also hopefully makes some of the weird-symbol bit manipulation tricks seem a little less like black magic. At the very least, I hope that instead of viewing some of this stuff with suspicion or hopelessness, you now have some basic concepts, terms to google and questions to ask.

Further Reading

  1. Wikipedia on bitwise operations
  2. HowStuffWorks on bits and bytes
  3. How many x in a y calculator
  4. Wiki on basic bit manipulation

A Blog Grows Up: DaedTech Year in Review for 2012

Setting the Stage

Oh, how naive I was even 12 months ago (and I have no doubt 12 months from now I’ll be saying the same thing). But before I get to that, I’ll travel back in time a little further.

The year was 2010 and I had just purchased the domain name daedtech.com along with a hosting plan. I was finishing up my MS in computer science via night school and realized that (1) I would have a lot of free time I didn’t use to have and (2) I would miss having to write and think critically about programming and software in a way that went beyond my 9-5 work. So the DaedTech brand grew out of a decision to use that spare time to freelance, and the DaedTech blog grew out of writings and ramblings I had lying around anyway and the desire to keep writing.

Why am I talking about 2010? Because 2010’s decision gave rise to the 2011 approach to blogging that I had, which was to write a post, publish it, sit back and say “alright interwebs, come drink from the fountain of my insight.” There were a lot of crickets in 2011, needless to say. My blog was really more of a personal journal that happened to be publicly displayed. 2012 was the year I figured out that there was more to blogging than simply generating content.

What’s Happened This Year

If blogging isn’t just about generating content, then what is it about?  I’d say it’s about generating content and then taking the responsibility for getting that content in front of people who are interested in seeing it.  It’s not enough simply to toss some text over the wall — you have to make it visually appealing  (or at least approachable), engaging, accessible, and interactive.  The most successful blog posts are ones that start, rather than end, conversations because they resonate with the community and encourage discussion and further research.

The following is a list of changes I made to the blog and to my approach to blogging this past year, and the results in terms of readership growth and traffic have been pronounced.

  • Installed Google Analytics in order to have granular, empirical data about visitors to different parts of the site
  • Added interactive social media buttons to allow people to like/plus/tweet/etc posts they liked.
  • Made it easier to subscribe to posts via RSS.
  • Overhauled the category and tag scheme.
  • Started announcing new posts via social media.
  • Adopted the practice of writing posts ahead of time and publishing them with a regular cadence (Monday, Wednesday, Friday) instead of popping them off whenever I felt like it.
  • Routinely participated in discussions/comments on others’ blogs instead of just reading them.
  • Introduced Disqus to manage my comments.
  • Enlisted the help of a copy editor.
  • Improved the speed/performance of the site.
  • Switched from feedburner to feedblitz for RSS subscriptions.
  • Developed and/or fleshed out recurring post series (design patterns, practical math, home automation, abstractions).
  • Adopted the practice of routinely including images, code snippets or both to break up monotonous text.

These actions (and probably to some degree just being around longer) have yielded the following results:

  • RSS subscribers have more than tripled.
  • Average daily visits have increased by about 300%.
  • Page Rank has increased from 1 to 3.
  • Trackbacks and mentions from other blogs are routine as compared to previously nonexistent.
  • Comments per post average is up a great deal.
  • I now receive posting requests.
  • DaedTech posts have been ‘syndicated’ on Portuguese (Brazil) and French language sites.
  • Referral traffic now frequently comes from sites like Hacker News and reddit.

As far as being a programmer goes, I’ve increased my experience slightly in the last year. After all, having spent the last 14 years writing code isn’t all that much different than having spent the last 13 years writing code. But having been a blogger for 2 years is much different than having been a blogger for 1 — at the risk of overconfidence, I think I’m starting to get the hang of this thing to some extent.

Lessons Learned

I’ve contemplated for a while doing a post along the lines of “So You Want to be a Dev Blogger,” but have held off, largely because of a feeling along the lines of the one Scott Hanselman describes in his post about being a phony. I may still do a post like that, but I think this is largely that post, framed in terms of what I’ve learned and how it’s humbling to look back at my own naivete rather than “prepare to start gathering the pearls of wisdom that I’m going to drop on you.”

The lessons that I’ve learned and hope to keep applying all come back to the idea that there’s so much more to blogging than simply knowing about programming or being able to write about that knowledge. There are small lessons from a whole smattering of disciplines to be woven in: UX, marketing, SEO, psychology, etc. You don’t need to be an expert in any of these things, but you need at least to be nominally competent. You also need to do a lot of looking around at successful people to understand what they do.  It was by doing this and by talking to other bloggers that I figured out the wisdom of various ideas like all of the social media buttons and the Disqus commenting system.  None of these things is rocket science and they’re certainly within any aspiring blogger’s realm of capability, but a lot of them have that kind of “man, I never would have thought about that” air to them.

Fun Facts

Below are my most popular posts of 2012, and you can see that there is nearly a dead heat between posts that were popular and read a lot when written and posts that draw a lot of Google hits:

  1. Casting is a Polymorphism Fail
  2. How to Keep Your Best Programmers
  3. WPF and Notifying Property Changed
  4. How Developers Stop Learning: Rise of the Expert Beginner
  5. Static and New are Like Inline
  6. Adding a Google Map to Android Application

Here are the countries in which DaedTech is most popular:

  1. USA
  2. United Kingdom
  3. India
  4. Germany
  5. Canada
  6. Australia
  7. France
  8. Belgium
  9. Netherlands
  10. Poland

Here are the sources of the most referrals:

  1. Twitter
  2. reddit
  3. Facebook
  4. Hacker News
  5. Disqus
  6. LinkedIn
  7. Instapaper
  8. Stack Overflow
  9. Google+
  10. Stack Exchange

Last and Not Least

It’s fun to reflect back on the lessons that I’ve learned and the fun that I’ve had blogging. It’s always interesting to look at statistics about, well, anything if you’re a stat-head and analytics nut like me. But the most important thing, and arguably the only thing, that makes a blog is the readership. And so I’d like to take this opportunity while being reflective to sincerely thank you for reading, tweeting, commenting, forwarding, or really even just glancing at the blog every now and then. With all of the changes that I’ve listed above, I’ve set the stage to make readership easier, but it is really you and your readership that are the difference between DaedTech as it exists now and the site as it existed in early 2011 when I was speaking only to an empty room and comment spam bots. So once again, thank you, and may you have a Happy New Year and a great 2013!

Just Starting with JustMock

A New Mocking Tool

In life, I feel that it’s easiest to understand something if you know multiple ways of accomplishing/using/doing/etc it. Today I decided to apply that reasoning to automatic mocking tools for .NET. I’m already quite familiar with Moq and have posted about it a number of times in the past. When I program in Java, I use Mockito, so while I do have experience with multiple mocking tools, I only have experience with one in the .NET world. To remedy this state of affairs and gain some perspective, I’ve started playing around with JustMock by Telerik.

There are two versions of JustMock: “Lite” and “Elevated.” JustMock Lite is equivalent to Moq in its functionality: able to mock things for which there are natural mocking seams, such as interfaces and inheritable classes. The “Elevated” version provides the behavior for which I had historically used Moles — it is an isolation framework. I’ve been meaning to take the latter for a test drive at some point, since the R&D tool Moles has given way to Microsoft “Fakes” as of VS 2012. Fakes ships with Microsoft libraries (yay!) but is only available with VS Ultimate (boo!).

My First Mock

Installing JustMock is a snap. Search for it in Nuget, install it to your test project, and you’re done. Once you have it in place, the API is nicely discoverable. For my first mocking task (doing TDD on a WPF front-end for my Autotask Query Explorer), I wanted to verify that a view model was invoking a service method for logging in. The first thing I do is create a mock instance of the service with Mock.Create<T>(). Intuitive enough. Next, I want to tell the mock that I’m expecting a Login(string, string) method to be called on it. This is accomplished using Mock.Arrange().MustBeCalled(). Finally, I perform the actual act on my class under test and then make an assertion on the mock, using Mock.Assert().

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Execute_Invokes_Service_Login()
{
    var mockService = Mock.Create<ILoginService>(); // service interface name assumed; generics stripped in original
    Target = new LoginViewModel(mockService) { Username = "asdf", Password = "fdsa" };
    Mock.Arrange(() => mockService.Login("asdf", "fdsa")).MustBeCalled();

    Target.Login.Execute(null);

    Mock.Assert(mockService);
}

A couple of things jump out here, particularly if you’re coming from a background using Moq, as I am. First, the semantics of the JustMock methods more tightly follow the “Arrange, Act, Assert” convention as evidenced by the necessity of invoking Arrange() and Assert() methods from the JustMock assembly.

The second thing that jumps out is the relative simplicity of assertion versus arrangement. In my experience with other mocking frameworks, there is a tendency to do comparably minimal setup and have a comparably involved assertion. Conceptually, the narrative would be something like “make the mock service not bomb out when Login() is called and later we’ll assert on the mock that some method called login was called with username x and password y and it was called one time.” With this framework, we’re doing all that description up front and then in the Assert() we’re just saying “make sure the things we stipulated before actually happened.”

One thing that impressed me a lot was that I was able to write my first JustMock test without reading a tutorial. As regular readers know I consider this to be a strong indicator of well-crafted software. One thing I wasn’t as thrilled about was how many overloads there were for each method that I did find. Regular readers also know I’m not a huge fan of that.

But at least they aren’t creational overloads, and I suppose you have to pay the piper somewhere: either I’ll have lots of methods/classes in Intellisense or else I’ll have lots of overloads. In fairness, the overloads haven’t actually been a problem in my eyes; I haven’t explored or been annoyed by them at all. I just saw “+10 overloads” in Intellisense and thought “whoa, yikes!”

Another cool thing that I noticed right off the bat was how helpful and descriptive the feedback was when the conditions set forth in Arrange() didn’t occur:

[Screenshot: JustMock’s exception message when an arranged call is never made]

It may seem like a no-brainer, but getting an exception that’s helpful both in its type and message is refreshing. That’s the kind of exception I look at and immediately exclaim “oh, I see what the problem is!”

Matchers

If you read my code critically with a clean code eye in the previous section, you should have a bone to pick with me. In my defense, this snippet was taken post red-green and pre-refactor. Can you guess what it is? How about the redundant string literals in the test — “asdf” and “fdsa” each appear twice, as the username and password, respectively. That’s icky. But before I pull local variables to use there, I want to stop and consider something. For the purpose of this test, given its title, I don’t actually care what parameters the Login() method receives — I only care that it’s called. As such, I need a way to tell the mocking framework that I expect this method to be called with some parameters — any parameters. In the world of mocking, this notion of a placeholder is often referred to as a “Matcher” (I believe this is the Mockito term as well).

In JustMock, this is again refreshingly easy. I want to be able to specify exact matches if I so choose, but also to be able to say “match any string” or “match strings that are not null or empty” or “match strings with this custom pattern.” Take a look at the semantics to make this happen:

[TestMethod, Owner("ebd"), TestCategory("Proven"), TestCategory("Unit")]
public void Execute_Invokes_Service_Login()
{
    Target = new LoginViewModel(Service) { Username = "asdf", Password = "fdsa" };
    Mock.Arrange(() => Service.Login(
        Arg.IsAny<string>(),
        Arg.Matches<string>(s => !string.IsNullOrEmpty(s))
        )).MustBeCalled();

    Target.Login.Execute(null);

    Mock.Assert(Service);
}

For illustration purposes I’ve inserted line breaks in a way that isn’t normally my style. Look at the Arg.IsAny and Arg.Matches lines. What this arrangement says is “the mock’s Login method must be called with any string for the username parameter and any string that isn’t null or empty for the password parameter.” Hats off to you, JustMock — that’s pretty darn readable, discoverable and intuitive for a reader of this code.

Loose or Strict?

In mocking there is a notion of “loose” versus “strict” mocking. The former is a scenario where some sort of default behavior is supplied by the mocking framework for any methods or properties that may be invoked. So in our example, it would be perfectly valid to call the service’s Login() method whether or not the mock had been setup in any way regarding this method. With strict mocking, the same cannot be said — invoking a method that had not been setup/arranged would result in a runtime exception. JustMock defaults to loose mocking, which is my preference.

Static Methods with Mock as Parameter

Another thing I really like about JustMock is that you arrange and query mock objects by passing them to static methods, rather than invoking instance methods on them. As someone who tends to be extremely leery of static methods, it feels strange to say this, but the thing that I like about it is how it removes the need to context switch as to whether you’re dealing with the mock object itself or the “stub wrapper”. In Moq, for instance, mocking occurs by wrapping the actual object that is the mocking target inside of another class instance, with that outer class handling the setup concerns and information recording for verification. While this makes conceptual sense, it turns out to be rather cumbersome to switch contexts for setting up/verifying and actual usage. Do you keep an instance of the mock around locally or the wrapper stub? JustMock addresses this by having you keep an instance only of the mock object and then letting you invoke different static methods for different contexts.

Conclusion

I’m definitely intrigued enough to keep using this. The tool seems powerful, and usage is quite straightforward, intuitive and discoverable. Look for more posts about JustMock in the future, including perhaps some comparisons and a full-fledged endorsement, if applicable (i.e. I continue to enjoy it), once I’ve used it for more than a few hours.

Merry Christmas!

For all DaedTech readers that celebrate Christmas, here’s hoping yours is Merry. I will be traveling for most of the next week, but will have internet access and time off, so I will most likely have another post or two this week in spite of the holiday.