DaedTech

Stories about Software


Synergy

One of the things that I take for granted and figured that I probably ought to document is a tool called Synergy (or now, Synergy+, apparently). Found here, this is a tool that is perfect for someone who wants to control multiple computers on a desktop without having to deal with the inconvenience of swapping between multiple keyboards and/or mice. In this fashion, it is a "KM solution" (as opposed to the typical KVM switch). Here is a picture of what my office looks like:

My home office

The left monitor and center monitor are attached to a computer running Windows XP. The right monitor is running Fedora and attached to my home server, and the netbook is running the Ubuntu netbook remix. It’s hard to depict, but I’m controlling all three computers with one keyboard and mouse. When I move the mouse off the left edge of the left-most monitor, it goes to the center monitor. This is done by Windows XP itself. When I move the mouse off the right edge of the center monitor, it goes to the right-most monitor running Fedora, and I can then operate that machine as I normally would. When I move the mouse off the bottom of the Fedora monitor, it goes to the netbook.

The netbook is something that I don’t always have there, and Synergy is ‘smart’ enough to figure out how to handle this. If a machine isn’t actively participating, it removes the transitional border. So, if I turn the netbook off, moving the mouse to the bottom of the Fedora server’s screen doesn’t put me in empty space — it behaves as it would without Synergy, for that particular edge. (Things can get a little interesting if I take the netbook elsewhere in the house and forget to turn off Synergy.)

So, how is this accomplished? Well, Synergy must be installed on all participating computers. This is easy enough. For Windows, there is an installer that can be downloaded. For Ubuntu, a "sudo apt-get install synergy" should do the trick. For Fedora, "yum install synergy" (with elevated privileges) should do the same. You can do it on any flavor of Linux you like and on a Mac as well.
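For reference, here are those package-manager commands collected in one place; treat them as a sketch, since the package name can vary slightly from release to release:

# Ubuntu / Debian-based machines
sudo apt-get install synergy

# Fedora (run with elevated privileges)
sudo yum install synergy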

Next, you need to pick which machine will be the server and which machine(s) will be the clients. As a rule of thumb, I would suggest using the machine that you use the most and/or that is on and active the most as the server. This will be the computer to which the keyboard and mouse are actually, physically connected. In my case, this is the Windows XP computer (I don't use the server nearly as often, and anything that gets me out of using laptop keyboards is great). Once installed, starting the clients on other machines is easy. For instance, on both Linux machines, I just type "synergyc" from the command line, and they're all set to be controlled. The Linux installations have a graphical tool like Windows as well, but, as I've mentioned before, I tend to favor the command line, particularly in Linux.
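Depending on the Synergy version, synergyc may want the server's address passed explicitly. If a bare "synergyc" doesn't connect on your setup, pointing it at the server by hostname or IP should (the IP below is purely illustrative):

synergyc achilles         # connect to the Synergy server by hostname
synergyc 192.168.1.10     # or by IP address, if name resolution is flaky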

The server setup is slightly more complicated, but not bad. If you start the Synergy graphical configuration screen, you will have some navigation options. Select "Share this computer's keyboard and mouse (server)" and then click the Configure button below it. You will see a screen like this:

Configure Synergy

The first thing to do is to add the screens using the "+" button under "Screens." This is pretty straightforward. Ignoring the options (you can play with these later), it simply wants the workgroup or domain name of the computer to which the monitor is attached. If you have a hodgepodge of OSes and haven't used something like Samba to resolve the naming, you can use the machines' IP addresses and then, if you want, give them aliases to make things a little more descriptive.

In my example, the Windows PC is called “achilles,” the Fedora server is called “Homeserver,” and the netbook is called “echo.” As you can see, relationships are defined between each monitor. X is to the left of Y, and Y is to the right of X, for instance. It is important to specify both sides of the relationship. If you only specify one, the other is not inferred. So if I just specified that Homeserver was to the right of achilles, Synergy would move my mouse from the Windows machine to the Fedora server but never move it back.
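For those who prefer text files to the GUI, the same layout can be expressed in Synergy's plain-text configuration format and handed to the server via "synergys -c synergy.conf". The sketch below is a reconstruction for the machines named above, not a dump of my actual file; note that every link is declared from both sides, per the point about relationships not being inferred:

section: screens
    achilles:
    Homeserver:
    echo:
end

section: links
    achilles:
        right = Homeserver
    Homeserver:
        left = achilles
        down = echo
    echo:
        up = Homeserver
end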

Notice the percentages at the bottom. These allow you to specify that only part of the screen passes through. You can use this if, for example, one monitor is substantially larger than the one next to it, and you want to create a logical flow. You might specify that only half of the larger monitor passes through to its neighbor. It is, in fact, possible to create some very weird and convoluted configurations if you choose. You could do something like in one of those crime drama shows where the techie character has eight monitors with two or three different rows vertically.

The real, practical use here is that you can dispense with multiple keyboards and mice, and you can do it regardless of the OS running on the various machines. The only requirements are that the machines be connected to the same local network and that each machine have its own monitor/display. If a participating machine loses connectivity while Synergy is running, control of that machine simply stops working.

And, in this same vein, a word of caution. Synergy works by sending your mouse gestures and typed text across the network, in plain text. That is to say, you wouldn’t want to set this up in a hotel or coffee shop because someone with a sniffer could read everything that you’re typing, including your login passwords for anything to which you sign in. So this is mainly a product for your home network, assuming that you don’t have some adversarial techie in the house interested in spying on you. I’m not specifically aware of any plans on the part of the tool’s authors to offer an encrypted version, but if that does happen, I will likely be excited enough to post about it.

If anyone is interested in more specifics of setup and has questions, feel free to post them.


Old Linux Computer with New Wireless Encryption

As I’ve previously mentioned, one of the things that I spend a good bit of time doing is home automation. For prototyping, I have a bunch of very old computers that I’ve acquired for free. These are scattered around the house (much to the dismay of anyone with a sense of decor) and I use them as dumb terminals for interfacing with the home automation system. That is, I have a client/server setup, and these guys put the “thin” in thin-client.

Now, because many of these were made in the 90s, I don't exactly put Windows 7 (or even Windows XP) on them. Most of them have between 64 and 256 megs of RAM and Pentium-series Intel processors from that era. So, the natural choice is Linux. I have had luck using Damn Small Linux (DSL), Tiny Core Linux, Slackware, and Ubuntu. Since most of these are not for the faint of heart or anyone who isn't comfortable editing low-level configuration files in pico or vi, I'll focus this post on Ubuntu (more specifically, Xubuntu, since the standard window manager is a little much for these machines).

Because of the nature of what I'm doing (allowing machines on my network to control things in the house like lights and temperature), network security is not optional. It has to be there, it has to be close to bulletproof, and I can't simply say "the heck with it — I'll compromise a little on the settings to make configuration easier." So I use a WPA-PSK encryption scheme with a non-broadcasting SSID.

Now, my house has three stories including the basement, and while I enjoy home improvement, I'm not such a glutton for punishment that I've retrofitted Cat-5 connections in every room. Getting these old Linux machines connected to the network is an interesting problem that I've ultimately solved by buying a series of Belkin wireless USB dongles. For the most part, these old computers do have a couple of USB ports. So what I'll document is how I've had success setting up the networking.

The first thing I do after installing the Xubuntu OS is to go to my main office computer and download ndiswrapper. This link is helpful, as it points you to where you can download the Debian packages to install from the command line: Ndiswrapper. Ubuntu generally assumes, for the purposes of its package manager (which, as an aside, makes me smile every time someone says that the Android/Apple walled app garden is a newfangled concept), that you have an internet connection or that the CD has the packages you need. If I had the former, this post would be moot, and the nature of the slimmed-down Xubuntu install precludes the latter.

So, you can find the version of ndiswrapper that matches your release and grab it from the mirrors. From there, you can install the packages by following the instructions at the link for using dpkg from the command line. After doing so, you will be equipped with everything you need from ndiswrapper. Ndiswrapper is a cool little utility that essentially inserts a logical layer between the drivers and Linux, allowing Linux to use Windows drivers as if they were native to that OS. The FOSS folks are generally cool this way — since hardly anyone writes drivers with Linux in mind, they bend over backwards to be compatible.
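Having copied the downloaded .deb files over (a USB stick works fine), the dpkg step looks something like the following; the package filenames here are assumptions and will vary by Ubuntu release:

# install the ndiswrapper packages grabbed from the mirrors
sudo dpkg -i ndiswrapper-common_*.deb
sudo dpkg -i ndiswrapper-utils-1.9_*.deb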

Once you have ndiswrapper installed, the next thing to do is to grab the CD that came with the Belkin dongle and pop it into the Linux machine. Mount the CD (if it doesn’t automount — I tend to do all this from TTY4 rather than the UI because when you only have a few hundred meg of RAM, the UI is a little slow) and navigate to the folder containing the .INF file. If you’re doing anything like I am, this is going to be inside of a folder with a name like WinXP2000. The key thing here is to be sure that you find the driver that matches your processor architecture — probably i386. This can easily be accomplished if you know what version of Windows came installed on the machine before you wiped it to put Linux on. If the machine didn’t initially come with Windows, you probably know what you’re doing anyway.

From here, you can execute a "sudo ndiswrapper -i {yourfile}.inf". This registers the driver with the ndiswrapper utility, and ndiswrapper should take care of loading it on the next and any subsequent reboots. While you're at it, you may as well reboot now and get the driver loading. If you're feeling intrepid, you can try restarting the networking service to see if you start to connect, but I make no guarantees that this will work.
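As a sanity check before (or instead of) rebooting, something like the following sequence should work; the .inf filename is just an example, and the -m step, which writes a modprobe alias for the wireless interface, may not be strictly necessary on every distribution:

sudo ndiswrapper -i rt73.inf    # install the Windows driver (example filename)
ndiswrapper -l                  # should report the driver installed and the hardware present
sudo ndiswrapper -m             # write the modprobe alias so the module loads at boot
sudo modprobe ndiswrapper       # load the module now rather than waiting for a reboot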

Once you've rebooted, Linux should recognize the driver, but you won't be connecting to your network. I'm not sure off the top of my head what it loads for default settings, but it sure isn't a supplicant configured for encrypted access to your network. So now, I edit my /etc/network/interfaces file to look like the following:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
address {machine's IP address}
gateway {router's IP address}
dns-nameservers {dns -- probably your router's IP again}
netmask 255.255.255.0
wpa-driver wext
wpa-ssid {network SSID}
wpa-ap-scan 2
wpa-proto WPA RSN
wpa-pairwise TKIP CCMP
wpa-group TKIP CCMP
wpa-key-mgmt WPA-PSK
wpa-psk {connection key}

If you fill in your own info for the {}, you should be set to go. This will configure you as a supplicant (connecting client) to a network with WPA-PSK, static rather than DHCP addresses, and non-broadcasting status (though this doesn't really matter on Linux — iwlist, the Linux wireless utility, sees networks whether or not they broadcast). And, best of all, it will do all of this when you boot, since it's part of the interfaces file. No adding things to rc.local or your login script like in the old days.
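With the file saved, you can either reboot or bounce the interface by hand. These are the standard Debian/Ubuntu networking commands and should apply as-is, though the interface name (wlan0) is whatever your dongle actually registers as:

sudo ifdown wlan0                  # harmless error if the interface wasn't up yet
sudo ifup wlan0                    # bring it up using the new interfaces stanza
iwconfig wlan0                     # verify the ESSID and access point association
ping -c 3 {router's IP address}    # confirm basic connectivity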

The only extra thing here is generating your PSK. That is a little beyond the scope of what I’m explaining here, but if there is some interest in the comments for this post, I can create a follow up explaining how to do that.
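For the impatient, though, the standard wpa_passphrase utility (it ships with wpa_supplicant's tools) will derive the hex key from your SSID and passphrase; consider this a pointer rather than the promised walkthrough:

wpa_passphrase {network SSID} {passphrase}    # the psk= line of the output is what goes after wpa-psk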

I’m not sure how many people are fellow enthusiasts of re-appropriating old clunker machines to do cool, new things, but I hope this helps someone, as these sorts of configuration issues can be maddening.


Incorporating MS Test unit tests without TFS

My impression (and somebody please correct me if I’m wrong) is that MS Test is really designed to operate in conjunction with Microsoft’s Team Foundation Server (TFS) and MS Build. That is, opting to use MS Test for unit testing when you’re using something else for version control and builds is sort of like purchasing one iProduct when the rest of your computer-related electronics are PC or Linux oriented: you can do what you like with it, provided you do enough tinkering, but in general, the experience is not optimal.

As such, I’m posting here the sum of that tinkering that I think I have been able to parlay into an effective build process. I am operating in an environment where unit test framework, version control, and build technology are already in place, and mine is only to create a policy to make them work together. So, feedback along the lines of “you should use NUnit” is appreciated, but only because I appreciate anyone taking the time to read the blog: it won’t actually be helpful or necessary in this circumstance. MS Test is neither my choice, nor do I particularly like it, but it gets the job done and it isn’t going anywhere at this time.

So, on to the helpful part. Since I'm not using MS Build and I'm not using TFS, I'm more or less restricted to running the unit tests in two modes: through Visual Studio or from the command line using MSTest.exe. If there is a way to have a non-MS-Build tool drive Visual Studio's IDE, I am unaware of it, and, if it did exist, I'd probably be somewhat skeptical (but then again, I'm a dyed-in-the-wool command line junkie, so I'm not exactly objective).

As such, I figured that the command line was the best way to go and looked up the command line options for MS Test. Of particular relevance to the process I'm laying out here are the testcontainer, category, and resultsfile switches. I also use the nologo switch, but that seems something of a given, since there's really no reason for a headless build machine to be advertising for Microsoft.

Testcontainer allows specification of a test project DLL to use. Resultsfile allows specification of a file to which the results are dumped in XML format (so my advice is to append .xml to the end). And the most interesting one, category, allows you to filter the tests based on metadata defined in the attribute header of the test itself. In my instance, I'm using three possible categories to describe tests: proven, unit, and integration.
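Put together, a single invocation ends up looking something like the line below; the project and file names are placeholders, and the batch script later in this post builds exactly this sort of command:

MSTest.exe /nologo /testcontainer:MyProject.Test.dll /category:"Unit&Proven" /resultsfile:TestResults\MyProject.TestResults.xml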

The default when you create a test in MS Test using, say, the Visual Studio code snippet "testc" (type "testc" and then hit tab) is the following:

[TestMethod]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

Excusing the horrendous practice of testing whether true is true, you'll notice that the attribute tags are empty. This is what we want, because this test has not yet been promoted to be included with the build. The first thing that I'll do is add an "Owner" tag, because I believe that it's good practice to sign unit tests; that way, everyone can see who owns a failing unit test and has a good contact point for investigating why it's broken. This is done as follows:

[TestMethod, Owner("SomeoneOtherThanErik")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

I’ve signed this one as somebody else because I’m not putting my name on it. But, when you’re not kidding around or sandbagging someone for your badly written test, you probably want to include your actual name.

The next step is the important one, in which you assign the test a category or multiple categories, as applicable. In my scenario, we can pick from "unit," "integration," and "proven." "Unit" is assigned to tests that exercise only the class under test. "Integration" is assigned to tests that test the interaction between two or more classes. "Proven" means that you're confident that if the test is broken, it's because the system under test (SUT) is broken and not just that the test is poorly written. So, I might have the following:

[TestMethod, Owner("SomeoneOtherThanErik")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Assert.IsTrue(true);
}

[TestMethod, Owner("Erik"), TestCategory("Proven"), TestCategory("Unit")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    var myFoo = new Foo();
    Assert.IsTrue(myFoo.DoesSomething());
}

[TestMethod, Owner("Erik"), TestCategory("Proven"), TestCategory("Integration")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    var myDictionary = new Dictionary<string, string>(); // type parameters chosen for illustration
    var myBar = new Bar(myDictionary);
    var myFoo = new Foo(myDictionary);

    myBar.AddElementsToDict();
    myFoo.AddElementsToDict();

    Assert.IsTrue(myDictionary.Count > 1);
}

[TestMethod, Owner("Erik"), TestCategory("Integration")]
public void My_Lightweight_And_Easy_To_Read_Unit_Test()
{
    Foo myFoo = new Foo();
    myFoo.SetGlobalVariables(SomeSingleton.Instance);
    Assert.IsTrue(myFoo.IsGlobalSet);
}

Now, looking at this set of tests, you’ll notice a couple of proven tests: one integration, one unit, and a test that is missing the label “Proven.” With the last test, we leave off the label proven because the class under test has global state and is thus going to be unpredictable and hard to test. Also, with that one, I’ve labeled it integration instead of unit because I consider anything referencing global state to be integration by definition. (Also, as an aside, I would not personally introduce global, static state into a system, nor would I prefer to test classes in which it exists, but as anyone knows, not all of the code that we have to deal with reflects our design choices or is our own creation.)

Now, for the build process itself, I’ve created the following batch script:

@echo off
rem Execute unit tests from the command line
rem By Erik


rem Change this if you want the result stashed in a directory besides ./
set resultsdir=TestResults

rem Change this if you want to test different categories
rem (the quotes keep the & from being treated as a command separator)
set "testset=Unit&Proven"

rem Change this if you want to change the output test name extension (default is Results.xml -- i.e. CustomControlsTestResults.xml)
set resultsextensions=Results.xml

rem If results dir does not exist, create it
if exist %resultsdir% goto direxists
mkdir %resultsdir%
:direxists

rem This allows using the 'file' variable in the for loop
setlocal EnableDelayedExpansion

rem This is the actual execution of the test run. Delete old results files, and then execute the MSTest exe
for /f %%f in ('dir /b *Test.dll') do (
    set testfile=%%f
    set resultsfile=!resultsdir!\!testfile:~0,-4!%resultsextensions%
    echo Testing !testfile! -- output to !resultsfile!
    del !resultsfile!
    "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /nologo /testcontainer:!testfile! /category:"%testset%" /resultsfile:!resultsfile!
)

echo Unit test run complete.

What this does is iterate through the current directory, looking for files that end in Test.dll. As such, it should either be modified or placed in the directory to which all unit test projects are deployed as part of the build. For each test project that it finds, it runs MS Test, applies the category filter from the top, and dumps the results in a file whose name ends in Results.xml. In this case, it will run all tests categorized as both "Unit" and "Proven." However, this can easily be modified by changing the filter variable per the MSTest.exe documentation for the /category command line parameter (it supports and/or/not logic).
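For instance, per the documented /category operators (&, |, and !), the filter variable at the top of the script could be adjusted along these lines; the particular combinations are just examples, not anything from my build:

rem Only tests tagged both Unit and Proven
set "testset=Unit&Proven"

rem Anything tagged either Unit or Integration
set "testset=Unit|Integration"

rem Proven tests that are not Integration tests
set "testset=Proven&!Integration"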

So, from here, incorporating the unit tests into the build will depend to some degree on the nature of your build technology, but it will probably be as simple as parsing the command output from the batch script, parsing the results XML files, or simply checking the exit code from the MSTest executable. Some tools may even know how to handle this out of the box.

As I see it, this offers some nice perks. It is possible to allow unit tests to remain in the build even if they have not been perfected yet, and they need not be marked as “Inconclusive” or commented out. In addition, it is possible to have more nuanced build steps where, say, the unit tests are run daily by the build machine, but unproven tests only weekly. And in the event of some refactoring or changes, unit tests that are broken because of requirements changes can be “demoted” from the build until such time as they can be repaired.

I’m sure that some inventive souls can take this further and do even cooler things with it. As I refine this process, I may revisit it in subsequent posts as well.


Version Control Beyond Code

One of my favorite OSS tools is Subversion. I've used it a great deal professionally for managing source code, and it's also the tool of choice for my academic collaborations between students not located on campus (which describes me, since my MS program is an online one). It seems a natural progression from CVS, and I haven't really tried out Git yet, so I can't comment as to whether or not I prefer that mode of development.

However, I have, over the last few years, taken to using Subversion to keep track of my documents and other personal computing items at home, and I strongly advocate this practice for anyone who doesn't mind the setup overhead and the sense of slight overkill. Before I describe the Subversion setup, I'll describe my situation at home. I have several computers that I use for various activities. These include a personal desktop, a personal netbook, sometimes a company laptop, and a handful of playpen machines running various distros of Linux. I also have a computer that I've long since converted into a home server — an old P3 with 384 megs of RAM running Fedora. Not surprisingly, this functions as the Subversion server.

One of the annoyances of my pre-personal-subversion life was keeping files in sync. I saw no reason that I shouldn’t be able to start writing a document on my desktop and finish it on my laptop (and that conundrum applies to anyone with more than one PC, rather than being specific to a computer-hoarding techie like me). This was mitigated to some degree by setting up a server with common access, but it was still sort of clunky.

So, I decided to make use of Subversion for keeping track of things. Here is a list of the advantages I perceive in this approach:

  • Concurrent edits are not really an issue
  • Creates a de facto backup scheme, since Subversion stores the files in its repository and they also exist in a working copy on at least one additional machine for editing
  • Combined with the TortoiseSVN client for Windows, allows you to see which folders/files have been edited since you 'saved' (committed) changes to the repository
  • You can delete local copies of the files and then get them back again by running an update — handy for when you want to work on collaborated stuff but not take up local space. This beats the central storage model, particularly with a laptop, because you can work on a file not on your home network without missing a beat
  • You have a built-in history of your files and can revert to any previous version you like at any time. This is useful for backing up something like Quicken data files that continuously change. Rather than creating hundreds of duplicate files to log your progress over time, you just worry about one and let subversion handle the history.
  • You can easily create permissions as to who has access to what without worrying about administering windows DNS, workgroups, fileshares, and other assorted IT minutiae.
  • Whether or not you were one before, this gives you bona fide experience as an SVN administrator

On the other hand, there are some disadvantages, though I don’t consider them significant:

  • Requires riding the SVN admin learning curve.
  • Requires a server for the task (practically speaking, anyway — you can always turn a PC into an SVN server with file-based repository access, but I'm not a huge fan)
  • Can be overkill if you don’t have a lot of files and/or sharing going on

So, I would say that the disadvantages apply chiefly to those unfamiliar with SVN or without a real need for this kind of scheme. Once it’s actually in place, I’m hard pressed to think of a downside, and I think you’ll come to find it indispensable.

To set this up, you really only need a server machine and TortoiseSVN (Windows) or a Subversion client for Linux. I won't go into the details of server setup in this post, but suffice it to say that you set up the server, install the clients, and you can be off and running. If there is some desire expressed in the comments, or I get around to it, I can put up another post with a walkthrough of how to set up the server and/or the clients. Mine runs over the HTTP protocol, and I find this to be relatively robust compared to the file protocol and non-overkill compared to the secure protocol involving keys. (Since this is a local install and my wireless network is encrypted with WPA-PSK, I'm not really worried about anyone sniffing the transfers.)
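Once the pieces are in place, day-to-day use is just the normal Subversion workflow from whichever machine you happen to be on. As a quick sketch (the repository URL and file name are made up for illustration):

svn checkout http://homeserver/svn/documents ~/documents    # grab a working copy
cd ~/documents
svn add BudgetNotes.txt                                     # put a new file under version control
svn commit -m "Add budget notes"                            # 'save' it to the repository
svn update                                                  # pull down edits made from another machine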


Adding CodeRush Templates

Today I'm going to describe one of the cool and slightly more advanced features of CodeRush in a little more detail. But first, a bit of background. One of the many things I found enjoyable about CodeRush is the templated shortcuts, a la VS code snippets but better. I found myself typing "tne-space" a lot for generating:

throw new Exception("");

with my caret placed inside the quotes within the exception. However, I would dutifully go back and modify "Exception" so that I wasn't throwing a generic exception and arousing the ire of best-practices adherents everywhere. That rained on my parade a bit, as I found the time savings not to be optimal.

I decided that I’d create specific templates for the exceptions that I commonly used, and I am going to document that process here in case anyone may find it helpful. This is a very simple template addition and probably a good foray into creating your own CodeRush templates.

The first thing to do is fire up Visual Studio and launch the CodeRush options, which, in the spirit of CodeRush, has a shortcut of its own: Ctrl-Alt-Shift-O. From here, you can select "Editor" from the main menu and then select "Templates." This will bring up the templates sub-screen:
CodeRush templates

From here, you can either search for “tne” or navigate to “Program Blocks -> Flow -> tne”:

Create duplicate

Now, you will be prompted for a name. Call it "tnioe," for throw new InvalidOperationException (you can call it whatever you prefer to type in). Next, in the "Expansion" frame, change "Exception" to "InvalidOperationException" and click "Apply."

New Template

Now, when you exit the options window and type “tnioe-space” in the editor, you will see your new template.
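Assuming the expansion text was edited as described, typing "tnioe" and then a space should expand to something like:

throw new InvalidOperationException(""); // with the caret landing between the quotes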

As a bonus, I’m going to describe something that I encountered and was driving me nuts before I figured out how to fix it. CodeRush’s options screen remembers where you last were in its navigation tree for the next time you launch it. However, it is possible somehow to lose the actual view of the main tree and get stuck in whatever sub-options page you were in without being able to get back.

To fix this, go to DevExpress -> About and click the "Settings" button. This will open a folder on your drive containing settings XML files. Close the options window in Visual Studio and then open OptionsDialog.xml. Set the option with the name attribute "SplitOpen" to "True" and you'll have your normal, sane options screen back.