DaedTech

Stories about Software

VisualSVN

Back Story

Recently, I set up a MediaWiki installation. The surrounding requirements were such that I was forced to use a Windows 2003 server instead of Linux, which would have been my preference, but hey, when life gives you lemons, turn the lemon into a WAMP stack. And that’s just what I did. The installs for Apache, PHP, and MySQL went quite smoothly in spite of having to use Windows, and I was up and running with MediaWiki in no time. As an aside, I should note that I tried out a handful of “all in one” WAMP stack offerings and none of them really seemed to work very well. Doing it by hand actually turned out to suit my needs best.

I should also point out that I don’t really have anything against Windows OS at all–I just prefer Linux when it comes to servers. Less overhead, easier security, and more bang for your RAM/processor buck. And I also prefer not to have constraints imposed on me, but c’est la vie.

So with my new MediaWiki install in place on the WAMP stack, I was thinking about backup schemes. I downloaded and installed the MySQL Administrator GUI tool (now EOL’ed in favor of MySQL Workbench, which I think needs to get a little more mature and iron out some kinks) and configured an automated weekly backup of the MediaWiki database. Of course, that’s only half the story with MediaWiki–you also need to back up changes you’ve made to the PHP files themselves, where applicable, and any files and images uploaded to the wiki.
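For reference, the command-line equivalent of that scheduled database backup is a simple mysqldump (the database name here is hypothetical; yours is whatever you picked during the MediaWiki install):

mysqldump -u root -p wikidb > wiki_backup.sql

Restoring is the same operation in reverse: mysql -u root -p wikidb < wiki_backup.sql.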

My Old Friend, Subversion

For this backup, I decided to go with Subversion, reasoning that TortoiseSVN would make it easy for me to see when I had modified files or when people had uploaded things to the wiki. I’d commit periodically, perhaps in conjunction with my automated DB backups, and this would give me a way to completely recreate my MediaWiki install at any point. If I hosted a Subversion server on my server machine, I could commit weekly to it locally and then update from another machine to ensure that I had an offline copy at all times. Perhaps not the most sophisticated backup scheme in the world, but it would get the job done, and I’d have a Subversion server as a bonus.
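As a sketch, that periodic commit is only a couple of Subversion commands, easily wrapped in a script for the task scheduler (the path and message are placeholders, and the wiki directory is assumed to be a working copy already):

cd /path/to/mediawiki
svn add --force .
svn commit -m "weekly wiki backup"

The add --force pass picks up newly uploaded files and images that aren’t under version control yet; the commit then records everything that changed.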

With a New Twist

So when I went to download a Subversion install for my Windows server, I stumbled on VisualSVN. Curious, I read about it a bit and decided to give it a whirl. I downloaded the MSI and installed it locally. I could not have been more pleased with the results. I run a Subversion server at home on a Fedora box, and when I set that up, I had to mess around with configuring Apache and do something or another with security certificates, if I recall correctly. It’s been a few years, but I remember it being something of a hassle.

VisualSVN ran me through a wizard, and I was up and running with a password-protected, directory-level-configurable Subversion server that used secure HTTP. It was literally that simple. Already pleased, I saw that there were two main modes of security–Subversion and Active Directory. Subversion was the default, and it required me to create users and assign them passwords.

At first, this is what I went with, but I was a little put off to learn that the users wouldn’t be able to change the initial passwords that I assigned them. I was, however, pleased to find that I could control with the finest granularity which users had access to which directories. So I set off in search of a scheme that would allow users to change their own passwords. I switched to the Active Directory setting and started to play with it. What I discovered was that it was a snap to set up users to log in with their own Windows domain credentials.

So, voilà! I didn’t need to create accounts or assign passwords and ask users to change them. They were already synced up with their own logins. From here, I was able to keep the same granularity by adding users in from the domain rather than manually. As setups went, I had a pretty slick one in about fifteen minutes for Subversion.
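From the client side, nothing about this setup is exotic: a checkout is the usual command, just pointed at an HTTPS URL (the server and repository names here are made up):

svn checkout https://winserver/svn/wiki

TortoiseSVN users get the same thing from the right-click SVN Checkout dialog, with the server challenging for domain credentials on the first operation.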

The only drawback, as I see it, is that users can cache their domain passwords locally so as not to have to supply credentials for every SVN operation that they do. The drawback isn’t that they aren’t queried each time, which would be annoying, but rather that they can store their passwords. Of course, the passwords are encrypted by virtue of the HTTPS protocol, so this is really a small nitpick, and I really see no good way around it short of annoying the repository’s users. Not to mention, other applications like MS Outlook and web browsers offer the same kind of local storage, so it isn’t as if I’m doing something unprecedented.

So, if you get a chance, I highly recommend checking out this tool. The version I installed is a free one, though there is a fairly reasonable enterprise edition that gets you some more bells and whistles (if I recall correctly, it uses your current login credentials instead of a user/pass challenge). I might go to that later if the SVN install gains some traction and users like it.

New Ubuntu and Weird Old Sound Cards

In an earlier post, I described how one can go about installing a Belkin USB dongle and a very recent version of Ubuntu on a dinosaur PC. Tonight, I’m going to describe a similar thing, except that instead of a new piece of hardware, it’s a dinosaur of a component that came with the PC itself. I must admit, this is more likely to be useful to me the next time I encounter this on an old PC than it will be to anyone else, but, hey, you never know.

First, a bit of back story here. As I’ve alluded to in previous posts, one of my main interests these days is developing a prototype affordable home automation system, with the prototype being my house. So far, I have a server driving lights throughout the house. This can be accessed by any PC on the local network and by my Android phone and iPod. The thing I’m working on now is a scheme for playing music in any room from anywhere in the house. Clearly the main goal of this is to be able to scare my girlfriend by spontaneously playing music when I’m in the basement and she’s in the bedroom, but I think it’s also nice to be able to pull out your phone and queue up some music in the kitchen for while you’re making dinner.

Anyway, one of the cogs in this plan of mine is reappropriating old computers to serve as nodes for playback (the goal being affordability, as I’m not going to buy some kind of $3000 receiver and wire speakers all over the house). I should also mention that I’m using Gmote server for the time being, until I write an Android wrapper app for my web interface. So, for right now, the task is getting these computers onto the network and ready to act as servers for “play song” instructions.

The computers I have for this task are sort of strewn around my basement. They’re machines so old that various people simply gave them to me: I expressed a willingness to thoroughly wipe the hard drives, and I’m the only person most people know who is interested in computers that shipped with Windows 98 and weigh in with dazzling amounts of RAM in the 64-256 megabyte range. These are the recipients of the aforementioned Ubuntu and Belkin dongles.

So, I’ve got these puppies up and humming along with the OS and the wireless networking, and I was feeling pretty good about the prospect of playing music. I set up Gmote, and everything was ready, so I brought my girlfriend in for my triumphant demonstration of playing music through my bedroom’s flatscreen TV, controlled purely by my phone. I plugged in the audio, queued up Gmote, and everything worked perfectly–except that there was no sound. My phone found the old computer, my old computer mounted the home automation server’s music directory (itself mounted on an external drive), Gmote server kicked in… heck, there was even some psychedelic old school graphic that accompanied the song that was playing on the VGA output to the flat screen. But, naturally, no sound.

So, I got out my screwdriver and poked around the internals of the old computer. I reasoned that the sound card must be fried, so I pried open another computer, extracted its card, put everything back together, and voilà! Sound was now functional (half a day later, thus taking a bit of the wind out of my grand unveiling’s sails). So, I pitched the sound card and moved on to getting the next PC functional. This PC had the same sound card, and I got the same result.

I smelled a rat, reasoning that it was unlikely that two largely unused sound cards were fried. After a bit of investigation, I discovered that the problem was that the card in question, an ESS ES1869, is actually a plug and play ISA device and not a PCI device. I had reasoned that the previous card was fried when I didn’t see it in the BIOS’s PCI list. But there it was in the ISA list. Because, naturally, a sound card inside of the computer, like a printer or USB external hard drive, is a plug-and-play device.

But anyway, with that figured out, I was all set… kind of. It took me an hour or two of googling and experimenting to figure it out, but I got it. I had to experiment because this card was pretty dated even five years (or roughly 438 Ubuntu versions) ago, so I wasn’t dealing with the same utilities or configuration files as the write-ups I found.

So anyway, with the grand lead up now complete, here is the nitty gritty.

When you boot into Ubuntu, it, like me or any other sane entity, has no idea what this thing is. You’ll see nothing in lspci about it, of course, and if you sudo apt-get install the toolset that gives you lspnp, you’ll see it as an unknown device. Don’t let that get you down, though; it was, at some time, known to someone. The first thing to do is use sudo and your favorite text editor to modify /etc/modules, adding “snd-es18xx” to that file and saving it.
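Concretely, those first steps look something like the following (the pnputils package name is from memory as the thing that provides lspnp; an apt-cache search will confirm it on your version):

sudo apt-get install pnputils
echo "snd-es18xx" | sudo tee -a /etc/modules

The tee -a approach just appends the module name without firing up an editor; editing /etc/modules by hand works just as well.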

Next, add the following text to the configuration file “/etc/modprobe.d/alsa-base.conf”:

alias sound-slot-0 snd-card-0
alias snd-card-0 snd-es18xx
options snd-es18xx enable=1 isapnp=0 port=0x220 mpu_port=0x388 irq=5 dma1=1 dma2=0

And that’s that. Now, if you reboot, you should see a working audio driver and all that goes with it. You can see that it’s working by playing a sound, or by opening up the volume control and seeing that you are no longer looking at “Dummy Output” but a realio-trulio sound output.
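A quick terminal test, assuming the stock ALSA sample files are in their usual spot:

aplay /usr/share/sounds/alsa/Front_Center.wav

If you hear a voice announce “front center,” the driver is alive.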

I confess that I don’t know all of the exact details of what that configuration means, but I know enough mundane computer architecture to know that you’re more or less telling the driver by hand which I/O port, interrupt, and DMA channels the card uses, since an ISA card can’t be discovered and enumerated the way the PCI device Ubuntu expects would be.

I’d also say that this is by no means a high-performance proposition. I’d probably be better served to get a regular sound card, but there’s just something strangely satisfying about getting a twelve-year-old computer with a stripped down OS to chug along as well as someone’s new, high powered rig that’s been loaded down with unnecessaries. I suppose that’s just the hopeless techie in me.

Synergy

One of the things that I take for granted and figured that I probably ought to document is a tool called Synergy (or now, Synergy+, apparently). Found here, this is a tool that is perfect for someone who wants to control multiple computers on a desktop without having to deal with the inconvenience of swapping between multiple keyboards and/or mice. In this fashion, it is a “KM solution” (as opposed to the typical KVM switch). Here is a screenshot of what my office looks like:

My home office

The left monitor and center monitor are attached to a computer running Windows XP. The right monitor is running Fedora and attached to my home server, and the netbook is running the Ubuntu netbook remix. It’s hard to depict, but I’m controlling all three computers with one keyboard and mouse. When I move the mouse off the left edge of the left-most monitor, it goes to the center monitor. This is done by Windows XP itself. When I move the mouse off the right edge of the center monitor, it goes to the right-most monitor running Fedora, and I can then operate that machine as I normally would. When I move the mouse off the bottom of the Fedora monitor, it goes to the netbook.

The netbook is something that I don’t always have there, and Synergy is ‘smart’ enough to figure out how to handle this. If a machine isn’t actively participating, it removes the transitional border. So, if I turn the netbook off, moving the mouse to the bottom of the Fedora server’s screen doesn’t put me in empty space — it behaves as it would without Synergy, for that particular edge. (Things can get a little interesting if I take the netbook elsewhere in the house and forget to turn off Synergy.)

So, how is this accomplished? Well, Synergy must be installed on all participating computers. This is easy enough. For Windows, there is an installer that can be downloaded. For Ubuntu, a “sudo apt-get install synergy” should do the trick. For Fedora, “yum install synergy” (with elevated privileges) should do the same. You can do it on any flavor of Linux you like and on a Mac as well.

Next, you need to pick which machine will be the server and which machines will be the clients. As a rule of thumb, I would suggest using the machine that you use the most and/or that is on and active the most as the server. This will be the computer to which the keyboard and mouse are actually, physically connected. In my case, this is the Windows XP computer (I don’t use the server nearly as often, and anything that gets me out of using laptop keyboards is great). Once installed, starting the clients on other machines is easy. For instance, on both Linux machines, I just type “synergyc achilles” from the command line (the client takes the server’s hostname or IP address as its argument), and they’re all set to be controlled. The Linux installations have a graphical tool like Windows does as well, but as I’ve mentioned before, I tend to favor the command line–particularly in Linux.

The server setup is slightly more complicated, but not bad. If you start the synergy graphical screen, you will have some navigation options. Select “share this computer’s keyboard and mouse (server)” and then click the Configure button below. You will see a screen like this:

Configure Synergy

The first thing to do is to add the screens using the “+” button under “Screens.” This is pretty straightforward. Ignoring the options (you can play with these later), it simply wants the workgroup or domain name of the computer to which the monitor is attached. If you have a hodgepodge of OSes and haven’t used something like Samba to resolve the naming, you can use their IP addresses and then, if you want, give them aliases to make it a little more descriptive.

In my example, the Windows PC is called “achilles,” the Fedora server is called “Homeserver,” and the netbook is called “echo.” As you can see, relationships are defined between each monitor. X is to the left of Y, and Y is to the right of X, for instance. It is important to specify both sides of the relationship. If you only specify one, the other is not inferred. So if I just specified that Homeserver was to the right of achilles, Synergy would move my mouse from the Windows machine to the Fedora server but never move it back.

Notice the percentages at the bottom. These allow you to specify that only part of the screen passes through. You can use this if, for example, one monitor is substantially larger than the one next to it, and you want to create a logical flow. You might specify that only half of the larger monitor passes through to its neighbor. It is, in fact, possible to create some very weird and convoluted configurations if you choose. You could do something like in one of those crime drama shows where the techie character has eight monitors with two or three different rows vertically.
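For the curious, the same screens and relationships can be expressed in a plain-text synergy.conf file, which is handy if you ever want to run the server from the command line. Here’s a sketch of my arrangement (syntax from memory; the sample config that ships with Synergy is the authoritative reference):

section: screens
    achilles:
    Homeserver:
    echo:
end

section: links
    achilles:
        right = Homeserver
    Homeserver:
        left = achilles
        down = echo
    echo:
        up = Homeserver
end

You would then start the server with “synergys -c synergy.conf”. Note that the links section spells out both sides of every relationship, per the point above.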

The real, practical use here is that you can dispense with multiple keyboards and mice, and you can do it regardless of the OS running on the various machines. The only requirements for this to work are that the machines have to be connected on the same local network and that the machines each need their own monitor/display. If a machine loses connectivity while Synergy is running, it simply drops out of the arrangement until it reconnects.

And, in this same vein, a word of caution. Synergy works by sending your mouse gestures and typed text across the network, in plain text. That is to say, you wouldn’t want to set this up in a hotel or coffee shop because someone with a sniffer could read everything that you’re typing, including your login passwords for anything to which you sign in. So this is mainly a product for your home network, assuming that you don’t have some adversarial techie in the house interested in spying on you. I’m not specifically aware of any plans on the part of the tool’s authors to offer an encrypted version, but if that does happen, I will likely be excited enough to post about it.

If anyone is interested in more specifics of setup and has questions, feel free to post them.

Old Linux Computer with New Wireless Encryption

As I’ve previously mentioned, one of the things that I spend a good bit of time doing is home automation. For prototyping, I have a bunch of very old computers that I’ve acquired for free. These are scattered around the house (much to the dismay of anyone with a sense of decor) and I use them as dumb terminals for interfacing with the home automation system. That is, I have a client/server setup, and these guys put the “thin” in thin-client.

Now, because many of these were made in the 90s, I don’t exactly put Windows 7 (or even Windows XP) on them. Most of them range between 64 and 256 megs of RAM and have Pentium-series Intel processors from that era. So, the natural choice is Linux. I have had luck using Damn Small Linux (DSL), Tiny Core Linux, Slackware, and Ubuntu. Since most of these are not for the faint of heart or anyone who isn’t comfortable editing low-level configuration files in pico or vi, I’ll focus this post on Ubuntu (more specifically, Xubuntu, since the standard window manager is a little much for these machines).

Because of the nature of what I’m doing–allowing machines on my network to control things in the house like lights and temperature–network security is not a mere convenience. It has to be there, it has to be close to bulletproof, and I can’t simply say “the heck with it–I’ll compromise a little on the settings to make configuration easier.” So I use a WPA-PSK encryption scheme with a non-broadcasting network.

Now, my house has three stories including the basement, and while I enjoy home improvement, I’m not such a glutton for punishment that I’ve retrofitted Cat-5 connections in every room. Getting these old Linux machines connected to the network is an interesting problem that I’ve ultimately solved by buying a series of Belkin wireless USB dongles. For the most part, these old computers do have a couple of USB jacks. So what I’ll document is how I’ve had success setting up the networking.

The first thing I do after installing the Xubuntu OS is to go to my main office computer and download ndiswrapper. This link is helpful, as it points you to where you can download the Debian package to install from the command line: Ndiswrapper. The Ubuntu OS generally assumes, for the purposes of its package manager (which, as an aside, makes me smile every time someone says that the Android/Apple walled app garden is a newfangled concept), that you have an internet connection or that the CD has the packages that you need. If I had the former, this post would be moot, and the nature of the slimmed-down Xubuntu install precludes the latter.

So, you can find the version of ndiswrapper you need and grab it from the mirrors. From there, you can install the packages by following the instructions at the link for using dpkg from the command line. After doing so, you will be equipped with everything you need from ndiswrapper. Ndiswrapper is a cool little utility that essentially inserts a logical layer between the drivers and Linux, allowing Linux to use Windows drivers as if they were native to that OS. The FOSS folks are generally cool this way — hardware vendors write nothing with Linux in mind, so the community bends over backwards to be compatible.

Once you have ndiswrapper installed, the next thing to do is to grab the CD that came with the Belkin dongle and pop it into the Linux machine. Mount the CD (if it doesn’t automount — I tend to do all this from TTY4 rather than the UI because when you only have a few hundred meg of RAM, the UI is a little slow) and navigate to the folder containing the .INF file. If you’re doing anything like I am, this is going to be inside of a folder with a name like WinXP2000. The key thing here is to be sure that you find the driver that matches your processor architecture — probably i386. This can easily be accomplished if you know what version of Windows came installed on the machine before you wiped it to put Linux on. If the machine didn’t initially come with Windows, you probably know what you’re doing anyway.
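If the disc doesn’t automount, something along these lines does the trick from the terminal (the /dev/cdrom device name is the usual symlink, but it can vary on old hardware):

sudo mkdir -p /media/cdrom
sudo mount /dev/cdrom /media/cdrom
ls /media/cdrom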

From here, you can execute a “sudo ndiswrapper -i {yourfile}.inf”. This registers the driver with the ndiswrapper utility, and ndiswrapper should take care of loading it on your next and any subsequent reboots. While you’re at it, you may as well reboot now and get the driver loading. If you’re feeling intrepid, you can try restarting the networking service to see if you start to connect, but I make no guarantees that this will work.
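The whole sequence, with a hypothetical .inf name standing in for whatever is on your Belkin CD, looks like this:

sudo ndiswrapper -i rt73.inf
ndiswrapper -l
sudo ndiswrapper -m
sudo modprobe ndiswrapper

The -l flag lists installed drivers and should report the driver as installed and, with the dongle plugged in, the hardware as present; -m writes the modprobe alias so the module comes up on subsequent boots.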

Once you’ve rebooted, Linux should recognize the driver, but you won’t be connecting to your network. I’m not sure off the top what it loads for default settings, but it sure isn’t a requester configured for encrypted access to your network. So now, I edit my /etc/network/interfaces file to look like the following:

auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
address {machine's IP address}
gateway {router's IP address}
dns-nameservers {dns -- probably your router's IP again}
netmask 255.255.255.0
wpa-driver wext
wpa-ssid {network SSID}
wpa-ap-scan 2
wpa-proto WPA RSN
wpa-pairwise TKIP CCMP
wpa-group TKIP CCMP
wpa-key-mgmt WPA-PSK
wpa-psk {connection key}

If you fill in your own info for the {}, you should be set to go. This will configure you as a supplicant (connecting client) to a network with WPA-PSK, static rather than DHCP addresses, and non-broadcasting status (though this doesn’t really matter on Linux — iwlist, the Linux utility, sees networks whether or not they broadcast). And, best of all, it will do all of this when you boot, since it’s part of the interfaces file. No adding things to rc.local or your login script like in the old days.
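If you’d rather test without rebooting, bouncing the interface should pick up the new settings (wlan0 per the file above):

sudo ifdown wlan0
sudo ifup wlan0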

The only extra thing here is generating your PSK. A full treatment is a little beyond the scope of what I’m explaining here, but if there is some interest in the comments for this post, I can create a follow-up explaining how to do it.
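The short version, for the impatient: the wpa_passphrase utility that ships with wpa_supplicant will generate it. Feed it your SSID and passphrase, and paste the psk= line it prints into the interfaces file above:

wpa_passphrase {network SSID} {passphrase}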

I’m not sure how many people are fellow enthusiasts of re-appropriating old clunker machines to do cool, new things, but I hope this helps someone, as these sorts of configuration issues can be maddening.

Version Control Beyond Code

One of my favorite OSS tools is Subversion. I’ve used it a great deal professionally for managing source code, and it’s also the tool of choice for my academic collaborations between students not located on campus (which describes me, since my MS program is an online one). It seems a natural progression from CVS, and I haven’t really tried out Git yet, so I can’t comment as to whether or not I prefer that mode of development.

However, I have, over the last few years, taken to making use of Subversion for keeping track of my documents and other personal computing items at home, and I strongly advocate this practice for anyone who doesn’t mind the setup overhead and sense of slight overkill. Before I describe the Subversion setup, I’ll describe my situation at home. I have several computers that I use for various activities. These include a personal desktop, a personal netbook, sometimes a company laptop, and a handful of playpen machines running various distros of Linux. I also have a computer that I’ve long since converted into a home server–an old P3 with 384 megs of RAM running Fedora. Not surprisingly, this functions as the Subversion server.

One of the annoyances of my pre-personal-subversion life was keeping files in sync. I saw no reason that I shouldn’t be able to start writing a document on my desktop and finish it on my laptop (and that conundrum applies to anyone with more than one PC, rather than being specific to a computer-hoarding techie like me). This was mitigated to some degree by setting up a server with common access, but it was still sort of clunky.

So, I decided to make use of Subversion for keeping track of things. Here is a list of advantages that I perceive to this approach (a sketch of the day-to-day workflow follows the list):

  • Concurrent edits are not really an issue
  • Creates a de facto backup scheme, since Subversion keeps a copy of every file in its repository in addition to the working copy on at least one other machine
  • Combined with the TortoiseSVN client for Windows, allows you to see which folders/files have been edited since you ‘saved’ changes to the repository
  • You can delete local copies of the files and then get them back again by running an update — handy for when you want to work on collaborated stuff but not take up local space. This beats the central storage model, particularly with a laptop, because you can work on a file not on your home network without missing a beat
  • You have a built-in history of your files and can revert to any previous version you like at any time. This is useful for backing up something like Quicken data files that continuously change. Rather than creating hundreds of duplicate files to log your progress over time, you just worry about one and let subversion handle the history.
  • You can easily create permissions as to who has access to what without worrying about administering Windows DNS, workgroups, fileshares, and other assorted IT minutiae.
  • Whether or not you were one before, this gives you bona fide experience as an SVN administrator
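
The day-to-day workflow mentioned above is just the usual Subversion dance pointed at documents instead of code (the URL and file name are made up for illustration):

svn checkout http://homeserver/svn/documents ~/documents
cd ~/documents
svn update
svn add budget.qdf
svn commit -m "latest budget numbers"

The update pulls down anything committed from another machine; add and commit push new or changed files back to the repository on the server.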

On the other hand, there are some disadvantages, though I don’t consider them significant:

  • Requires riding the SVN admin learning curve.
  • Requires a server for the task (practically, anyway–you can always turn a PC into an SVN server with the file-based mode, but I’m not a huge fan)
  • Can be overkill if you don’t have a lot of files and/or sharing going on

So, I would say that the disadvantages apply chiefly to those unfamiliar with SVN or without a real need for this kind of scheme. Once it’s actually in place, I’m hard pressed to think of a downside, and I think you’ll come to find it indispensable.

To set this up, you really only need a server machine and TortoiseSVN (Windows) or a Subversion client for Linux. I won’t go into the details of server setup in this post, but suffice it to say that you set up the server, install the clients, and you can be off and running. If there is some desire expressed in comments, or I just get around to it, I can put up another post with a walkthrough of how to set up the server and/or the clients. Mine runs over the HTTP protocol, and I find this to be relatively robust compared to the file protocol and non-overkill compared to the secure protocol involving keys. (Since this is a local install and my wireless network is encrypted with WPA-PSK, I’m not really worried about anyone sniffing the transfers.)