Wednesday, February 29, 2012

First experiences with the lathe

Some months ago I set out to fabricate a miniature steam engine, following the plans in Tubal Cain's 1981 book "Building Simple Model Steam Engines."

It's a real challenge to bring oneself up to speed on a project of this nature. At the outset I possessed exactly none of the required tools or raw materials, and little of the know-how. Thank goodness I found TechShop just a few miles down the road.

I decided to begin by machining the flywheel. I figured this would be good practice for the finer lathe work required for the piston, and in any case I would need to cut the "formers" for the boiler out of the same stock. With the introductory metal lathe course fresh in my mind, I innocently ordered a foot-long bar of 2" diameter cold-rolled steel.

The initial cuts went well enough. The real problem was parting it off at the end. I spent over an hour trying to cut through the darn thing on the lathe before I ran out of time on the machine.

Of course I had cut the piece too short to mount safely in the big bandsaw. A TechShop technician suggested the "chop saw," where the next phase of the ordeal began.

This also took a couple of hours, right up until closing time. Progress slowed when the piece heated up and I had to pause frequently to let it cool. Towards the end I picked it up with a pair of pliers and dropped it into a coffee can full of water, which was immediately brought to a boil. I had no idea that so much energy could be held in a few cubic inches of steel.

The flywheel finally came off. Here you can see the marks made by the saw, as well as the oxidation following its bath.

I went back and surfaced the back side on the lathe, and I found the end result to be "good enough." Ironically, the side that I had so much trouble cutting is smooth and shiny after being faced off, but the more complicated bevel shown above is now rusted from the water in addition to being horribly gouged by my amateurish lathe work. I might make a fresh attempt later, but more likely this first piece will set the tone for the entire project.

I am vaguely aware that I ran afoul of the "work hardening" phenomenon, and perhaps quenching that steel in water had some side effects that I didn't consider at the time. My main takeaway from this phase of the project is that I need to learn more about metallurgy in order to be an effective machinist.

Sunday, January 29, 2012

Ubuntu Photo Appliance

I recently decided to resurrect my old fit-PC as a time lapse photography appliance. I'm not sure what I'll wind up doing with it, but the original idea was to mount it on the roof in a weatherproof container and have it photograph the sunrise every day, automatically uploading the images to my web site.

I bought the device in 2008. It was the first model in the fit-PC line, pictured at the bottom of this page from the manufacturer. At the time it was the smallest, cheapest x86 I could find. I had a 1 GB compact flash card from some earlier hacking, and I bought an adapter to use this card as an ATA/IDE hard drive. I had the makings of an embedded server with no moving parts, which I never did anything interesting with for three years.

Anyhow, having not booted the thing in a year or so, I set out to get it working again.

PXE Boot

The last time I set up this system, I used a DD-WRT router to PXE boot from the TFTP server on my MacBook. Did you know that Mac OS X ships with a TFTP server installed? Magical. Unfortunately my current router uses the stock firmware.

While the TFTP server works great, the built-in Mac DHCP/BOOTP server leaves much to be desired. Long story short, I found lots of people on the internet trying to make PXE boot work with it, and none succeeding. I spun my wheels for a few hours, and there's not much else to report on that, except my advice not to try it. This was Mac OS X 10.6 Snow Leopard.

In this age of VirtualBox and high-speed internet, I was able to quickly set up a VM with tftpd-hpa and dhcp3-server. I added and configured a bridged network adapter on eth0, then connected my laptop and the fit-PC to an ethernet switch. Total time: less than 1 hour.
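For reference, the DHCP side of a PXE setup like this comes down to a few dhcpd.conf directives. The addresses and boot filename below are illustrative placeholders, not the values from my network:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;   # IP of the TFTP server (the VM)
  filename "pxelinux.0";      # boot loader to fetch over TFTP
}
```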

Netboot Installation

I thought I remembered having trouble with newer kernels on this hardware, so my first attempts were with an ancient 8.04 hardy netboot image. This was just about unsupported, and was the oldest release still available from the public mirrors. After a few failed attempts I abandoned it in favor of a command-line install of 11.04 natty, which I was able to get working. Later, after I found that I needed a newer version of the webcam package, I also tried installing 11.10 oneiric. The 3.0 Linux kernel would not boot on this hardware, so I recommend staying with natty and Linux 2.6.

Natty seemed to install well enough, but I got a "No installable Kernel was found in the defined APT sources" error. I manually installed linux-image-generic as suggested by a forum post.

The first time I tried that, the system wouldn't boot. Not having many other options, I tried it again, and realized that it was running out of disk space. I would never have noticed this without checking the debug screen with Alt+F4 at the end. Apparently 1 GB is not enough for an install of the Ubuntu base system anymore. In any case, it's pretty abysmal that the installer doesn't give you any feedback when you're in this state.

I drove to the closest electronics store for a bigger CF card; there was a single 4 GB model tucked away amidst the SDs and MicroSDs. That installed and booted fine, with linux-image-generic.

I did have to tweak the boot options to skip the splash screen. Hold Shift during boot to get to the GRUB menu.

$ sudo emacs /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="nosplash --verbose text"

$ sudo update-grub

Between the kernel incompatibilities and mysterious problems, I ran the netboot installer about ten times. I kept thinking of the old quote, "The definition of insanity is doing the same thing over and over and expecting different results."

The installer hung occasionally on all three distro versions that I tried. Oneiric segfaulted and somehow corrupted the boot partition after an ill-advised do-release-upgrade from natty. These issues raise the possibility of a hardware problem, but I don't really want to think about that now that it's working. In any case, memtest86 passed, and the CF card is fresh.


Wireless

Wireless was a nightmare when I first started using Linux on my laptop in 2004. I was very happy to see that it has improved enormously since then. For my hardware, it "just worked".

This is my wireless card:

$ hwinfo
34: USB 00.0: 0282 WLAN controller
  [Created at usb.122]
  SysFS ID: /devices/pci0000:00/0000:00:0f.5/usb1/1-4/1-4:1.0
  SysFS BusID: 1-4:1.0
  Hardware Class: network
  Model: "Ralink 802.11 bg WLAN"
  Hotplug: USB
  Vendor: usb 0x18e8 "Ralink"
  Device: usb 0x6238 "802.11 bg WLAN"
  Revision: "0.01"
  Driver: "rt73usb"
  Driver Modules: "rt73usb"
  Device File: wlan0
  Features: WLAN
  Speed: 480 Mbps
  WLAN channels: 1 2 3 4 5 6 7 8 9 10 11 12 13 14
  WLAN frequencies: 2.412 2.417 2.422 2.427 2.432 2.437 2.442 2.447 2.452 2.457 2.462 2.467 2.472 2.484
  WLAN encryption modes: WEP40 WEP104 TKIP CCMP
  WLAN authentication modes: open sharedkey wpa-psk wpa-eap
  Module Alias: "usb:v18E8p6238d0001dc00dsc00dp00icFFiscFFipFF"
  Driver Info #0:
    Driver Status: rt73usb is active
    Driver Activation Cmd: "modprobe rt73usb"
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #32 (Hub)

I installed wireless-tools:

$ iwconfig
wlan0     IEEE 802.11bg  ESSID:off/any  
          Mode:Managed  Access Point: Not-Associated   Tx-Power=0 dBm
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Power Management:on

I was also able to find my wireless network without any issues:

$ sudo ip link set wlan0 up
$ sudo iwlist wlan0 scan
Cell 04 - Address: {redacted}
          Frequency:2.462 GHz (Channel 11)
          Quality=70/70  Signal level=-40 dBm  
          Encryption key:on
          Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                    24 Mb/s; 36 Mb/s; 54 Mb/s
          Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s
          Extra: Last beacon: 456ms ago
          IE: IEEE 802.11i/WPA2 Version 1
              Group Cipher : TKIP
              Pairwise Ciphers (2) : CCMP TKIP
              Authentication Suites (1) : PSK
          IE: WPA Version 1
              Group Cipher : TKIP
              Pairwise Ciphers (2) : CCMP TKIP
              Authentication Suites (1) : PSK

The wpasupplicant package is required to connect to my WPA2 network.

$ sudo emacs /etc/network/interfaces
auto wlan0
iface wlan0 inet dhcp
    wireless-essid 975B
    pre-up wpa_supplicant -B -Dwext -iwlan0 -c/etc/wpa_supplicant.conf
    post-down killall -q wpa_supplicant
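The referenced /etc/wpa_supplicant.conf needs at least a network block. A minimal sketch, with the passphrase obviously a placeholder:

```
network={
    ssid="975B"
    psk="your-passphrase-here"
}
```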

$ sudo dhclient wlan0
$ ifconfig
wlan0     Link encap:Ethernet  HWaddr {redacted}  
          inet addr:  Bcast:  Mask:
          inet6 addr: {redacted}/64 Scope:Link
          RX packets:75 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:21576 (21.5 KB)  TX bytes:2476 (2.4 KB)



Webcam

I used a midrange camera, the Logitech C310 HD webcam. Its chief virtue was that it was available at the Staples down the street, and Google seemed to suggest that it worked under Linux. As with my wireless card, I was cautiously optimistic and ultimately pleased. It also "just worked".

$ hwinfo
35: USB 00.0: 0000 Unclassified device
  [Created at usb.122]
  Unique ID: ADDn.0XCWhgf+Sk0
  Parent ID: k4bc.G_ipYBRd0t3
  SysFS ID: /devices/pci0000:00/0000:00:0f.5/usb1/1-1/1-1:1.0
  SysFS BusID: 1-1:1.0
  Hardware Class: unknown
  Model: "Logitech Unclassified device"
  Hotplug: USB
  Vendor: usb 0x046d "Logitech, Inc."
  Device: usb 0x081b 
  Revision: "0.10"
  Serial ID: "41EDC8E0"
  Driver: "uvcvideo"
  Driver Modules: "uvcvideo"
  Device File: /dev/input/event4
  Device Files: /dev/input/event4, /dev/input/by-id/usb-046d_081b_41EDC8E0-event-if00, /dev/input/by-path/pci-0000:00:0f.5-usb-0:1:1.0-event
  Device Number: char 13:68
  Speed: 480 Mbps
  Module Alias: "usb:v046Dp081Bd0010dcEFdsc02dp01ic0Eisc01ip00"
  Driver Info #0:
    Driver Status: uvcvideo is active
    Driver Activation Cmd: "modprobe uvcvideo"
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #32 (Hub)

The streamer package is all you need to grab a still frame from the command line.

However, I discovered that streamer is broken in 11.04 due to a simple compilation bug. So much for quality control: did the package maintainer even test the binary before packaging it up and publishing it to the repository?

Ultimately, I just downloaded the oneiric version of the package, along with its dependency xawtv-plugins, and manually installed them with dpkg.
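With that in place, grabbing a single frame should be a one-liner along these lines (the resolution and output filename are my examples, not from the original setup):

```
$ streamer -c /dev/video0 -s 1280x720 -o snapshot.jpeg
```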

In order to use the /dev/video0 device, your user needs to be a member of the video group (log out and back in for the change to take effect).
$ sudo gpasswd --add username video


Linux: Still awesome. Still fucked up when you stray from the beaten path.

I love that little fit-PC, and I'm glad it's being put to better use than gathering dust in my closet. Their new models look great too.

On the other hand, if I were starting this project today without a load of old hardware to dispose of, I would probably opt for a used cell phone rather than a semi-embedded x86. The integrated battery, camera, and wireless make for quite a platform. Although I suspect that phone hacking makes for an even more frustrating experience than the one related above.

Thursday, January 13, 2011

HTTP chunks and onreadystatechange

One of the features of HTTP 1.1 is "chunked transfer encoding". Rather than send a Content-Length header followed by the entire document, it is possible to transmit the body as a series of chunks, each with its own length declaration. This lets you start sending the beginning of the document before you know how long it's going to be.

It also makes Comet "streaming" possible, letting you trickle down data without the overhead of a full HTTP request for each message. This depends on your browser telling you when new chunks arrive. As you might guess, this isn't supported by Internet Explorer. But all other major browsers that I've tried (Firefox, Chrome, Safari) will fire multiple XMLHttpRequest onreadystatechange events (readyState == 3) as additional parts of the document are received.

Here's MochiWeb's implementation of chunked transfer encoding, which is pretty straightforward:

%% @spec write_chunk(iodata()) -> ok
%% @doc Write a chunk of a HTTP chunked response. If Data is zero length,
%% then the chunked response will be finished.
write_chunk(Data) ->
    case Request:get(version) of
        Version when Version >= {1, 1} ->
            Length = iolist_size(Data),
            send([io_lib:format("~.16b\r\n", [Length]), Data, <<"\r\n">>]);
        _ ->
            %% Pre-1.1 clients don't understand chunked encoding
            send(Data)
    end.
For each chunk, you send the size in hexadecimal, followed by CRLF, followed by that number of bytes of data, followed by another CRLF. On the client, the web browser stitches each segment together, appending the data to responseText.
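For example, a body sent as two small chunks followed by the zero-length terminating chunk looks like this on the wire (CRLF written out as \r\n; note that '["foo"]' is 7 bytes):

```
7\r\n
["foo"]\r\n
7\r\n
["bar"]\r\n
0\r\n
\r\n
```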

When designing Kanaloa's streaming protocol, I initially took it for granted that each chunk would have its own onreadystatechange event. This made parsing the chunks simple; in my case, I just sent down a valid JSON array in each chunk, kept track of how much responseText I'd already seen on the client, and called JSON.parse on the difference.
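That delta-tracking logic can be sketched as follows. This is my reconstruction, not Kanaloa's actual code, and it bakes in the assumption that each event delivers exactly one complete chunk:

```javascript
// Parse only the portion of responseText we haven't seen yet.
// Assumes each call receives a buffer ending in a complete JSON array.
function makeDeltaParser() {
    var seen = 0;
    return function (responseText) {
        var fresh = responseText.substring(seen);
        seen = responseText.length;
        return fresh.length > 0 ? JSON.parse(fresh) : null;
    };
}
```

The returned function would be called from an onreadystatechange handler with xhr.responseText whenever readyState is 3 or 4.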

The first thing I noticed was that sometimes single chunks would be split across multiple events. I theorized that this resulted from them being put into multiple TCP packets, and indeed limiting the chunk size to the typical TCP segment size seemed to fix this problem.

The next thing I noticed was that sometimes multiple chunks would be concatenated into the same event.  This was also a problem, as JSON.parse needs a valid expression, and '["foo"]["bar"]' wasn't cutting it.

You can see both of these cases demonstrated here:
The small chunks are often concatenated, and the large chunks are split.

I took a look at the TCP packets in Wireshark, and was struck by two things. First, the small messages do in fact arrive as their own separate TCP packets, so the browser is stitching them together into the same event in some cases. They arrive at roughly equal intervals in the case I examined.

Second, in the cases where a chunk is split across multiple events, the event boundaries do correspond to the packet boundaries.

So I think we can conclude that the browser simply reads incoming packets into its responseText buffer and fires onreadystatechange for each. If your script is still running from the previous event, the browser just makes the accumulated responseText available when you read that field, rather than firing another event later.

This raises the question of whether we could construct a scenario where additional text gets appended to responseText without you being notified, or where it changes between multiple reads of that field by the same JavaScript thread. But I've just about had my fill of this topic for now : )

In the end I had to do what I'd been hoping to avoid from the start, and write my own logic to split the response, rather than trust the events to delineate them. The result may be the world's simplest and least featureful JSON parser, whose only job is to split a string into substrings that encode JSON arrays, which can in turn be properly deserialized. But it seems to work, and because it leaves unterminated arrays untouched, I can now also receive messages of arbitrary size that span multiple chunks.
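A splitter along those lines can be quite small. This is a sketch rather than Kanaloa's actual parser, and it cheats by ignoring bracket characters inside string values, so it assumes messages never contain "[" or "]" within their strings:

```javascript
// Split a buffer into complete top-level "[...]" substrings, leaving any
// unterminated tail in "remainder" until more data arrives.
// Caveat: does not track string state, so brackets inside JSON string
// values would confuse the depth counter.
function splitJsonArrays(text) {
    var parts = [];
    var depth = 0, start = 0;
    for (var i = 0; i < text.length; i++) {
        var c = text.charAt(i);
        if (c === "[") {
            if (depth === 0) start = i;
            depth++;
        } else if (c === "]") {
            depth--;
            if (depth === 0) parts.push(text.substring(start, i + 1));
        }
    }
    return { complete: parts, remainder: depth > 0 ? text.substring(start) : "" };
}
```

Each string in complete can then be handed to JSON.parse, while remainder is held back until the next event grows it into a full array.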