Active Stereo — working towards a stable release

This is a temporary page to collect the source code needed as we move towards a version of the ActiveStereo software that's stable and tested enough to release.

Things that you will need for the current ("recent ATI") version of this trick:

  • PsychToolbox kernel driver source code — this is ripped right out of the wonderful psychtoolbox.org project, which is doing the careful and difficult work of supplying functionality that's currently missing from the ATI driver stack on OS X. For us this means a version of CGDisplayBeamPosition that actually works. It also adds the ability to synchronize the clocks on multiple outputs, which lets you do active stereo across multiple projectors. This is a 32 bit kext, so you'll need to boot your machine into the 32 bit kernel (hold down the 3 and 2 keys while you reboot). After building, copy the product somewhere and then:
sudo chown -R root:wheel PsychtoolboxKernelDriver.kext
sudo chmod -R 755 PsychtoolboxKernelDriver.kext

Then cross your fingers and:

sudo kextload Downloads/PsychtoolboxKernelDriver.kext
  • nv.zip — the source code for our 3D Vision USB hack. It busy-loops on CGDisplayBeamPosition (or the Psychtoolbox equivalent) and, when the beam reaches the top of the display, tells the IR transmitter to flip the shutters; the button on the transmitter lets you swap the eye assignments. It will basically peg an entire core doing this. It also contains logic for tracking the actual refresh interval (which never turns out to be precisely 120Hz), because the last thing you want is for the eyes to swap should this code miss a frame. That means you can leave it running for a long time (say, in a gallery): if the eyes are correctly assigned in the morning, they'll still be correctly assigned at closing time. A minimal sketch of the busy-loop idea follows this list.
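
To make the timing idea concrete, here's a minimal sketch of that busy-loop. It is not the nv.c source: send_eye() is a stub standing in for the USB transfer that tells the transmitter which shutter to open, and the refresh-interval tracking and servo logic are left out.

/* Minimal sketch of the busy-loop idea; not the actual nv.c code.
   send_eye() is a stub standing in for the USB transfer that tells the
   IR transmitter which shutter to open. */
#include <ApplicationServices/ApplicationServices.h>
#include <stdbool.h>

static void send_eye(bool left)
{
    (void)left;   /* in the real code: a libusb transfer to the transmitter */
}

int main(void)
{
    CGDirectDisplayID display = CGMainDisplayID();
    uint32_t last = 0;
    bool left = true;

    for (;;) {                        /* pegs a whole core, as advertised */
        /* On current ATI hardware the stock call just returns 0; you need
           the Psychtoolbox kernel driver's replacement for a real value. */
        uint32_t beam = CGDisplayBeamPosition(display);

        if (beam < last) {            /* beam wrapped: a new frame started */
            left = !left;             /* alternate eyes once per frame */
            send_eye(left);
        }
        last = beam;
    }
}

The real code also times these wraps to measure the true refresh interval, which is how it stays locked to the correct eye even if a frame is missed.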

Things you might need:

  • SwitchResX — OS X sometimes doesn't want to send 120Hz to your projector, and 120Hz is the only refresh rate this code will work at. This utility lets you set the refresh rate. It gets the job done, it's shareware, and you should buy it.
  • libusb — this is what we use to talk to the 3D Vision transmitter (a rough sketch of opening the device is shown after this list).
  • A copy of Windows to install and run the NVIDIA drivers. This project does not contain the firmware for the device (indeed, it contains no NVIDIA code of any kind), and whenever the transmitter loses power the firmware has to be uploaded again. We do that from a virtual machine, but you can also use Boot Camp (the firmware survives a reboot of the machine).
  • A first guess offset — passed to the nv command with the '-o' parameter. It's a floating point number, positive or negative and near 0, that corrects for the difference between 120.0Hz and the actual refresh rate of your display. You only have to get close; the command will servo in from there. You'll find the value depends on the graphics card and on the cable (DVI, HDMI or VGA).
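
For the libusb item above, opening the transmitter looks roughly like the sketch below (written against the libusb-1.0 API). The vendor and product IDs are placeholders on my part: check what your emitter actually enumerates as once the firmware is loaded. The command bytes themselves are in nv.c, not here.

/* Rough sketch of opening the transmitter with libusb-1.0.
   The VID/PID below are placeholders -- check what your emitter actually
   enumerates as; the real shutter commands live in nv.c. */
#include <libusb.h>   /* libusb-1.0; use pkg-config --cflags libusb-1.0 */
#include <stdio.h>

#define EMITTER_VID 0x0955   /* assumed NVIDIA vendor id; verify on your device */
#define EMITTER_PID 0x0007   /* placeholder product id; verify on your device */

int main(void)
{
    libusb_context *ctx = NULL;
    if (libusb_init(&ctx) != 0) {
        fprintf(stderr, "libusb_init failed\n");
        return 1;
    }

    libusb_device_handle *dev =
        libusb_open_device_with_vid_pid(ctx, EMITTER_VID, EMITTER_PID);
    if (dev == NULL) {
        fprintf(stderr, "transmitter not found (has the firmware been loaded?)\n");
        libusb_exit(ctx);
        return 1;
    }

    if (libusb_claim_interface(dev, 0) == 0) {
        /* ... libusb_bulk_transfer() the shutter commands here ... */
        libusb_release_interface(dev, 0);
    }

    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}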

The current code has been tested on the ATI 5770 and 5870 cards as well as on the 6750M graphics hardware in the most recent MacBook Pros; previous versions (included as alternative main functions in nv.c) have been known to work on pretty much every NVIDIA card we've seen in a Mac. On the one hand, this code has run for, literally, several CPU-years in various remote locations around the world; on the other, it's a horrible, ugly piece of work. Not only does it use a whole core to do its thing, but under high system load (when the nv utility gets kicked off its core) everything starts to flicker. As far as I can tell, timing-sensitive work like this really ought to be done inside the kernel.
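
Short of moving it into the kernel, one thing you could try (this is a sketch of an idea, not something the shipped code does) is to put the busy-loop thread under Mach time-constraint scheduling, which makes the scheduler much more reluctant to kick it off its core. The period/computation/constraint numbers below are illustrative, not tuned values.

/* Sketch only: requesting Mach time-constraint (real-time) scheduling for
   the busy-loop thread. The numbers are illustrative, not tuned values. */
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>

static kern_return_t go_realtime(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    /* convert nanoseconds to mach absolute-time units */
    double to_abs = (double)tb.denom / (double)tb.numer;

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(8333333 * to_abs);  /* ~one 120Hz frame */
    policy.computation = (uint32_t)(1000000 * to_abs);  /* ~1ms of work */
    policy.constraint  = (uint32_t)(4000000 * to_abs);  /* finish within ~4ms */
    policy.preemptible = 1;

    return thread_policy_set(mach_thread_self(),
                             THREAD_TIME_CONSTRAINT_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_TIME_CONSTRAINT_POLICY_COUNT);
}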

Finally, you'll need some code to run on OS X that actually produces something in stereo.

1. Field

Obviously I like Field. First put Field into stereo mode:

defaults write com.openendedgroup.Field stereo 1

(or you can run Field from the command line with -stereo 1)

then:

canvas = makeFullscreenCanvas()
shader = makeShaderFromElement(_self)
mesh = meshContainer()

canvas.getBothEyes() << shader << mesh

with mesh:
    for n in range(0, 10):
        mesh ** mesh ** [Vector3(n,0,-n), Vector3(1+n,0,-n), Vector3(1+n,1,-n)]

canvas.camera.io_position = Vector3(0.1, 0.1, 0.1)

You can download this sheet here.

2. GLUTStereo

Apple actually have a piece of example code that does stereo.
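
That kind of sample is built around the usual quad-buffered GLUT_STEREO pattern. As a rough sketch of that pattern (this is not Apple's code, and it assumes the driver will actually grant a stereo pixel format):

/* Rough sketch of the standard quad-buffered GLUT_STEREO pattern.
   Window creation fails if the driver doesn't offer a stereo visual. */
#include <GLUT/glut.h>   /* on OS X; <GL/glut.h> elsewhere */

static void display(void)
{
    /* Left eye */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... set the left-eye camera and draw the scene ... */

    /* Right eye */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... set the right-eye camera and draw the scene ... */

    glutSwapBuffers();   /* both back buffers swap at the vertical retrace */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("stereo sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}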