Field's graphics system can be used to drive stereo, active-shutter displays. While something of a rarity in the Mac world, this part of Field has been very well tested: almost all of OpenEnded Group's art in the last three years has been in stereo, including a "feature length" experimental film. It's an emerging medium that we're very invested in.
Conventional wisdom is that consumer-level 3d is leaving both the Mac and the independent artist / programmer behind: it's being driven by games (not many of those on the Mac) and Hollywood 3d Blu-ray discs (likewise). However, as far as we can tell, OS X might be the ideal platform for the independent code-writing stereo artist; it's just that nobody knows it yet.
The short version of this page: an upcoming version of Field will include super-experimental support for the NVidia 3d Vision IR Glasses on OS X and complete, robust support for quad-buffered stereo OpenGL inside Field's graphics system.
The fundamental trick in "stereo" graphics is to figure out how to deliver one image to the left eyes of your audience and a different image to the right. Active-shutter stereo does this by having the audience wear glasses that flicker back and forth very quickly, with this flicker carefully synchronized to the images being sent by the computer.
People have been doing active-shutter stereo for decades, but obviously we're hearing a lot about it now, so what's changed? The principal thing that's changed is that we now have a range of display devices that can flicker back and forth quickly enough, and cleanly enough, that your audience doesn't immediately get a headache. These devices include high-end digital cinema projectors; low-end consumer-grade single-chip DLP projectors; DLP rear-projection TVs; and 120Hz LCD TVs and monitors.
But before we address the specific hardware issues surrounding stereo, we have a software issue. Clearly, to do this properly we need to be able to alternate between left and right renders of our 3d scene at around 120Hz. But 120Hz is very fast, far faster than we can comfortably render a 3d scene: a new stereo pair every 1/60th of a second leaves only around 8ms of rendering time per eye. What happens if our scene gets so complex that we deliver a frame late? The eyes switch over, left becomes right and vice versa. Very painful! And if we start taking 1/30th of a second to render our images we've dropped back down into headache land.
We need some guarantee that, regardless of how complex our scene is and how long the graphics card takes to render it, the eyes will keep getting images at 120Hz: the last complete pair simply repeats until a new pair of images has finished rendering.
This is where "Quad buffered" stereo comes in. Typically OpenGL is "double buffered" — you don't get to see the GPU actually drawing the scene in front of your eyes, rather you are drawing to the "back buffer" that is then "flipped" once the drawing is complete to become the "front buffer". Quad buffered stereo takes this one step further — your "left" and "right" images have their own separate back buffers that are not flipped forward until both have finished drawing. Quad buffered stereo used to be the exclusive domain of high end graphics cards — it was one of the things that differentiated the $2000 graphics card from the $350 graphics card.
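Field handles all of this for you, but to make the mechanics concrete, here is a minimal sketch of what a quad-buffered render loop looks like at the raw OpenGL level. This is PyOpenGL + GLUT, not Field code, and drawScene is a placeholder for your own drawing:

from OpenGL.GL import *
from OpenGL.GLUT import *

def drawScene(eye):
    # placeholder: draw your geometry here, offset for the given eye
    pass

def display():
    glDrawBuffer(GL_BACK_LEFT)                           # the left eye's own back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    drawScene(-1)
    glDrawBuffer(GL_BACK_RIGHT)                          # the right eye's own back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    drawScene(+1)
    glutSwapBuffers()                                    # both back buffers flip forward together

glutInit()
# GLUT_STEREO requests a quad-buffered visual (this fails if the driver can't supply one)
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)
glutCreateWindow("quad buffered stereo")
glutDisplayFunc(display)
glutMainLoop()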
But it turns out that all the graphics cards we've tested recently on the Mac support quad buffered stereo just fine. That's half the puzzle right there.
The other half is how to synchronize the glasses with the display of the images. Broadly speaking there are three ways of doing this — get the sync from the graphics card, get the sync from the display device, or get the sync from someplace else.
Getting the sync from the graphics card was the other thing that $2000 bought you: a special 3d sync output port. This provides a signal that "labels" the images coming out of the DVI port of the graphics card as "left" or "right". You'd feed that signal either into your glasses directly, or into an IR transmitter for your "wireless" glasses. Indeed, even today, you can purchase a Quadro 4800 "Mac Edition" in order to get your 3d sync port. This card is expensive (~$1800) and, alarmingly, not as fast as recent consumer graphics cards costing a quarter of that price. Over the decades leading up to this "consumer 3d" moment a variety of other ways have been developed for getting sync out of graphics cards that don't involve paying extra for it.
Recently this technique has begun to fall by the wayside. It was always a little dangerous: after all, your graphics card can't know exactly when the images are going to be displayed. Sync ultimately has to come via the projector or the screen. Thus, 3d TVs have sync-out ports and/or IR transmitters built into them directly; the TV is best placed to know when the photons coming out of the front of it represent one image or the other.
A parallel technique is DLP-Link. This provides sync from the projector / DLP TV, but it doesn't use IR; it uses flashes of light coming out of the display itself. You'll need DLP-Link glasses to read these pulses (for example the XPanD X102 glasses). There are now a whole bunch of "3d-ready" projectors being sold for completely reasonable (~$500) prices that actually work great with OS X and consumer graphics cards. You just need to plug in the projector, tell Field you are doing stereo (see below) and off you go.
Enter NVidia's "3d Vision": consumer-level 3d glasses, using an IR transmitter connected to the computer via USB. Possibly an ideal solution, since it gives you the best of IR-sync'd glasses (it works with LCDs, and there are no white-light flashes on DLP devices) but without the expensive graphics card. ATI are rumored to be cooking up something similar.
One problem: it's PC only. NVidia are targeting the big games market (and they've spent a lot of resources hacking their drivers to turn monoscopic games into stereo games). This is an unfortunate state of affairs: we have excellent quad-buffered stereo support under OS X but no sensible IR-based sync option.
But it's a problem we can fix. Fundamentally there's no rocket science here — the transmitter is an IR LED that turns on and off. Inspired by the completely nascent libnvstusb project we've been able to produce a Mac "driver" for the USB transmitter that works with the 3d projector we have in our studio.
While we're still developing this (and we need your help to expose our driver to a variety of 3d projectors and screens) we can say a few things about how it's going to work. Firstly, you'll need the hardware from NVidia. Secondly, you'll need the software from NVidia: you'll have to install the 3d Vision drivers under Boot Camp or a virtual machine host on your Mac.
But if you already have a working 3d Vision setup on a Boot Camp'd Mac then we'd appreciate your support. The more devices we can try this on the better.
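In the meantime, a quick way to check whether the transmitter is visible to OS X at all is a small pyusb probe. This is a sketch under assumptions: the vendor / product IDs 0x0955 / 0x0007 are what the libnvstusb project reports for the emitter, and they may not cover every hardware revision:

# sanity check: does the 3d Vision IR emitter show up on the USB bus?
# (requires pyusb; the device IDs are taken from the libnvstusb project)
import usb.core

emitter = usb.core.find(idVendor=0x0955, idProduct=0x0007)
if emitter is None:
    print "no 3d Vision emitter found; if yours uses different IDs, let us know"
else:
    print "found emitter:", emitter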
Current drivers can be found at ActiveStereoWorkingGroup.
If you are using a DLP-Link based system, or some exotic Quadro / VESA system then you can ignore all of this. Field will probably just work.
There are a few things you need to know to get started doing stereo in Field (assuming that you are already familiar with Field's graphics system). Firstly:
defaults write com.openendedgroup.Field stereo 1
Setting this property (or passing it in as a command line parameter) will make graphics contexts become active stereo when they are created.
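You can verify the setting, or switch it off again, with the same tool:

defaults read com.openendedgroup.Field stereo
defaults write com.openendedgroup.Field stereo 0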
Next, you are probably used to seeing something like:
canvas = makeFullScreenCanvas()
...
canvas << shader
This adds shader to the list of things to render. In stereo, Field keeps two lists of things to render, one for each eye. So the above code is now wrong: it's only going to attach shader to the left eye. Ouch!
Instead, you need different code:
canvas.setSceneListSide(1)    # subsequent additions go to the left eye's list
canvas << somethingToTheLeftEye
canvas.setSceneListSide(0)    # subsequent additions go to the right eye's list
canvas << somethingToTheRightEye
Quite often, however, you want to draw an object in both eyes. (When would you ever not? When you are doing some elaborate FBO-based offscreen render, you end up having to do two elaborate FBO-based offscreen renders, one for each eye.) So:
canvas.getBothEyes() << somethingToBothEyes
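Put together, a typical stereo setup might look something like this (the shader names are hypothetical stand-ins for whatever you've built with Field's graphics system):

canvas = makeFullScreenCanvas()

# geometry that both eyes should see
canvas.getBothEyes() << sharedShader

# anything eye-specific, for example the results of per-eye FBO passes
canvas.setSceneListSide(1)
canvas << leftEyeOnlyShader
canvas.setSceneListSide(0)
canvas << rightEyeOnlyShader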
The final ingredient is camera control. In Field, in stereo mode, all cameras are now stereo cameras. A stereo camera has everything a normal camera has, plus additional parameters for offsetting the two views that ultimately get sent to each eye.
There are two ways of doing this offsetting — something that's very well explained here.
Field supports both ways, "toed-in" stereo and "asymmetric frustum parallel axis projection" stereo:
# "toed-in" stereo
canvas.camera.setIOPosition(1.3)
# "frustum shift" stereo
canvas.camera.setIOFrustra(0.3)
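For example, to experiment with the frustum-shift style on its own, you might start from something like this and nudge the number until the depth feels comfortable. The 0.3 is an arbitrary starting value, and we're assuming here that setting the other parameter to zero removes its contribution:

canvas = makeFullScreenCanvas()

# try frustum-shift stereo alone (assuming 0 disables the toed-in offset)
canvas.camera.setIOPosition(0)
canvas.camera.setIOFrustra(0.3)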
You'll very much have to experiment with these settings. In our experience, what makes good 3d is very scene-, scale- and even speed-dependent. Happy hacking!
A short snippet of code can be downloaded here.