This page explains some of the VR support in Field.
Field has extensive support for 2- and 3-d drawing, which typically takes place inside a Stage — a place on the canvas (the main window), or a separate ‘fullscreen’ window, that acts as a container for geometry. Geometry is typically fashioned out of FLines — objects that, in turn, support drawing instructions that describe lines (which might be filled in to create solid areas).
To draw in VR in Field is to draw onto a Stage that just happens to have its contents visible in VR somehow. Field can do this in two ways.
These two techniques are really rather different, but, as far as you are concerned, they both consist of drawing things to the Stage. This means that you’ll absolutely want to become comfortable with that documentation before proceeding much further. And although the Stage is primarily documented as if it were oriented towards 2d drawing, Field secretly draws everything in 3d anyway.
This page also collects, for now, miscellaneous VR related 3d things in Field.
Field’s direct HMD support is the most efficient, richest and most direct route (and the most hardware-demanding — it only actually works on Windows, although Field is happy to pretend on OS X). To start drawing on an HMD, simply insert an ‘Oculus’-capable stage into your sheet. You can grab one from the ‘usual place’ — the command palette (ctrl-space), followed by ‘insert from workspace’:
The current one is called ‘graphics.stage.VR’:
That will give you a new Stage, which you’ll have to execute once (just alt-click on it) to initialize it. Upon doing that, your Oculus app will start up (and possibly try to sell you something) and, more importantly, a new, ominously black window will appear on your desktop:
That black window is the HMD debug display. It shows what is being shown in the headset, which is currently absolutely nothing. Let’s fix that.
This code draws a little magenta square centered on 0,0,-50. If we look in the right direction in the HMD, we’ll see it:
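The original code block is missing from this page; here’s a sketch of what it might look like, assuming the FLine drawing API from the Stage documentation (the `vec` color helper and the `lines` container are as described there — treat the exact names as assumptions if your build differs):

```
// a sketch, not verbatim — assumes the FLine / Stage API from the
// Stage documentation; adjust names to your build of Field

var f = new FLine()

// a small square in the z=-50 plane, centered on 0,0,-50
f.moveTo(-5, -5, -50)
f.lineTo( 5, -5, -50)
f.lineTo( 5,  5, -50)
f.lineTo(-5,  5, -50)
f.lineTo(-5, -5, -50)

f.filled = true            // fill the square in
f.color = vec(1, 0, 1, 1)  // magenta, fully opaque

_.stage.lines.f = f        // hand the line to the Stage to draw
```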
Two notes on cameras:
Note that the Stage shows a ‘front’ camera all the time, whereas the HMD debug window shows exactly what is entering your eyes. To reset where the ‘front’ of the VR experience is, right-click on the Stage and select ‘reset’.
Once you have had fun driving the camera around with the keyboard to find just the right angle on something, you might ask yourself: how do I do that with code? See here.
You can move the whole world around by selecting the stage and using the arrow keys. See the documentation for the keyboard camera at the bottom of the page.
The Oculus Rift’s idea of space and Field’s idea of (2d) space are fairly different. To reduce this difference, call layer.vrDefaults() to initialize the camera and some scaling constants to be much more uniform across our different canvases.
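As a sketch, assuming `layer` here is the Stage layer you are about to draw into:

```
// called once, before drawing into this layer — aligns the layer's
// camera and scaling constants with the Rift's conventions
layer.vrDefaults()
```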
The second kind of VR rendering supported by Field sends the contents of a Stage to a web browser. This typically means a cell phone (Firefox on Windows works as well, although it isn’t nearly as good as Field’s native HMD support).
All normal 2D Stages support this ‘hack’. To connect your web browser / cell phone to Field, browse to http://YOUR.IP.ADDRESS.HERE:8090/boot. To find your IP address you might need to open your network settings (or you can insert a ‘graphics.stage.webvr’ template following the instructions above, and that will tell you at the bottom). This is likely to work only when your cell phone and browser are on the very same WiFi network and, even then, it’s dependent on the whims of the security policies of whoever set up the network.
Field only updates the geometry on your cell phone when it redraws its screen. Make sure you call _.stage.frame() (or wiggle the canvas with the mouse).
If you draw too much then your cell phone won’t be able to handle it. If you change too much then your WiFi connection won’t be able to handle it.
Texturing is supported, but Field’s video support isn’t. We’ll build towards supporting the kinds of (stereoscopic) video that you can play in web-browsers instead.
If you restart Field or reinitialize the Stage you’ll need to reload your browser page to get new updates.
Right now the only other extension to Field needed to let you build things in 3D is the ability to constrain layers to be visible only in particular eyes. This is useful in the one case where you want to display different textures to different eyes (as in the stereo pair viewer). So, with code like this:
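The page’s original example is missing here; below is a hypothetical sketch of the idea. The layer-creation call and the per-eye switch (`vrEye`) are stand-in names — check your build of Field for the actual spelling:

```
// hypothetical sketch — 'withName' and 'vrEye' are stand-ins for
// whatever layer-creation and per-eye visibility calls your build
// of Field exposes

var left  = _.stage.withName("left")   // one layer per eye
var right = _.stage.withName("right")

left.vrEye  = "left"    // only drawn into the left eye
right.vrEye = "right"   // only drawn into the right eye

// then draw the left image of a stereo pair into 'left'
// and the right image into 'right'
```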
You can completely rebuild the original stereo pair viewer for Sketch 1. Of course, with Field there’s a lot more you can do: you can place those pairs anywhere you want (try one pair per box), in a timeline, with varying opacity, etc.
Layers also export the following functions:
- layer.vrViewerPosition() returns a Vec3 — the position of the inside of the viewer’s head. Note that drawing things here is a recipe for confusion! You might want to offset them by:
- layer.vrGazeDirection() returns a Vec3 that points along your nose.
- layer.vrViewerPosition() + layer.vrGazeDirection() is right in front of (and between) your eyes.
- a Vec3 giving where the Oculus Touch controller is for the left hand. It ‘points’, as if it were a gun, along:
- a Vec3 giving the direction the left hand controller is ‘pointing’.
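For instance, a sketch (assuming the FLine API from the Stage documentation; attaching the line via `layer.f` is an assumption of this sketch) that draws a short marker just in front of the viewer’s eyes:

```
// a sketch — assumes the FLine API from the Stage documentation,
// plus the vrViewerPosition / vrGazeDirection calls listed above

var here = layer.vrViewerPosition() + layer.vrGazeDirection()

var f = new FLine()
f.moveTo(here.x - 1, here.y, here.z)
f.lineTo(here.x + 1, here.y, here.z)
f.color = vec(1, 0, 1, 1)

layer.f = f   // note: through 'layer', not '_.stage'
```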
It’s worth closing your eyes and thinking, visually, for a moment about why, given that each layer has its own layer.camera, it’s important that all of these calls go through layer rather than _.stage. The answer is that they return positions that are ready to use to draw an FLine in this layer, given the camera associated with the layer. Your head, in short, is not in the space of the layer camera, but the FLine is — thus the layer needs to help out with the math.
If you have been working through the tutorials on this site diligently, you’ll have seen code such as:
These are snippets of code that manipulate the positions of things (including the positions of pieces inside FLines). Most of this site is dedicated to drawing things in 2D, so all of these operations operate on the ‘x’ and ‘y’ dimensions. But for VR it makes sense to manipulate things in 3D. Thus:
The crucial point here is that to get full 3d rotations, scales and transformations you should use
Recent builds of Field include support for spatial audio with head position governed by tracking devices. This technology stack is based on Google’s Omnitone / Resonance Audio work, which enables “higher-order ambisonic” encoding, rendering, reverberation and decoding to binaural displays. Field provides a simple interface to this engine running in a webpage.
Having inserted a WebVR- or Oculus-aware stage into the canvas, you’ll have access to a new variable — _.space. This functions a lot like _.stage: it’s available everywhere. To get the spatial audio engine running, open a browser webpage at the usual spot — http://YOUR.IP.ADDRESS:8090/boot. The spatial audio engine is included in our WebVR engine, but it’s also available at this URL when running Oculus.
_.space has an idea of a named layer that is created on demand:
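The original example is missing from this page; here is a hypothetical sketch of the shape of the call. The layer name (`birds`), the file path, and the `open` method are all stand-ins — check your build of Field for the actual API:

```
// hypothetical sketch — names below are stand-ins for whatever
// your build of Field exposes on _.space

var layer = _.space.birds         // a named layer, created on demand
layer.open("sounds/birds.wav")    // open a file, stream it to browsers
```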
That call will open the file and start streaming it to the browser(s). Given a layer, you can do the following with it:
Additionally you can:
See here for a list of materials.
For now, a quick hack to play back a Second Order Ambisonic (SOA) soundscape:
Finally, a way of loading point clouds that have been exported from COLMAP via “Export as text file…”, or .ply files without triangles (such as you might get from MeshLab). This is the example code you want: