Recent builds of Field include support for spatial audio with head position governed by tracking devices. The technology stack is based on Google's Omnitone / Resonance Audio work, which provides higher-order ambisonic encoding, rendering, reverberation, and decoding to binaural output. Field provides a simple interface to this engine running in a webpage.
Having inserted the template "spatial_audio2" into your document, you'll have access to a new variable _.space. This functions a lot like _.stage: it's available everywhere. To get the spatial audio engine running, open a browser page at the usual spot, http://localhost:8090/. The spatial audio engine is included in our WebVR engine, but it's also available at this URL when running Oculus / Vive.
At that URL you should see a screen like this:
In order to make sound from a webpage "unprovoked" (that is, not in response to a mouse click), we need a little hack. First, alt-click the box "mouse_down_hack" that arrived with the template:
Then you can click somewhere in the webpage to give it permission to play audio. From that point on, the ambisonic audio engine will be running!
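For context, this hack is needed because of browser autoplay policies: a page's AudioContext starts out suspended and can only be resumed from a user gesture. In plain Web Audio terms (a generic sketch, not Field's actual code), the page has to do something like this:

```javascript
// Generic Web Audio pattern, not Field's implementation: the AudioContext
// stays suspended until it is resumed from inside a user gesture.
const context = new AudioContext();
document.addEventListener("mousedown", () => {
  context.resume().then(() => console.log("audio context running"));
}, { once: true });
```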
Like _.stage, _.space has an idea of a named layer that is created on demand. A sketch of what that looks like (the calls below are assumptions modelled on the stage API, not confirmed _.space methods):
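```javascript
// Sketch only: these names are assumptions modelled on _.stage,
// not the confirmed _.space interface.
var birds = _.space.withName("birds")   // fetches the layer, creating it if it doesn't exist yet
birds.open("/path/to/birdsong.wav")     // hypothetical call: point the layer at a sound file
```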
That call will open the file and start streaming it to the browser(s). Given a layer, you can do the following with it:
Additionally you can:
See here for a list of materials.
Right now, the only other extension to Field needed to let you build things in 3D is the ability to constrain layers to be visible in one particular eye. This is useful when you want to display different textures to each eye (as in the stereo pair viewer). So with code along these lines (the per-eye flags shown below are hypothetical names, standing in for whatever switch the template actually exposes):
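```javascript
// Hypothetical property names: stand-ins for whatever per-eye visibility
// switch the template provides for stage layers.
var left = _.stage.withName("leftImage")
left.visibleInLeftEye = true
left.visibleInRightEye = false

var right = _.stage.withName("rightImage")
right.visibleInLeftEye = false
right.visibleInRightEye = true
```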
You can completely rebuild the original stereo pair viewer for Sketch 1. Of course, with Field there's a lot more you can do: you can place those pairs anywhere you want (try one pair per box), put them on a timeline, vary their opacity, and so on.