This week we’ve added two sources of motion to Field that your sketches and projects can draw upon. The first is the ability to read and use motion capture files in your work; the second is the ability to track pixels across video frames. Both features have very similar interfaces, since both, in effect, provide points on the Stage that move over time.
The goal, as before, is to produce a short animation clip that unfolds and satisfies when played fullscreen. You may build on the texturing work of the previous Sketch (2), or cause a more complex piece of geometry from Sketch 1 to move based on motion capture. Cite motion capture sources and video sources correctly, and ask for help transforming video into .jpg directories if you need it.
Let’s start with a definition of a Motion Capture “File”. For our purposes here, a MoCap file is a set of 3d points that move over time.
There are stronger and weaker definitions of motion capture — the very best, carefully cleaned motion capture file (which can cost around $1-10 per second to capture) will give you not just points but a whole, carefully named skeleton, complete with joint rotations. The very worst will simply give you a set of points at each moment in time with no coherent connection between a set of points that appear at one moment and any set that comes before or after (that is: the 5th point in the set at time 10 seconds is not necessarily the 5th point at time 10.1 seconds). We’ve picked a happy ‘middle’ kind of motion capture here because we can find lots of free motion capture samples online (see canvas for resources) that are clean enough to fit our definition. You may also encounter motion capture files in odd-ball coordinate systems.
Here’s how we draw it on a Stage:
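The code listing that originally appeared here didn’t survive; a minimal Field-style sketch consistent with the description that follows (the loop bounds and function names are taken from this document, the drawing details are assumed) might look like:

```
for (var frame = 0; frame < 8000; frame++) {
	var time = frame / 8000                        // normalized time, 0 to 1
	for (var x = 0; x < motion.numPoints(); x++) {
		var p = motion.positionAtTime(time, x)     // where is point x right now?
		// ... draw a small dot at p on the Stage ...
	}
	_.stage.frame()                                // wait for the frame to reach the monitor
}
```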
Note the nested loop structure here in the code above. Over 8000 frames, take time from 0 to 1. For every frame, draw every motion capture point x where it is at that time. Having done that, wait (_.stage.frame()) for a frame to make it out onto the monitor.
Field is handling a lot of tedious things here for you, at least as best it can. Motion capture resources vary a lot in terms of quality. Field is interpolating over missing points (as best it can) and trying to center and scale the data so that it sits near the middle of the canvas, at 50, 50. Functions that you might be interested in include:
motion.positionAtTime(0.3, 5) — this is the most important call. This gets the position of point 5 at time 0.3. Points are numbered from 0 to however many points you have in the file. Time is measured from 0 (the start of the file) to 1 (the end of the file).
motion.numPoints() — returns the number of points of motion that are in the file.
motion.nameOfPoint(4) — returns the name of point number 4, if it can be dug out of the file. Depending on who captured this you’ll find things like RCLAV for right clavicle; sometimes you’ll get right clavicle, or right clavical (sic) or, most likely of all, no name at all.
motion.velocityAtTime(0.2, 5) — returns the velocity of point 5 at time 0.2. Time here goes from 0 to 1 over the duration of the clip (just like layer.time for video layers). motion.velocityAtTime(0.2, 5).length() will give you the ‘speed’ at this time.
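The ‘speed’ that a .length() call computes is just the Euclidean length of the 3d velocity vector. This helper shows the underlying math; it is an illustration, not part of Field’s API:

```javascript
// Speed is the Euclidean length of a 3d velocity vector:
// sqrt(x^2 + y^2 + z^2)
function vectorLength(v) {
	return Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
}

var velocity = { x: 3, y: 4, z: 0 }   // a made-up velocity sample
console.log(vectorLength(velocity))   // 5
```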
motion.center() — returns the center of all of the motion capture data. That’s the average position of everything over all time. This should be close to 50, 50, something because Field is trying to scale and transform the motion capture data to appear in the middle of a 100x100 Stage.
motion.centerAtTime(0.3) — returns the center of all of the points at time 0.3.
motion.rotate(position, center, 45) — returns position rotated around center by 45 degrees in the xz plane. This is just a helper to stop you having to get into full 3d math just yet. Motion capture is 3 dimensional, so one axis is missing when you draw things on the Stage. This function helps you rotate around the y axis to show you the motion capture from other angles.
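The math behind a rotation like this is small. Here is a self-contained sketch of rotating a point around a center in the xz plane (that is, about the y axis); the direction of positive rotation is an arbitrary choice here and may differ from Field’s convention:

```javascript
// Rotate point p around center c by `degrees` in the xz plane (about the y axis).
// The sign convention (which way is "positive") is an arbitrary choice.
function rotateAboutY(p, c, degrees) {
	var a = degrees * Math.PI / 180
	var dx = p.x - c.x
	var dz = p.z - c.z
	return {
		x: c.x + dx * Math.cos(a) + dz * Math.sin(a),
		y: p.y,                                       // y is unchanged
		z: c.z - dx * Math.sin(a) + dz * Math.cos(a)
	}
}

var p = rotateAboutY({ x: 1, y: 2, z: 0 }, { x: 0, y: 0, z: 0 }, 90)
// p.x ≈ 0, p.y = 2, p.z ≈ -1 (under the sign convention chosen above)
```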
Some things to try on the Stage — can we paint using motion capture material? Try connecting velocityAtTime to the size of the dots.
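One way to connect velocityAtTime to dot size is a clamped linear map from speed to radius. This mapping (and its constants) is an illustration, not something Field provides:

```javascript
// Map a speed onto a dot radius: faster points get bigger dots,
// clamped so dots never vanish or take over the Stage.
// minR/maxR/scale are free parameters to tune by eye.
function dotRadius(speed, minR, maxR, scale) {
	return Math.min(maxR, Math.max(minR, minR + speed * scale))
}

console.log(dotRadius(0, 1, 6, 0.5))    // 1 (slow point: small dot)
console.log(dotRadius(100, 1, 6, 0.5))  // 6 (fast point: clamped at the max)
```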
Motion capture can be seen as a way of making a hard computer vision problem — locating and tracking parts of a video frame across frames — easy for computers to achieve. Motion capture cameras see the world as a group of very circular white points on a perfect black background. What can we get out of actual, “normal” video frames?
Typically we’ll get less accurate data, for two main reasons: things tend to change appearance as they change position, so they can end up looking quite different than they did before; and they can end up looking like something else, someplace else in the scene. But computers can often do fairly well at this problem. To let you explore this, we’ve added a fairly state-of-the-art general object tracker to Field that you can run on your own video resources.
Consider this well-commented code:
There are three key lines here:
var trackers = new Trackers(layer) — this takes a layer (which has a video texture associated with it) and builds a Trackers object for it. This object can be used to make tracks:
var track = trackers.trackFLine(r) — this takes an FLine r and tracks it over time. This ‘track’ starts at layer.time and moves forward. This call might take a while the first time you call it for any particular FLine; subsequent calls will be quite fast.
var rNow = track(layer.time) — for a particular time (here we just use layer.time) where is that shape now? This call returns a new FLine that’s a copy of the original one passed to trackFLine but moved (and scaled) to be in the right spot. If the tracker has lost the shape it might return an empty FLine.
Other than tracking FLines you can also ask for points to be tracked:
var track = trackers.trackPoint(x, y, radius) lets you write
point = track(layer.time) to get a new point on the canvas where that point has moved to.
radius controls how large a neighborhood the tracker pays attention to — make this as large as possible (but no larger) for the best tracking.
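Putting the three calls together (the layer and the FLine r are assumed to have been set up as in earlier sketches; this is a sketch, not a complete box):

```
var trackers = new Trackers(layer)      // build a Trackers for this video layer
var track = trackers.trackFLine(r)      // slow the first time, fast afterwards
var rNow = track(layer.time)            // r, moved (and scaled) to where it is now

// points work the same way:
var trackP = trackers.trackPoint(50, 50, 10)   // x, y, radius: values illustrative
var pNow = trackP(layer.time)
```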
Ultimately we have tracking like this:
Webcam layers are a lot like texture and video layers, except that rather than a static texture, or a video with a controllable layer.time, they draw their video contents from a live camera. We create them with _.stage.withWebcam(). Tracking for them proceeds a little differently: the computer can’t track anything in advance, because it doesn’t know what the live camera feed will give it.
Here’s the example you need. Let’s make two boxes. One will just show us the video:
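The listing for that first box didn’t survive here. Based on the withWebcam() call described above, it presumably looks something like this (the drawing of the textured rectangle follows the pattern of the earlier texture-layer sketches and is an assumption):

```
var layer = _.stage.withWebcam()   // a layer whose texture is the live camera feed
// ... draw a rectangle in this layer, textured with the webcam image ...
```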
Run that and you’ll see the contents of your webcam. (You might get an error the first time you run it if your webcam takes more than 5 seconds to initialize; Field will accuse you of writing code that is taking too long. If this happens, just run it again; second and subsequent calls to withWebcam complete instantly.)
Now let’s track a rectangle. In a second box:
This yields, at least for the perfect circular video tracking targets I wear on my face:
It’s likely that we can tweak the underlying tracker parameters for better performance (in particular the first few frames it spits out seem very unreliable).