Sketch 3 — Motion from Video

This week we’ve added two sources of motion to Field that your sketches and projects can draw upon. The first is the ability to read and use motion capture files in your work; the second is the ability to track pixels across video frames. Both features have very similar interfaces, since both, in effect, provide points on a Stage that move over time.

The goal, as before, is to produce a short animation clip that unfolds and satisfies when played fullscreen. You may build on the texturing work of the previous Sketch (2), or make a more complex piece of geometry from Sketch 1 move based on motion capture. Cite motion capture sources and video sources correctly, and ask for help transforming video into .jpg directories if you need it.

Motion capture

Let’s start with a definition of a Motion Capture “File”. For our purposes here, a MoCap file is a set of 3D points that move over time. These are stored in .c3d files.

There are stronger and weaker definitions of motion capture. The very best, carefully cleaned motion capture file (which can cost around $1-10 per second to capture) will give you not just points but a whole, carefully named skeleton, complete with joint rotations. The very worst will simply give you a set of points at each moment in time with no coherent connection between the points that appear at one moment and any set that comes before or after (that is: the 5th point in the set at time 10 seconds is not necessarily the 5th point at time 10.1 seconds). We’ve picked a happy ‘middle’ kind of motion capture here because we can find lots of free motion capture samples online (see canvas for resources) that are clean enough to fit our definition. You may also encounter motion capture files in odd-ball coordinate systems.

Here’s how we draw it on a Stage:

// load a motion capture file
var motion = new Mocap("/Users/marc/Desktop/HDM_bk_01-01_01_120.c3d")

// make a layer called 'dots'
var layer = _.stage.withName("dots")

// build an 8000-frame-long animation, taking 'time' from 0 to 1
for (var time of Anim.lineRange(0, 1, 8000))
{
	var dots = new FLine()
	dots.color = vec(1, 1, 1, 1)
	layer.lines.dots = dots

	// for all of the points in the motion capture file
	for (var x of motion.allPoints())
	{
		// find their position at time 'time'
		var at = motion.positionAtTime(time, x)

		// and draw a circle there of radius 2
		dots.circle(at.x, at.y, 2)
	}

	// wait until the next frame
	_.stage.frame()
}

Note the nested loop structure in the code above: over 8000 frames, take time from 0 to 1; for every frame, draw every motion capture point x where it is at time time. Having done that, wait (_.stage.frame()) for the frame to make it out to the monitor.

This yields:

Field is handling a lot of tedious things here for you, at least as best it can. Motion capture resources vary a lot in terms of quality. Field is interpolating over missing points (as best it can) and trying to center and scale the data so that it sits near the middle of the canvas (around 50, 50). Functions that you might be interested in include motion.allPoints() and motion.positionAtTime(time, point), used above, and motion.velocityAtTime(time, point), the velocity analogue of positionAtTime (see prompt 5 below).
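Here, for instance, is a minimal sketch of that data mapping (prompt 5 below takes it up), assuming velocityAtTime shares positionAtTime’s signature and returns a vector with x and y components; the base radius of 0.5 and the scale factor of 10 are just starting points to tune:

// load a motion capture file (same file as above)
var motion = new Mocap("/Users/marc/Desktop/HDM_bk_01-01_01_120.c3d")
var layer = _.stage.withName("dots")

for (var time of Anim.lineRange(0, 1, 8000))
{
	var dots = new FLine()
	dots.color = vec(1, 1, 1, 1)
	layer.lines.dots = dots

	for (var x of motion.allPoints())
	{
		var at = motion.positionAtTime(time, x)

		// assumed: velocityAtTime mirrors positionAtTime's signature
		var v = motion.velocityAtTime(time, x)

		// map speed to radius, so faster markers draw bigger dots
		var speed = Math.sqrt(v.x * v.x + v.y * v.y)
		dots.circle(at.x, at.y, 0.5 + speed * 10)
	}

	// wait until the next frame
	_.stage.frame()
}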

Prompts:

  1. Integrate this with the texture mapping provided by the Stage — can we paint using motion capture material?
  2. Load two different motion capture files and arrive at a strategy for connecting the points of them.
  3. Investigate connecting the points together with lines to suggest skeletons or cats-cradles (a starting point is sketched after this list).
  4. Consider extracting whole lines of motion out of a motion capture file: build something with those shapes.
  5. Consider drawing this motion as a ‘data mapping’ problem: start by mapping velocityAtTime to the size of the dots.
  6. Make your own Norman McLaren’s Pas de deux. Make your own Trisha Brown’s It’s a Draw.
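And here is the promised starting point for prompt 3: a sketch that assumes nothing about which points belong to which limbs and simply chains every point to the one before it in the file, more cat’s-cradle than skeleton:

var motion = new Mocap("/Users/marc/Desktop/HDM_bk_01-01_01_120.c3d")
var layer = _.stage.withName("web")

for (var time of Anim.lineRange(0, 1, 8000))
{
	var web = new FLine()
	web.color = vec(1, 1, 1, 0.5)
	layer.lines.web = web

	var previous = null
	for (var x of motion.allPoints())
	{
		var at = motion.positionAtTime(time, x)

		// chain this point to the previous one
		if (previous != null)
		{
			web.moveTo(previous.x, previous.y)
			web.lineTo(at.x, at.y)
		}
		previous = at
	}

	// wait until the next frame
	_.stage.frame()
}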

Motion Tracking

Motion capture can be seen as a way of making a hard computer vision problem — locating and tracking parts of a video frame across frames — easy for computers to achieve. Motion capture cameras see the world as a group of very circular white points on a perfect black background. What can we get out of actual, “normal” video frames?

Typically we’ll get less accurate data, for two main reasons: things tend to change appearance as they change position, ending up looking quite different than they did before; and they can end up looking like something else, someplace else in the scene. But computers can often do fairly well at this problem. To let you explore this, we’ve added a fairly state-of-the-art general object tracker to Field that you can run on your own video resources.

Consider this well-commented code:

// build a layer with a video associated with it
var layer = _.stage.withImageSequence("/Users/marc/temp/boofcv/data/example/background/parking_lot_scan.mp4.dir/")

// and just make an FLine that draws this image directly over everything
var f = new FLine().rect(0,0,100,100)
f.color=vec(1,1,1,1)
layer.bakeTexture(f)
f.filled=true
layer.lines.f = f

// we'll start at time 0
layer.time=0

// make a new set of "Trackers" for this layer that use the underlying video
var trackers = new Trackers(layer)

// we'll have a new layer to put our annotations in
var top = _.stage.withName("annotations")

// let's start with a rectangle in just the right spot
var r = new FLine().rect(36,77,30,20)

// make sure that we can actually see it
r.color=vec(1,0,0,1)
r.thicken = 1

// add it to the 'top' layer
top.lines.clear()
top.lines.r = r

// track this FLine through subsequent frames
var track = trackers.trackFLine(r)

// this is just a standard animation loop
var t = 0.0
while(true)
{
	t = t + 1

	// move forward slowly
	layer.time = t/2000.0

	var rNow = track(layer.time)

	// track(time) tells us where (and how big) that FLine now is;
	// rNow might not contain any drawing instructions if the tracker
	// has lost track of the shape
	top.lines.r = rNow
	
	// pause for a frame to be drawn
	_.stage.frame()
}

There are three key lines here: new Trackers(layer), which builds a group of trackers that can see the layer’s underlying video; trackers.trackFLine(r), which starts tracking the part of the scene covered by the FLine r; and track(layer.time), which returns where that FLine has moved to at a given time.

As well as tracking FLines, you can ask for points to be tracked: var track = trackers.trackPoint(x, y, radius) lets you write point = track(layer.time) to get a new point on the canvas where that point has moved to. radius controls how large a neighborhood the tracker pays attention to; make this as large as possible (but no larger) for the best tracking.
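For instance, here’s a minimal sketch of the point-tracking variant, reusing the layer, trackers and top from the example above; the point of interest at 50, 50 and the radius of 5 are placeholders, and it assumes the returned point has x and y components:

// track a single point instead of a whole FLine
var track = trackers.trackPoint(50, 50, 5)

var t = 0.0
while (true)
{
	t = t + 1

	// move forward slowly
	layer.time = t / 2000.0

	// where has that point moved to?
	var point = track(layer.time)

	// draw a small red circle there
	var marker = new FLine().circle(point.x, point.y, 2)
	marker.color = vec(1, 0, 0, 1)
	top.lines.marker = marker

	// pause for a frame to be drawn
	_.stage.frame()
}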

Ultimately we have tracking like this:

Motion Tracking on Webcam layers

Webcam layers are a lot like texture and video layers, except that rather than a static texture, or a video with a controllable layer.time, they draw their contents from a live camera. We create them with _.stage.withWebcam(). Tracking for them proceeds a little differently: the computer can’t track anything in advance, because it doesn’t know what the live camera feed will give it.

Here’s the example you need. Let’s make two boxes. One will just show us the video:

var layer = _.stage.withWebcam()

var t = 0
while(true)
{
	layer.lines.clear()
	var f = new FLine().rect(0,0,100,100)
	layer.bakeTexture(f)

	f.color=vec(1,1,1,1)
	f.filled=true
	f.stroked=false

	layer.lines.add(f)
	_.stage.frame()
}

Run that and you’ll see the contents of your webcam. (You might get an error the first time you run it: if your webcam takes more than 5 seconds to initialize, Field will accuse you of writing code that is taking too long. If this happens, just run it again; second and subsequent calls to withWebcam complete instantly.)

Now let’s track a rectangle. In a second box:

// secret incantation to import the new Webcam tracker
var TrackersWebcam = Java.type("fieldboof.TrackersWebcam")

// find the webcam layer
var layer = _.stage.withWebcam()

// make a new group of trackers
// (only one of these can be running at any one time)
var webcamTracker = new TrackersWebcam(layer)

// make a new FLine that we're going to move around
var f = new FLine()
f.rect(30,20,20,20)

// track this fline
var tracker = webcamTracker.trackFLine(f)

// let's draw it to a layer above
var annotationLayer = _.stage.withName("a new layer")
while(_.stage.frame())
{
	// this call tells us where 'f' has ended up 
	f = tracker()
	
	// make it red
	f.color=vec(1,0,0,1)

	// and thick
	f.thicken=1
	
	// draw this line
	annotationLayer.lines.f = f	
}

This yields, at least for the perfect circular video tracking targets I wear on my face:

It’s likely that we can tweak the underlying tracker parameters for better performance (in particular the first few frames it spits out seem very unreliable).

Prompts:

  1. By picking points, perhaps at random, perhaps with some other strategy, can one build a drawing that represents or refers to the motion of a scene? (A starting point is sketched after this list.)
  2. What does it mean to paint one scene (texture) with the motion from another?
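As a starting point for the first prompt, here’s a hedged sketch that scatters ten trackPoint trackers at random over the video layer from earlier and accumulates their paths as trails. The path, the tracker count, and the radius of 5 are placeholders to adjust, and it assumes, as above, that tracked points come back with x and y components:

// use the same video layer and Trackers as in the example above
var layer = _.stage.withImageSequence("/Users/marc/temp/boofcv/data/example/background/parking_lot_scan.mp4.dir/")
var trackers = new Trackers(layer)
var top = _.stage.withName("trails")

// scatter ten point trackers at random over the canvas
var tracks = []
var history = []
for (var i = 0; i < 10; i++)
{
	tracks.push(trackers.trackPoint(Math.random() * 100, Math.random() * 100, 5))
	history.push([])
}

var t = 0.0
while (true)
{
	t = t + 1
	layer.time = t / 2000.0

	// remember where every tracked point is now
	for (var i = 0; i < 10; i++)
		history[i].push(tracks[i](layer.time))

	// rebuild one FLine that draws every trail so far
	var trails = new FLine()
	trails.color = vec(1, 1, 1, 0.5)
	for (var i = 0; i < 10; i++)
	{
		trails.moveTo(history[i][0].x, history[i][0].y)
		for (var p of history[i])
			trails.lineTo(p.x, p.y)
	}
	top.lines.trails = trails

	// pause for a frame to be drawn
	_.stage.frame()
}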