How many different ways can you visualize the same motion capture data of two people in motion? Once movement is transformed into data, the possibilities for visualizing it are nearly endless. That question compelled me to work with two other students to motion capture a pair of dancers and a pair of fighters.
With two tools at our disposal, the Kinect V2 and an OptiTrack system, we set out to capture and then render a few different scenes using mainly Maya, Processing, and openFrameworks. My role was mainly setting up the capture process (both OptiTrack and Kinect) and getting the data out of those systems, while Michelle Ma and Charlotte Stiles developed their own wonderful animations of the data in Maya.
Tools: OptiTrack system, Kinect V2, Processing, openFrameworks
Deliverable: Videos, gifs, mocap data, code
Team: Charlotte Stiles / Art, Michelle Ma / Art & CS
Role: sensor setup, lighting, filming, data cleanup and processing, point cloud processing
Multi-device setup and tracking
To do the capture justice, we found two dancers and two actors to perform a few scenes. We tested all the gear and rehearsed the scenes a few times before recording roughly 4-6 fighting takes and 3-5 dancing takes.
The OptiTrack system at our disposal had only 8 IR cameras, so our capture volume was limited. We also had a limited number of markers, so while we suited our actors up with full-body marker sets, the tracking was less precise than we would have liked. Finally, we were limited by the storage space required to record Kinect V2 streams live using Microsoft's Kinect Studio. The software lets you record a live Kinect session and then play it back on your local computer as if the original Kinect were connected. This was a great way to get up and running on borrowed laptops, but the recordings generate a huge amount of data that quickly fills a hard drive.
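To see why the recordings fill a drive so fast, a back-of-the-envelope estimate helps. The resolutions and frame rates below are the published Kinect V2 sensor specs; the assumption that Kinect Studio stores the streams roughly uncompressed is ours, so treat the result as an order-of-magnitude sketch rather than a measured figure:

```python
# Rough data-rate estimate for raw Kinect V2 recording.
# Sensor specs are published Kinect V2 values; the "stored roughly
# uncompressed" assumption is ours (Kinect Studio's actual on-disk
# format may differ).

DEPTH_W, DEPTH_H, DEPTH_BYTES = 512, 424, 2    # 16-bit depth frame
COLOR_W, COLOR_H, COLOR_BYTES = 1920, 1080, 2  # YUY2 color, 2 bytes/px
FPS = 30

depth_rate = DEPTH_W * DEPTH_H * DEPTH_BYTES * FPS  # bytes/second
color_rate = COLOR_W * COLOR_H * COLOR_BYTES * FPS  # bytes/second

total_mb_per_min = (depth_rate + color_rate) * 60 / 1e6
print(f"~{total_mb_per_min / 1000:.1f} GB per minute of recording")
```

Even ignoring the IR and audio streams, that works out to several gigabytes per minute, which matches how quickly our borrowed laptops ran out of space.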
After capturing the footage, we moved on to processing the data. We exported the OptiTrack takes as FBX files and the Kinect recordings as a series of CSV files containing frame-by-frame depth information. In Maya we were able to rig a skeleton from the actual mocap data, at which point each team member worked on their particular animation.
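The depth-processing step can be sketched as back-projecting each exported depth frame into a point cloud. This is a minimal sketch, not our exact pipeline: it assumes each CSV holds one frame as an image-shaped grid of depth values in millimeters (the real export layout may differ), and the intrinsics below are approximate published values for the Kinect V2 depth camera, not calibrated ones:

```python
import numpy as np

# Approximate Kinect V2 depth-camera intrinsics (assumed, not calibrated).
FX, FY = 365.0, 365.0   # focal lengths in pixels
CX, CY = 256.0, 212.0   # principal point

def depth_frame_to_points(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project an HxW depth image (millimeters) to an Nx3 point
    cloud in meters, dropping invalid (zero-depth) pixels."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_mm / 1000.0                # mm -> m
    x = (u - CX) * z / FX                # pinhole back-projection
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # keep only valid depth samples

# Hypothetical usage with one exported frame:
#   depth = np.loadtxt("frame_0001.csv", delimiter=",")
#   cloud = depth_frame_to_points(depth)
```

From a cloud like this, each frame can be rendered as particles in Processing or openFrameworks, or meshed for use in Maya.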
Two Animations, Same Data
Our actors and dancers: Sabrina Clarke, Javier Spivey, Colin-James Whitney, Zachary Fifer.