The Face of the Revolution
We play video games on flat, two-dimensional TV screens or computer monitors. But our real world is three-dimensional, as are the faces of the people who inhabit it. So, one of the major challenges for game makers is creating faces that have 3-D appeal. That's why MotionScan requires a complex light- and sound-proof studio setup filled with a lot of beefy hardware and software.
Game designers start by recording real flesh-and-blood actors performing while seated in the studio, which is illuminated with smooth, even light that leaves no dark areas or shadows that could throw off the system's accuracy. Markers are applied to the actors' chests and necks; later in the process, these give the software reference points for substituting the actor's scanned head for that of his or her animated character.
MotionScan is different from regular motion-capture tech because it records so many angles simultaneously, creating a 3-D image on every take. Producers use 32 high-definition (2-megapixel) cameras that cost around $6,000 each; in fact, they're the same cameras used by NASA to record space shuttle launches [source: Gizmodo]. The cameras are meticulously positioned in pairs at specific spots all around, above, and below the actor to capture all of the nuances of human head and facial movements at up to 30 frames per second. That adds up to about 1 gigabyte of data per second.
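To get a feel for those numbers, here's a quick back-of-the-envelope check using only the figures quoted above (32 cameras, 30 frames per second, 2-megapixel frames, roughly 1 gigabyte per second). This is a rough sketch, not Depth Analysis's actual pipeline math; the per-frame size it derives is an inference from the article's totals.

```python
# Back-of-the-envelope check of the camera figures quoted in the article.
cameras = 32
fps = 30                        # frames per second, per camera
pixels_per_frame = 2_000_000    # ~2-megapixel frames

frames_per_second = cameras * fps              # total frames captured each second
total_bytes_per_second = 1_000_000_000         # the article's ~1 GB/s figure
bytes_per_frame = total_bytes_per_second / frames_per_second
bytes_per_pixel = bytes_per_frame / pixels_per_frame

print(f"{frames_per_second} frames/s, ~{bytes_per_frame / 1e6:.2f} MB per frame, "
      f"~{bytes_per_pixel:.2f} bytes per pixel")
```

Interestingly, that works out to roughly half a byte per pixel, which suggests the 1 GB/s figure describes compressed video feeds rather than raw sensor data (uncompressed color frames would need several bytes per pixel).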
The cameras send their data to nine powerful computer servers, which can each process about 300 megabytes per second. These servers power animation software that interprets the video feeds, immediately creating animated textures and shapes that correspond to the actor's movements, all the while keeping images and sounds perfectly synchronized.
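A similar sanity check shows why nine servers are enough to keep up with the cameras. Again, this is a rough sketch built only from the article's figures; how the workload is actually divided among the servers is an assumption.

```python
# Rough capacity check for the nine servers described above.
servers = 9
mb_per_server = 300      # each server processes ~300 MB/s (article's figure)
incoming_mb = 1000       # ~1 GB/s arriving from the cameras

aggregate_capacity = servers * mb_per_server   # combined processing rate, MB/s
headroom = aggregate_capacity / incoming_mb

print(f"Aggregate capacity: {aggregate_capacity} MB/s "
      f"({headroom:.1f}x the incoming camera feed)")
```

The roughly 2.7x headroom matters because the servers aren't just storing the feeds: they're also generating the animated textures and shapes in real time while keeping audio and video in sync.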
This processed data can then be applied to a digital puppet that animators manipulate with great precision. All of this adds up to a major technological accomplishment, and, as you'll see, it's one that takes a lot of work to set up.