There are more than 1,000 pieces of gear in the MotionScan system. It can take technicians almost three working days to set up all of this equipment, which must be aligned to exacting specifications to work properly.
In addition to the 16 pairs of cameras aimed at recording facial details, there's also a 33rd camera that provides an overview scene of the studio. This last camera feeds video to a nearby room, from which the director can monitor the recording process and offer feedback to the actor.
The director may occasionally remind performers to remain relatively still. Too much movement makes it harder for MotionScan to detect minute facial details.
After the actor finishes performing, game producers can select from the views captured by the many cameras at many different angles. With all of that visual data, animators can reproduce lifelike facial movements and emotions. In the context of an intense game scene, those facial cues might disclose critical information to the player.
And because the actor is shot from so many perspectives, animators can pick and choose the angle they want to use for a specific scene. In addition, the animator can adjust lighting effects, such as intensity and angle, to match the person's face to a dark, gritty bar or to a sunny summer day in a field.
A character who is lying might not be able to make eye contact with the player, and his or her eyebrows and lips would probably show some noticeable discomfort. Likewise, if a character is happy to be speaking to the player, a friendly, open smile and wide eyes would communicate that message.
Next, you'll see how one company is using MotionScan to generate some of the most humanlike characters ever to grace an animated video game.