How MotionScan Technology Works

MotionScan is out to improve gaming facial expressions. Actors sit in the studio, where dozens of cameras capture 3-D video of every facial detail.
Image courtesy of Depth Analysis

Avid video game players are familiar with the emotional roller coaster their favorite titles elicit. There's the soaring giddiness of winning a frenetic Super Mario Kart race, and the deflating sadness of getting blasted by an enemy in an epic World of Tanks battle.

But gamers don't generally see that same ecstasy and despondency in their animated, on-screen characters. Typically, those digitally rendered heroes have rough features that only a similarly misshapen, computer-generated mother could love.

That's all about to change. With MotionScan facial animation technology, created by an Australian tech company called Depth Analysis, gamers will be seeing character facial expressions that are truer to life than ever before.

Facial animation technology is a specialized version of motion capture technology, which for years has been used to add all sorts of special animation effects to movies. If you've ever watched behind-the-scenes outtakes of animated movies or films that blend animated characters with live action, such as "The Lord of the Rings," you might recognize part of the motion-capture process. Actors in front of blank blue or green backgrounds don tight-fitting suits studded with little markers (often resembling golf balls) that cameras can recognize.

Those markers help the cameras track and record the actor's movements as he moves in front of the backgrounds, which are called blue screens or green screens. The backgrounds are blank so that the cameras can record the actor's performance without extraneous objects that would clutter the recording or throw off the movement-tracking process.

Later, animators can use these recordings to create a digital skeleton or puppet that moves just like the actor. With the help of powerful software, animators overlay this puppet with whatever wacky and imaginative animated character they like. In effect, the animators become puppeteers of sorts, moving the character through scripted scenes. But these animations often lack accurate human body language and facial expressions.

For game designers, this is a major problem, because humans rely heavily on body language. We're able to recognize about 250,000 facial expressions and depend on visual body language for up to 65 percent of our interactions with other people [source: Pease].

First introduced to the public in early 2010, MotionScan technology aims to create animated characters that convey readable body language. It does so by capturing facial features of performing actors that clearly show the difference between, for instance, disgust and happiness, curiosity and disinterest, and other indicators of a character's emotional state. As you'll see on the next page, this is no easy task.

The Face of the Revolution

It's almost the real thing. Here's an example of the high level of detail that MotionScan can capture.
Image courtesy of Depth Analysis

We play video games on flat, two-dimensional TV screens or computer monitors. But our real world is three-dimensional, as are the faces of the people who inhabit it. So, one of the major challenges for game makers is creating faces that have 3-D appeal. That's why MotionScan requires a complex light- and sound-proof studio setup filled with a lot of beefy hardware and software.

Game designers start by recording real flesh-and-blood actors performing while seated in the studio, which is illuminated with smooth, even light that leaves no dark areas or shadows that could throw off the system's accuracy. Markers are applied to the actors' chests and necks; later in the process, these give the software a reference point for substituting the actor's head for that of his or her animated character.

MotionScan is different from regular motion-capture tech because it records so many angles simultaneously, creating a 3-D image on every take. Producers use 32 high-definition (2-megapixel) cameras that cost around $6,000 each; in fact, they're the same cameras used by NASA to record space shuttle launches [source: Gizmodo]. The cameras are meticulously positioned in pairs at specific spots all around, above and below the actor in order to capture all of the nuances of human head and facial movements at up to 30 frames per second. That adds up to about 1 gigabyte of data per second.

The cameras send their data to nine powerful computer servers, which can each process about 300 megabytes per second. These servers power animation software that interprets the video feeds, immediately creating animated textures and shapes that correspond to the actor's movements, all the while keeping images and sounds perfectly synchronized.
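The throughput figures above can be sanity-checked with a quick back-of-envelope sketch. The camera count, resolution, frame rate, and per-server processing rate come straight from the article; the bytes-per-pixel value is an assumption (the article doesn't say how the video is encoded), chosen only to show that the stated rates are in a plausible ballpark.

```python
# Back-of-envelope check of MotionScan's stated data rates.
# Camera counts, resolution, frame rate and server rates are from the article;
# BYTES_PER_PIXEL is an assumption about encoding, not an article figure.

CAMERAS = 32                  # 16 pairs of HD cameras
PIXELS_PER_FRAME = 2_000_000  # 2-megapixel sensors
FPS = 30                      # maximum capture rate
BYTES_PER_PIXEL = 0.5         # assumed effective bytes per pixel after encoding

capture_rate = CAMERAS * PIXELS_PER_FRAME * FPS * BYTES_PER_PIXEL  # bytes/s
print(f"capture: {capture_rate / 1e9:.2f} GB/s")  # close to the ~1 GB/s cited

SERVERS = 9
SERVER_RATE = 300e6  # ~300 megabytes per second each
aggregate = SERVERS * SERVER_RATE
print(f"processing: {aggregate / 1e9:.2f} GB/s aggregate")
```

Under these assumptions the nine servers' combined processing rate (about 2.7 GB/s) comfortably exceeds the roughly 1 GB/s the cameras produce, which is consistent with the system keeping up with the feeds in real time.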

This processed data can then be applied to a digital puppet that animators manipulate with great precision. All of this adds up to a major technological accomplishment, and, as you'll see, it's one that takes a lot of work to set up.

Lights, Camera(s), 3-D Action!

The MotionScan setup is as elaborate as it looks, comprising around 1,000 pieces of equipment.
Image courtesy of Depth Analysis

There are more than 1,000 pieces of gear used in the MotionScan system. It can take technicians almost three working days to set up all of this equipment, which must be aligned to exacting specifications in order to work just right.

In addition to the 16 pairs of cameras aimed at recording facial details, a 33rd camera provides an overview of the studio. This last camera feeds video to a nearby room, from which the director can monitor the recording process and offer feedback to the actor.

The director may occasionally remind performers to remain relatively still. Too much movement makes it harder for MotionScan to detect minute facial details.

After the actor is done performing, game producers can select views generated by many cameras at many different angles. With all of that visual data, animators can duplicate lifelike facial movements and emotions. In the context of an intense game scene, those facial features might disclose critical information to the player.

And because the actor is being shot from so many perspectives, animators can pick and choose the angle they want to use for a specific scene. The animator can also adjust lighting effects, such as intensity and angle, to match the character's face to a dark and gritty bar or to a sunny summer day in a field.

A character who is lying might not be able to make eye contact with the player, and his or her eyebrows and lips probably show some noticeable discomfort. Likewise, if a character is happy to be speaking to the player, a friendly, open smile and wide eyes would communicate that message.

Next, you'll see how one company is using MotionScan to generate some of the most humanlike characters ever to grace an animated video game.

In Your Face, Film!

L.A. Noire is a dark crime game in which the player conducts a lot of interrogations. Interview subjects reveal important clues with their facial expressions.
Image courtesy of Rockstar

The massive computing horsepower driving MotionScan automatically tweaks fine details in each sequence, meaning animators don't have to labor for hours to make each facial expression look more human.

That means shorter filming and programming sessions, lower shooting budgets and quicker progress from concept to market, all saving the developer money and potentially making or breaking the game's profit margin.

Now, all that's left is to see how much gamers notice and appreciate MotionScan's breakthroughs. The game L.A. Noire, which will officially be released in May 2011, is the first title to use MotionScan, integrated into the game to add lifelike facial animation. L.A. Noire is an ominous crime game for mature audiences, set in 1940s-era Los Angeles and designed to play like an interactive movie. In keeping with its film-like aspirations, the game casts Aaron Staton (of the television series "Mad Men") in the lead role.

The game relies heavily on character development and dialogue, and as such, accurate facial expressions are paramount -- without those nuances, players would have a harder time reading characters and moving through the plot. For example, as this is a crime drama, some characters' faces offer clear clues that they are lying. Interrogation and witness questioning are critical elements to help move the plot forward, so body language and facial expressions are key.

In the future, Depth Analysis, the company behind the trailblazing technology, will probably license it to other video game developers and also film producers. So, one day, you might see the technology pop up in animated feature films.

For now, the company is committed to perfecting and expanding the capabilities of MotionScan. Ultimately, MotionScan's makers would like to capture full-body images, which would be particularly useful for feature filmmakers. However, this is a technological hurdle that MotionScan hasn't yet been able to clear, in large part due to the immense computer power and even more complicated studio construction that would be required.

MotionScan's full-body debut will have to wait. In the meantime, companies are already challenging Depth Analysis' motion capture techniques. Keep reading to see how the competition might find a way to upstage MotionScan with better graphics -- or maybe even by using less sophisticated animation.

Motion's Evolution

Some people argue that game designers should use less realistic characters, which might actually make video games more appealing to players.

Now you know how important body language and facial expressions are to making game characters more captivating. But turning analog facial tics into digital emotions is no easy task. There are about 19 muscles in the human face, and duplicating all of their emotional acrobatics takes innovative programming and a complicated, cutting-edge group of technologies like MotionScan.

Other developers are eager to embrace MotionScan's goals -- and to achieve them better than Depth Analysis. A company called Remedy is working on facial expression technology that records about 64 facial poses; from this base set, animators can create expressions in real time and without additional actor performances. The technology is even said to account for facial blood flow and minute changes in skin color that happen as skin creases during muscle movement.

If they're successful, Remedy and Depth Analysis will both produce the kind of hyper-realistic animated characters that can make for very engrossing games. These humanlike characters are one of many efforts to bridge the so-called "uncanny valley," a theory originally associated with robotics.

When game designers mention the uncanny valley, they're referring to the dip in comfort we feel as animated characters approach, but don't quite achieve, human realism. People readily accept cartoony caricatures of human beings, and even fairly realistic ones, but renderings that are almost lifelike yet subtly off tend to be perceived as eerie and unsettling.

So, as human characters become more and more lifelike, the uncanny valley effect takes hold and begins to ruin the fun. It seems counterintuitive, but for this reason, those intricately detailed animations might actually detract from a game's ability to grab and hypnotize its players.

Animators and game makers are very much aware of the uncanny valley phenomenon, so they're striving to make their animations ever better. But some critics say game developers should reverse direction, focusing less on producing exact duplications of human movement and putting more effort into maintaining mesmerizing game play. Only then, they insist, do games really become more addictive, more fun and more profitable for the companies that make them.

No matter how MotionScan fares, it seems that animators are bent on making the most humanlike characters ever digitally created. These new developments in animation technology will likely make games more arresting visually, but they could also take gaming to a whole new level, resulting in a higher art form that's more exciting than ever before.

Sources
  • Broughall, Nick. "How L.A. Noire Conquered the Uncanny Valley with a Tech Called MotionScan." Gizmodo. Dec. 17, 2010. (April 17, 2011).
  • Cowen, Nick. "LA Noire Preview: Finally the Videogame that Feels Like a Film." The Telegraph. Dec. 15, 2010. (April 17, 2011).
  • Davidson, John. "The Most Impressive Thing I Saw at E3." June 25, 2010. (April 17, 2011).
  • Depth Analysis press release. "Groundbreaking New MotionScan Technology Set to Redefine 3D CGI Performances." March 1, 2010. (April 17, 2011).
  • Gallo, Carmine. "Body Language: A Key to Success in the Workplace." Feb. 14, 2007. (April 17, 2011).
  • Goss, Patrick. "The Technology of LA Noire." Jan. 19, 2011. (April 17, 2011).
  • Hartley, Adam. "Max Payne Developer Leans over the Uncanny Valley." April 11, 2011. (April 17, 2011).
  • IGN Entertainment site. "An Interview with Andy Serkis." Jan. 27, 2003. (April 17, 2011).
  • Ingham, Tim. "David Cage: L.A. Noire Tech is 'Interesting Dead End,' Quantic's New Approach is the Future." March 31, 2011. (April 17, 2011).
  • Lord of the Rings Interviews. "Beyond Special Effects." (April 17, 2011).
  • Newsweek video. "Videogames: Organic Motion." March 6, 2007. (April 17, 2011).
  • Nutt, Christian. "Interview: The Tech Designed to Give L.A. Noire's Performances Life." July 22, 2010. (April 17, 2011).
  • Ohannessian, Kevin. "Game Face: L.A. Noire Brings Actors' Full Performance to Gaming." Feb. 4, 2011. (April 17, 2011).
  • Pease, Allan and Barbara. "The Definitive Book of Body Language." Bantam. 2006.
  • Richards, Jonathan. "Lifelike Animation Heralds New Era for Computer Games." The Times. Aug. 18, 2008. (April 17, 2011).
  • Science Daily. "Motion Capture Technology Takes a Leap Forward." June 4, 2009. (April 17, 2011).
  • Science Daily. "Learning from the Dead: What Facial Muscles Can Tell Us About Emotion." June 17, 2008. (April 17, 2011).
  • Shoemaker, Natalie. "Rockstar's 'L.A. Noire' Gets the MotionScan Treatment." PCMag. Dec. 16, 2010. (April 17, 2011).
  • Singer, Gregory. "The Two Towers: Face to Face with Gollum." March 27, 2003. (April 17, 2011).
  • Strategy Informer. "Remedy Boasts Their Facial Animation will Outclass LA Noire." April 11, 2011. (April 17, 2011).
  • Takahashi, Dean. "Xsens Technologies Captures Every Human Motion with Body Suit." Aug. 4, 2009. (April 17, 2011).
  • Thompson, Clive. "The Undead Zone." June 9, 2004. (April 28, 2011).
  • USA Today. "Tolkien's Trilogy: Adding it All Up." Dec. 15, 2003. (April 17, 2011).