How MotionScan Technology Works

MotionScan is out to improve gaming facial expressions. Actors sit in the studio, where dozens of cameras capture 3-D video of every facial detail.
Image courtesy of Depth Analysis

Avid video game players are familiar with the emotional roller coaster their favorite titles elicit. There's the soaring giddiness of winning a frenetic Super Mario Kart race, and the deflating sadness of getting blasted by an enemy in an epic World of Tanks battle.

But gamers don't generally see that same ecstasy and despondency in their animated, on-screen characters. Typically, those digitally rendered heroes have rough features that only a similarly misshapen, computer-generated mother could love.


That's all about to change. With MotionScan facial animation technology, created by an Australian tech company called Depth Analysis, gamers will be seeing character facial expressions that are truer to life than ever before.

Facial animation technology is a specialized version of motion capture technology, which for years has been used to add all sorts of special animation effects to movies. If you've ever watched behind-the-scenes outtakes of animated movies or films that blend animated characters with live action, such as "The Lord of the Rings," you might recognize part of the motion-capture process. Actors stand in front of blank blue or green backgrounds, wearing tight-fitting suits studded with little markers (often resembling golf balls) that cameras can recognize.

Those markers help the cameras track and record the actor's movements in front of the backgrounds, which are called blue screens or green screens. The backgrounds are blank so that the cameras record the actor's performance without any distracting extraneous objects that would clutter the recording or throw off the movement-tracking process.
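To see why a clean backdrop helps, here's a minimal sketch of the idea in Python (hypothetical code, not Depth Analysis' actual pipeline): it flags the green background pixels so a tracker only has to look at the performer.

    import numpy as np

    def green_screen_mask(frame, threshold=1.3):
        # frame: H x W x 3 uint8 RGB image.
        # A pixel counts as background when green clearly dominates
        # both red and blue -- a crude but illustrative test.
        r = frame[..., 0].astype(float)
        g = frame[..., 1].astype(float)
        b = frame[..., 2].astype(float)
        return (g > threshold * r) & (g > threshold * b)

    def isolate_actor(frame):
        # Zero out background pixels so the tracker sees only the performer.
        fg = frame.copy()
        fg[green_screen_mask(frame)] = 0
        return fg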

Later, animators can use these recordings to create a digital skeleton or puppet that moves just like the actor. With the help of powerful software, animators overlay this puppet with whatever wacky and imaginative animated character they like. In effect, the animators become puppeteers of sorts, moving the character through scripted scenes. But these animations often lack accurate human body language and facial expressions.
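As a rough illustration of the digital-puppet idea, the sketch below (again hypothetical Python, with made-up marker names and offsets) maps recorded marker positions onto a character's joints, frame by frame:

    import numpy as np

    # Hypothetical recorded data: per-frame 3-D positions for two suit markers.
    frames = [
        {"head": np.array([0.0, 1.7, 0.0]), "left_hand": np.array([-0.5, 1.0, 0.2])},
        {"head": np.array([0.0, 1.7, 0.1]), "left_hand": np.array([-0.4, 1.2, 0.2])},
    ]

    # A trivial "puppet": character joints that mirror the marker motion,
    # offset so the animated character can differ from the actor in proportions.
    character_offset = {"head": np.array([0.0, 0.3, 0.0]),
                        "left_hand": np.array([0.0, 0.0, 0.0])}

    def pose_puppet(markers):
        # Map captured marker positions onto the character's joints.
        return {joint: pos + character_offset[joint] for joint, pos in markers.items()}

    for t, markers in enumerate(frames):
        print(f"frame {t}:", pose_puppet(markers))

A production rig solves a much harder retargeting problem than this, but the principle is the same: captured motion drives a character that need not share the actor's proportions.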

For game designers, this is a major problem, because humans rely heavily on body language. We're able to recognize about 250,000 facial expressions and depend on visual body language for up to 65 percent of our interactions with other people [source: Pease].

First introduced to the public in early 2010, MotionScan technology aims to create animated characters that convey readable body language. It does so by capturing facial features of performing actors that clearly show the difference between, for instance, disgust and happiness, curiosity and disinterest, and other indicators of a character's emotional state. As you'll see on the next page, this is no easy task.


The Face of the Revolution

It's almost the real thing. Here's an example of the high level of detail that MotionScan can capture.
Image courtesy of Depth Analysis

We play video games on flat, two-dimensional TV screens or computer monitors. But our real world is three-dimensional, as are the faces of the people who inhabit it. So, one of the major challenges for game makers is creating faces that have 3-D appeal. That's why MotionScan requires a complex light- and sound-proof studio setup filled with a lot of beefy hardware and software.

Game designers start by recording real flesh-and-blood actors performing while seated in the studio, which is illuminated with smooth, even light that leaves no dark areas or shadows that could throw off the system's accuracy. Markers are applied to the chests and necks of the actors; later in the process, these give the software reference points for substituting the actor's head for that of his or her animated character.


MotionScan is different from regular motion-capture tech because it records so many angles simultaneously, creating a 3-D image on every take. Producers use 32 high-definition (2-megapixel) cameras that cost around $6,000 each; in fact, they're the same cameras used by NASA to record space shuttle launches [source: Gizmodo]. The cameras are meticulously positioned in pairs at specific spots all around, above and below the actor in order to capture all of the nuances of human head and facial movements at up to 30 frames per second. That adds up to about 1 gigabyte of data per second.

The cameras send their data to nine powerful computer servers, which can each process about 300 megabytes per second. These servers power animation software that interprets the video feeds, immediately creating animated textures and shapes that correspond to the actor's movements, all the while keeping images and sounds perfectly synchronized.
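A quick back-of-the-envelope check in Python, using only the figures quoted above, shows why nine servers can keep pace with the cameras:

    data_rate_mb_s = 1024          # cameras produce about 1 GB (~1,024 MB) per second
    servers = 9
    per_server_mb_s = 300          # each server handles roughly 300 MB per second

    cluster_capacity_mb_s = servers * per_server_mb_s   # 2,700 MB/s available

    print(f"incoming video: ~{data_rate_mb_s} MB/s")
    print(f"cluster capacity: ~{cluster_capacity_mb_s} MB/s")
    print(f"headroom: ~{cluster_capacity_mb_s / data_rate_mb_s:.1f}x")

That works out to roughly 2.6 times the capacity the incoming stream requires, leaving headroom for the processing itself.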

This processed data can then be applied to a digital puppet that animators manipulate with great precision. All of this adds up to a major technological accomplishment, and, as you'll see, it's one that takes a lot of work to set up.


Lights, Camera(s), 3-D Action!

The MotionScan setup is as elaborate as it looks, comprising around 1,000 pieces of equipment.
Image courtesy of Depth Analysis

There are more than 1,000 pieces of gear used in the MotionScan system. It can take technicians almost three working days to set up all of this equipment, which must be aligned to exacting specifications in order to work just right.

In addition to the 16 pairs of cameras that record facial details, a 33rd camera provides an overview of the studio. This last camera feeds video to a nearby room, from which the director can monitor the recording process and offer feedback to the actor.


The director may occasionally remind performers to remain relatively still. Too much movement makes it harder for MotionScan to detect minute facial details.

After the actor is done performing, game producers can select from the views recorded by the many cameras at their different angles. With all of that visual data, animators can duplicate lifelike facial movements and emotions. In the context of an intense game scene, those facial features might disclose critical information to the player.

And because the actor is being shot from so many perspectives, animators can pick and choose the angle they want to use for a specific scene. In addition, the animator can adjust lighting effects, such as intensity and angle, to match the person's face to a dark and gritty bar or to a sunny summer day in a field.
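One simple way to think about that relighting step is Lambertian shading over the captured 3-D surface. The sketch below is an illustrative stand-in (hypothetical Python, not Depth Analysis' actual renderer) that re-shades a scanned face for a new light direction and intensity:

    import numpy as np

    def relight(albedo, normals, light_dir, intensity=1.0):
        # albedo:  H x W x 3 captured skin color (0..1 floats)
        # normals: H x W x 3 unit surface normals from the 3-D scan
        # light_dir: direction toward the light, as a 3-vector
        l = np.asarray(light_dir, dtype=float)
        l /= np.linalg.norm(l)
        # Lambert's cosine law: brightness falls off with the angle
        # between the surface normal and the light direction.
        shade = np.clip(normals @ l, 0.0, None) * intensity
        return albedo * shade[..., None]

Dimming the intensity and dropping the light angle suggests the bar scene; raising both suggests the sunny field.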

A character who is lying might not be able to make eye contact with the player, and his or her eyebrows and lips probably show some noticeable discomfort. Likewise, if a character is happy to be speaking to the player, a friendly, open smile and wide eyes would communicate that message.

Next, you'll see how one company is using MotionScan to generate some of the most humanlike characters ever to grace an animated video game.


In Your Face, Film!

L.A. Noire is a dark crime game in which the player conducts a lot of interrogations. Interview subjects reveal important clues with their facial expressions.
Image courtesy of Rockstar

The massive computing horsepower driving MotionScan automatically tweaks fine details in each sequence, meaning animators don't have to labor for hours to make each facial expression look more human.

That means shorter filming and programming sessions, lower shooting budgets and quicker progress from concept to market -- all of which save the developer money and can make or break the game's profit margin.


The game L.A. Noire, which will officially be released in May 2011, is the first piece of media to use MotionScan, which was integrated into the game to add lifelike facial animation. L.A. Noire is an ominous, mature crime game set in 1940s-era Los Angeles that's designed to play like an interactive movie. In keeping with its film-like aspirations, the game casts Aaron Staton (from the television series "Mad Men") in the lead character's role.

The game relies heavily on character development and dialogue, and as such, accurate facial expressions are paramount -- without those nuances, players would have a harder time reading characters and moving through the plot. Because this is a crime drama, some characters' faces offer clear clues that they are lying. Interrogation and witness questioning are critical elements that move the plot forward, so body language and facial expressions are key. Now, all that's left is to see how much gamers notice and appreciate MotionScan's breakthroughs.

In the future, Depth Analysis, the company behind the trailblazing technology, will probably license it to other video game developers as well as film producers. So, one day, you might see the technology pop up in animated feature films.

For now, the company is committed to perfecting and expanding the capabilities of MotionScan. Ultimately, MotionScan's makers would like to capture full-body images, which would be particularly useful for feature filmmakers. However, this is a technological hurdle that MotionScan hasn't yet cleared, in large part because of the immense computing power and even more complicated studio construction that would be required.

MotionScan's full-body debut will have to wait. In the meantime, companies are already challenging Depth Analysis' motion capture techniques. Keep reading to see how the competition might find a way to upstage MotionScan with better graphics -- or maybe even by using less sophisticated animation.


Motion's Evolution

Some people argue that game designers should use less realistic characters, which might actually make video games more appealing to players.
Hemera/Thinkstock

Now you know how important body language and facial expressions are to making game characters more captivating. But turning analog facial tics into digital emotions is no easy task. There are about 19 muscles in the human face, and duplicating all of their emotional acrobatics takes innovative programming and a complicated, cutting-edge group of technologies like MotionScan.

Other developers are eager to embrace MotionScan's goals -- and to achieve them better than Depth Analysis. A company called Remedy is working on facial expression technology that records about 64 facial poses; from this base set, animators can create expressions in real time and without additional actor performances. The technology is even said to account for facial blood flow and minute changes in skin color that happen as skin creases during muscle movement.
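Remedy hasn't published implementation details, but a base set of poses that animators mix in real time maps naturally onto the standard blend-shape technique, sketched here in Python with toy data:

    import numpy as np

    def blend_expression(neutral, poses, weights):
        # neutral: (V, 3) vertex positions of the resting face
        # poses:   dict name -> (V, 3) vertex positions of each extreme pose
        # weights: dict name -> 0..1 strength of that pose
        result = neutral.astype(float).copy()
        for name, w in weights.items():
            # Each pose contributes its offset from neutral, scaled by its weight.
            result += w * (poses[name] - neutral)
        return result

    # Toy example with a 3-vertex "face" and two made-up poses:
    neutral = np.zeros((3, 3))
    poses = {"smile": np.array([[0, .1, 0], [0, 0, 0], [0, .1, 0]]),
             "brow_raise": np.array([[0, 0, .05], [0, 0, .05], [0, 0, 0]])}
    print(blend_expression(neutral, poses, {"smile": 0.8, "brow_raise": 0.3}))

Each weight dials one captured extreme in or out, so a few dozen base poses can be combined into a large space of expressions without additional actor performances.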


If they're successful, Remedy and Depth Analysis will both produce the kind of hyper-realistic animated characters that can make for very engrossing games. These humanlike characters are one of many efforts to bridge the so-called "uncanny valley," a theory originally associated with robotics.

When game designers mention the uncanny valley, they're referring to the gap between our perceptions of real people and their animated representations. As digital doppelgangers approach -- but don't quite reach -- full realism, we become more likely to find them disturbing. People are willing to accept cartoony caricatures of human beings, and even pretty realistic ones, but almost-realistic renderings tend to be perceived as eerie and unsettling.

So, as human characters become more and more lifelike, the uncanny valley effect takes hold and begins to ruin the fun. It seems counterintuitive, but for this reason, those intricately detailed animations might actually detract from a game's ability to grab and hypnotize its players.

Animators and game makers are very much aware of the uncanny valley phenomenon, so they're striving to make their animations ever better. But some critics say game developers should reverse direction, focusing less on producing exact duplications of human movement and putting more effort into maintaining mesmerizing game play. Only then, they insist, do games really become more addictive, more fun and more profitable for the companies that make them.

No matter how MotionScan fares, it seems that animators are bent on making the most humanlike characters ever digitally created. These new developments in animation technology will likely make games more arresting visually, but they could also take gaming to a whole new level, resulting in a higher art form that's more exciting than ever before.


Lots More Information

Sources

  • Broughall, Nick. "How L.A. Noire Conquered the Uncanny Valley with a Tech Called MotionScan." Gizmodo.com. Dec. 17, 2010. (April 17, 2011). http://gizmodo.com/#!5714436/how-la-noire-conquered-the-uncanny-valley-with-a-tech-called-motionscan
  • Cowen, Nick. "LA Noire Preview: Finally the Videogame that Feels Like a Film." The Telegraph. Dec. 15, 2010. (April 17, 2011). http://www.telegraph.co.uk/technology/video-games/8204296/L-A-Noire-preview-finally-the-videogame-that-feels-like-a-film.html?utm_source=tmg&utm_medium=TD_8204296&utm_campaign=Tech2901
  • Davidson, John. "The Most Impressive Thing I Saw at E3." Gamepro.com. June 25, 2010. (April 17, 2011). http://www.gamepro.com/article/news/215667/the-most-impressive-thing-i-saw-at-e3/
  • Depth Analysis press release. "Groundbreaking New MotionScan Technology Set to Redefine 3D CGI Performances." Depthanalysis.com. March 1, 2010. (April 17, 2011). http://depthanalysis.com/downloads/pr/depth_analysis_announcement_20100301.pdf
  • Gallo, Carmine. "Body Language: A Key to Success in the Workplace." Businessweek.com. Feb. 14, 2007. (April 17, 2011). http://finance.yahoo.com/career-work/article/102425/Body_Language:_A_Key_to_Success_in_the_Workplace
  • Goss, Patrick. "The Technology of LA Noire." Techradar.com. Jan. 19, 2011. (April 17, 2011). http://www.techradar.com/news/gaming/the-technology-of-la-noire-922086?artc_pg=1
  • Hartley, Adam. "Max Payne Developer Leaps over the Uncanny Valley." Techradar.com. April 11, 2011. (April 17, 2011). http://www.techradar.com/news/gaming/max-payne-developer-leaps-over-the-uncanny-valley-942208
  • IGN Entertainment site. "An Interview with Andy Serkis." Movies.ign.com. Jan. 27, 2003. (April 17, 2011). http://movies.ign.com/articles/383/383888p4.html
  • Ingham, Tim. "David Cage: L.A. Noire Tech is 'Interesting Dead End,' Quantic's New Approach is the Future." Computerandvideogames.com. March 31, 2011. (April 17, 2011). http://www.computerandvideogames.com/296388/news/david-cage-la-noire-tech-is-interesting-dead-end-quantics-new-approach-is-the-future/
  • Lord of the Rings Interviews. "Beyond Special Effects." Lordoftherings.net. (April 17, 2011). http://www.lordoftherings.net/film/exclusives/editorial/beyondeffects.html
  • Newsweek video. "Videogames: Organic Motion." Newsweek.com. March 6, 2007. (April 17, 2011). http://www.newsweek.com/video/2007/03/06/videogames-organic-motion.html
  • Nutt, Christian. "Interview: The Tech Designed to Give L.A. Noire's Performances Life." Gamasutra.com. July 22, 2010. (April 17, 2011). http://www.gamasutra.com/view/news/29345/Interview_The_Tech_Designed_To_Give_LA_Noires_Performances_Life.php
  • Ohannessian, Kevin. "Game Face: L.A. Noire Brings Actors' Full Performance to Gaming." Fastcompany.com. Feb. 4, 2011. (April 17, 2011). http://www.fastcompany.com/1724050/la-noire-brendan-mcnamara-team-bondi-rockstar-games-motionscan
  • Pease, Allan and Barbara. "The Definitive Book of Body Language." Bantam. 2006.
  • Richards, Jonathan. "Lifelike Animation Heralds New Era for Computer Games." The Times. Aug. 18, 2008. (April 17, 2011). http://technology.timesonline.co.uk/tol/news/tech_and_web/article4557935.ece
  • Science Daily. "Motion Capture Technology Takes a Leap Forward." ScienceDaily.com. June 4, 2009. (April 17, 2011). http://www.sciencedaily.com/releases/2009/06/090602083356.htm
  • Science Daily. "Learning from the Dead: What Facial Muscles Can Tell Us About Emotion." ScienceDaily.com. June 17, 2008. (April 17, 2011). http://www.sciencedaily.com/releases/2008/06/080616205044.htm
  • Shoemaker, Natalie. "Rockstar's 'L.A. Noire' Gets the MotionScan Treatment." Pcmag.com. Dec. 16, 2010. (April 17, 2011). http://www.pcmag.com/article2/0,2817,2374423,00.asp
  • Singer, Gregory. "The Two Towers: Face to Face with Gollum." AWN.com. March 27, 2003. (April 17, 2011). http://www.awn.com/articles/technology/two-towers-face-face-gollum
  • Strategy Informer. "Remedy Boasts Their Facial Animation will Outclass LA Noire." Strategyinformer.com. April 11, 2011. (April 17, 2011). http://www.strategyinformer.com/news/11835/remedy-boasts-their-facial-animation-will-outclass-la-noire
  • Takahashi, Dean. "Xsens Technologies Captures Every Human Motion with Body Suit." Venturebeat.com. Aug. 4, 2009. (April 17, 2011). http://venturebeat.com/2009/08/04/xsens-technologies-captures-every-human-motion-with-body-suit/
  • Thompson, Clive. "The Undead Zone." Slate.com. June 9, 2004. (April 28, 2011).
  • USA Today. "Tolkien's Trilogy: Adding it All Up." USAtoday.com. Dec. 15, 2003. (April 17, 2011). http://www.usatoday.com/life/movies/news/2003-12-12-lotr-adding_x.htm
