Software developed by researchers at the University of Surrey and Disney Research could mean film directors will no longer need to reshoot crucial scenes dozens of times until they are satisfied.
The researchers say the FaceDirector system enables a director to seamlessly blend an actor's facial performance from two or more video takes to achieve the desired effect. The software analyzes both facial expressions and audio cues, then uses a graph-based framework to identify frames that correspond between takes.
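The core of this step is a nonlinear temporal alignment: each frame of one take must be matched to its best-corresponding frame in another take, even when the actor's timing differs. The sketch below illustrates the idea with dynamic time warping over per-frame feature vectors, a standard alignment technique; it is a simplified stand-in for the paper's graph-based formulation, and the function name, toy features, and single-number "features" are hypothetical.

```python
import numpy as np

def align_takes(features_a, features_b):
    """Nonlinearly align two takes' per-frame feature sequences.

    Hypothetical sketch using dynamic time warping; in FaceDirector the
    features would combine facial-expression and audio cues, and the
    alignment is solved in a graph-based framework.
    """
    n, m = len(features_a), len(features_b)
    # Accumulated-cost table; cost[i, j] is the cheapest alignment of
    # the first i frames of take A with the first j frames of take B.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(features_a[i - 1] - features_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in A
                                 cost[i, j - 1],      # skip a frame in B
                                 cost[i - 1, j - 1])  # match the frames
    # Backtrack to recover the frame-correspondence path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy example: take B holds the opening expression one frame longer than take A.
take_a = np.array([[0.0], [1.0], [2.0], [3.0]])
take_b = np.array([[0.0], [0.0], [1.0], [2.0], [3.0]])
pairs = align_takes(take_a, take_b)
print(pairs)
```

Once such a correspondence is known, matched frames from different takes can be blended without visible timing artifacts.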
The researchers say the system can create a variety of novel, visually plausible versions of an actor's performance in close-up and mid-range shots.
"Our research team has shown that a director can exert control over an actor's performance after the shoot with just a few takes, saving both time and money," says Disney Research's Markus Gross.
FaceDirector works with normal two-dimensional video input acquired by standard cameras, without the need for additional hardware or three-dimensional face reconstruction.
"To the best of our knowledge, our work is the first to combine audio and facial features for achieving an optimal nonlinear, temporal alignment of facial performance videos," says Charles Malleson, a Ph.D. student at the University of Surrey's Center for Vision, Speech and Signal Processing.
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA