Lip-Syncing Thanks to Artificial Intelligence

By Max Planck Institute for Informatics (Germany)

August 29, 2018

An international team led by researchers at the Max Planck Institute for Informatics in Germany has developed a system that uses artificial intelligence (AI) to edit the facial expressions of actors in a film to accurately match dubbed voices.

Max Planck's Hyeongwoo Kim says the Deep Video Portraits system uses model-based three-dimensional face performance capture to record the detailed movements and head position of the dubbing actor, then transposes these movements onto the "target" actor to accurately synchronize the lips and facial movements.
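The transfer step Kim describes, in which the dubbing actor's captured motion is transposed onto the target actor while the target's identity is preserved, can be sketched roughly as follows. This is a minimal illustration of the parameter transfer only, not the capture or neural rendering stages, and all names and parameter groupings are hypothetical rather than taken from the Deep Video Portraits implementation:

```python
from dataclasses import dataclass, replace

# Hypothetical parameter set for a monocular 3D face model; the field
# names are illustrative, not the actual Deep Video Portraits code.
@dataclass
class FaceParams:
    head_pose: tuple    # e.g. (yaw, pitch, roll) rotation of the head
    expression: tuple   # expression blendshape coefficients
    eye_gaze: tuple     # gaze direction
    identity: tuple     # identity/shape coefficients (kept from the target)

def transfer_performance(source: FaceParams, target: FaceParams) -> FaceParams:
    """Transpose the dubbing actor's motion onto the target actor:
    copy head pose, expression, and eye gaze from the source, but
    keep the target actor's own identity parameters."""
    return replace(
        target,
        head_pose=source.head_pose,
        expression=source.expression,
        eye_gaze=source.eye_gaze,
    )
```

In the actual system, the resulting per-frame parameter sets condition a rendering network that synthesizes photorealistic video of the target actor; the sketch above only shows which parameters move between the two actors.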

The institute's Christian Theobalt says the technology "enables us to modify the appearance of a target actor by transferring head pose, facial expressions, and eye motion with a high level of realism."

The technique could significantly reduce the time and expense of dubbing films; it also could be used to correct the gaze and head pose of video-conference participants to simulate a natural conversation setting.

From Max Planck Institute for Informatics (Germany)
View Full Article


Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA
