A new deep learning method can convert a single photo of any flowing material into an animated video running in a seamless loop.
University of Washington (UW) researchers invented the technique, which UW's Aleksander Holynski said requires neither user input nor additional data.
The system predicts the motion that was occurring when a photo was captured, and generates the animation from that information.
The researchers trained a neural network on thousands of videos of fluidly moving material; the network learned to spot visual clues that predict what happens next, enabling the system to determine whether, and in what manner, each pixel should move.
The team's “symmetric splatting” method forecasts both the future and the past for an image, then blends them into one seamlessly looping animation.
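The blending idea can be sketched in a few lines. The toy code below is a heavily simplified illustration, not the authors' implementation (the published system splats learned deep features with a trained network, not raw pixels as here): each output frame combines a copy of the image warped forward from the start with a copy warped backward from the end, and the blend weight ramps with time so the first and last frames both match the input, closing the loop.

```python
import numpy as np

def splat(image, flow):
    """Forward-splat pixels along a displacement field (nearest-pixel toy version)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination coordinates, clamped to the image bounds.
    tx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    out[ty, tx] = image[ys, xs]
    return out

def symmetric_splat_frame(image, flow, t, n_frames):
    """Frame t of an n_frames loop: blend a forward-warped 'future' copy
    with a backward-warped 'past' copy. At t=0 the future copy is the
    original image; at t=n_frames the past copy is, so the loop closes."""
    alpha = t / n_frames
    future = splat(image, flow * t)              # warped forward by t steps
    past = splat(image, -flow * (n_frames - t))  # warped back by n_frames - t steps
    return (1 - alpha) * future + alpha * past
```

With zero motion every frame reproduces the input image, which is a quick sanity check that the blend weights are balanced.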
From University of Washington News
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA