New software can predict how objects captured by a camera will most likely behave.
Researchers at the Allen Institute for Artificial Intelligence (Ai2) in Seattle developed the system, which combines machine learning and three-dimensional (3D) modeling to draw conclusions about the physical properties of a scene.
Ai2's Roozbeh Mottaghi and colleagues converted more than 10,000 images into scenes rendered in a simplified format using a 3D physics engine, and fed the images and 3D representations into a computer running a deep-learning neural network.
Mottaghi says the computer gradually learned to associate a particular scene with certain simple forces and motions. When shown unfamiliar images, the system could suggest the various forces that might be at play.
Mottaghi notes the system does not work perfectly, but more often than not it draws a sensible conclusion. For example, given an image of a stapler sitting on a desk, the system can determine that if the stapler is pushed across the desk, it will fall abruptly to the floor.
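The core idea described above is supervised learning of a mapping from a scene representation to a candidate force or motion. The sketch below is purely illustrative and is not Ai2's actual model: it replaces the deep network and rendered 3D scenes with a tiny softmax classifier over made-up feature vectors, where the labels stand in for "which force applies to this scene". All names and sizes here are assumptions for the toy example.

```python
import numpy as np

# Toy stand-in for "learn to associate a scene with a force":
# each synthetic "scene" is a feature vector, and its label is a
# made-up rule (index of the strongest feature group) standing in
# for the force/motion category a physics engine would provide.
rng = np.random.default_rng(0)

N_SCENES, N_FEATURES, N_FORCES = 200, 16, 4  # assumed toy sizes

X = rng.normal(size=(N_SCENES, N_FEATURES))
y = X.reshape(N_SCENES, N_FORCES, -1).sum(axis=2).argmax(axis=1)

W = np.zeros((N_FEATURES, N_FORCES))  # linear classifier weights

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

losses = []
for step in range(300):  # plain gradient descent on cross-entropy
    p = softmax(X @ W)
    onehot = np.eye(N_FORCES)[y]
    losses.append(-np.log(p[np.arange(N_SCENES), y]).mean())
    W -= 0.1 * (X.T @ (p - onehot)) / N_SCENES

# After training, the classifier "suggests" a force for a new scene.
accuracy = (softmax(X @ W).argmax(axis=1) == y).mean()
```

In the real system, the feature vector would be a learned representation of the image, and the labels would come from simulating the scene in the 3D physics engine rather than from a hand-written rule.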
Mottaghi says the system could potentially help make robots and other machines less prone to error.
From Technology Review
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA