Researchers at the U.S. Army Research Laboratory (ARL), the Massachusetts Institute of Technology, Carnegie Mellon University, the National Aeronautics and Space Administration's Jet Propulsion Laboratory, and Boston Dynamics have developed software that allows robots to understand verbal instructions, carry out those instructions, and report back.
The robot can accept verbal instructions, interpret gestures, or be controlled via a tablet to return data in the form of maps and images.
The researchers used deep learning to teach the system to identify objects, and paired it with a knowledge base that provides more detailed information to help the robot carry out its orders.
Said ARL's Ethan Stump, "The robot can make maps, label objects in those maps, interpret and execute simple commands with respect to those objects, and ask for clarification when there is ambiguity in the command."
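The behavior Stump describes, executing a command against labeled map objects and asking for clarification when the reference is ambiguous, can be sketched as follows. This is a hypothetical illustration, not ARL's actual software; the map structure and function name are invented for the example.

```python
# Hypothetical sketch (not ARL's implementation): executing a simple
# command over labeled objects in a map, asking for clarification
# when the referenced label matches more than one object.

def execute_command(target_label, labeled_map):
    """labeled_map maps object labels to lists of map coordinates."""
    matches = labeled_map.get(target_label, [])
    if not matches:
        return f"I don't see a {target_label}."
    if len(matches) > 1:
        # Ambiguity: several objects carry this label, so ask back.
        return f"Which {target_label}? I see {len(matches)} of them."
    return f"Going to the {target_label} at {matches[0]}."

robot_map = {"door": [(2, 3)], "chair": [(0, 1), (5, 5)]}
print(execute_command("chair", robot_map))  # ambiguous: two chairs
print(execute_command("door", robot_map))   # unique: command executes
```

A real system would resolve ambiguity with dialogue or gesture context rather than a simple count, but the control flow (map lookup, ambiguity check, clarification request) is the same idea.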
From Technology Review
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA