Monocular vision can be sufficient for a robot to perform its job as long as its brain can process environmental data rapidly enough, says Imperial College London computer scientist Andrew Davison. Because stereoscopic vision is computationally expensive, the preferred technique has been Simultaneous Localization And Mapping (SLAM) with sensors such as laser-based range finders, which perceive the environment by bouncing beams of light off it and timing their return. Davison's goal is to substitute a simpler and cheaper Web camera for the range finders.
Davison is developing a method that uses a single, moving video camera to generate continually updated three-dimensional maps to guide a machine, collecting and integrating images captured from different angles as the camera travels, in real time. To produce a usable map, the camera's measurements must be triangulated: gathering enough observations of the same set of features from different angles, combined with a sufficiently fast program, allows both the features' positions and the camera's location to be calculated. Davison seeks to make his programs highly efficient by running them on standard processors, analyzing their bottlenecks, and reducing the number of computational steps.
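The triangulation idea can be illustrated with a minimal sketch. This is not Davison's MonoSLAM code; it is a toy example, using the standard linear (DLT) method, of how one scene point observed by the same camera from two different positions can be located in 3D. All names, the intrinsic matrix, and the camera poses are hypothetical.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Compose a 3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations.

    Each observation contributes two rows to a homogeneous system A X = 0;
    the solution is the right singular vector for the smallest singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: an assumed pinhole camera that moves 1 unit sideways
# between two frames (t = -R @ C, with camera center C = [1, 0, 0]).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-1.0, 0.0, 0.0]))

# A scene point, its two noise-free observations, and its reconstruction.
X_true = np.array([0.5, 0.2, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))
```

With noise-free observations the linear solution recovers the point exactly (up to numerical precision); a real system must repeat this for many features per frame while also estimating the camera poses themselves, which is why Davison's emphasis on computational efficiency matters.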
Davison and his colleagues recently demonstrated this new SLAM model running at 200 frames per second on a camera tossed from hand to hand. This is a high enough rate to track fast movement, so single-camera eyes could be incorporated into flying or jumping robots used to explore areas that are too hazardous for people.
From The Economist
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA