Computer-human interface researchers at the Massachusetts Institute of Technology (MIT) have developed a prototype control device that predicts which function the user is trying to access based on how the device is handled. "The ideal device would be a generic block, like a bar of soap, that knew the user's intent and could change its interface accordingly," says MIT's Brandon Taylor. Taylor and colleague Michael Bove built a device that contains a liquid-crystal display screen on the front and rear, a three-axis accelerometer to measure motion, and 72 sensors on its surface to track the position of the user's fingers. They gave the device to users and asked them to hold it as if it were a remote control, personal digital assistant, camera, game controller, or mobile phone, which showed Taylor and Bove how users expect to hold each kind of device. Those results were programmed into the device so that it knows which mode users expect when it is held a certain way.
When trained on one person, which produces the best results, the device correctly guesses which mode to enter 95 percent of the time. "From our work, we are convinced that grasp-recognition could be implemented as a useful user interface," Taylor says. He will present his research at ACM's CHI 2009 conference, which takes place April 4-9 in Boston.
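The article does not describe the prototype's actual classification method, but the per-user training it mentions can be illustrated as a simple nearest-neighbor classifier over sensor features. The sketch below is purely hypothetical: the sensor layout, feature values, mode labels, and the `classify_grasp` function are illustrative assumptions, not the MIT team's implementation.

```python
# Hypothetical sketch of grasp recognition as nearest-neighbor classification.
# Feature vectors stand in for touch-sensor readings plus accelerometer axes;
# the real prototype's features and algorithm are not given in the article.
import math

def classify_grasp(sample, training_data):
    """Return the mode label of the stored example closest to `sample`.

    `training_data` maps a mode label (e.g. "camera") to feature vectors
    recorded while one user held the device as that kind of gadget.
    """
    best_label, best_dist = None, float("inf")
    for label, examples in training_data.items():
        for example in examples:
            dist = math.dist(sample, example)  # Euclidean distance
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

# Toy per-user training set: three touch sensors plus one accelerometer axis.
training = {
    "remote_control": [[1, 0, 1, 0.1], [1, 0, 1, 0.2]],
    "camera":         [[0, 1, 1, 0.9], [0, 1, 1, 0.8]],
}

print(classify_grasp([1, 0, 1, 0.15], training))  # → remote_control
```

Training on a single person, as the article notes, helps a scheme like this because one user's grasps cluster tightly in feature space.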
From New Scientist