A new gesture-based interface developed by the Hasso Plattner Institute's Christian Holz and Microsoft Research's Andy Wilson does not require users to memorize a specific set of movements.
With Data Miming, users trace the key components of objects such as a chair or table with their hands and maintain the proportions throughout the mime. "Starting from the observation that humans can effortlessly understand which objects are being described when hand motions are used, we asked why computers can't do the same thing," Holz says.
The system uses a Microsoft Kinect motion-capture camera to create a three-dimensional representation of hand movements. Users activate voxels (the three-dimensional analogue of pixels) as their hands pass through space.
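The voxel-activation step can be pictured as marking cells of a 3D grid wherever a tracked hand point passes. The sketch below is an illustrative assumption, not the system's actual implementation; the grid resolution, volume bounds, and sample trace are all hypothetical.

```python
import numpy as np

GRID = 32                      # voxels per axis (assumed resolution)
LO, HI = -1.0, 1.0             # tracked volume bounds in metres (assumed)

def activate(voxels, hand_points):
    """Mark every voxel that a tracked hand point passes through."""
    for x, y, z in hand_points:
        # Map a world coordinate to a voxel index on each axis.
        idx = tuple(int((c - LO) / (HI - LO) * (GRID - 1)) for c in (x, y, z))
        if all(0 <= i < GRID for i in idx):
            voxels[idx] = True
    return voxels

voxels = np.zeros((GRID, GRID, GRID), dtype=bool)
# A sweep of hand positions tracing a horizontal surface (e.g. a table top).
trace = [(x / 10.0 - 0.5, 0.0, z / 10.0 - 0.5)
         for x in range(10) for z in range(10)]
activate(voxels, trace)
print(voxels.sum())  # count of activated voxels
```

A real system would feed in hand positions streamed from the Kinect's skeletal tracking rather than a synthetic trace.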
Data Miming also infers when enclosed space should be included in a representation. It compares the resulting voxel model against a database of objects stored in the same voxel form and selects the closest match. In tests, the system correctly recognized 75 percent of mimes, and the intended object was among the top three database matches 98 percent of the time.
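The matching step amounts to ranking stored voxel models by how well they overlap the mimed volume. The sketch below uses a Jaccard (intersection-over-union) score and a toy two-object database; both the similarity measure and the data are illustrative assumptions, not the researchers' actual method.

```python
import numpy as np

def jaccard(a, b):
    """Overlap score between two boolean voxel grids of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def top_matches(query, database, k=3):
    """Return the k database entries most similar to the mimed volume."""
    ranked = sorted(database.items(),
                    key=lambda kv: jaccard(query, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

def empty():
    return np.zeros((8, 8, 8), dtype=bool)

# Toy 8x8x8 database: a flat "table top" and a vertical "chair back".
table = empty(); table[:, 4, :] = True     # horizontal slab
chair = empty(); chair[:, :, 0] = True     # vertical plane

# An imperfect mime of the table: the slab with one voxel missed.
mime = empty(); mime[:, 4, :] = True; mime[3, 4, 3] = False
print(top_matches(mime, {"table": table, "chair": chair}, k=2))
```

Reporting the top-k list rather than a single winner mirrors the evaluation above, where the intended object appeared among the top three matches 98 percent of the time.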
From New Scientist