
Alternate Interface Technologies Emerge

Researchers working in human-computer interaction are developing new interfaces to produce greater efficiencies in personal computing and enhance miniaturization in mobile devices.
NanoTouch, a back-of-device input technology for very small screens on mobile devices and electronic jewelry. The technology demonstrated here by Patrick Baudisch was developed at Microsoft Research and Hasso Plattner Institute.

Hardware engineers continue to pack more processing power into smaller designs, opening up an array of possibilities that researchers say will lead to human-computer interfaces that are more natural and efficient than the traditional hallmarks of personal computing. These smaller designs have given rise to new mobile platforms, where the barrier to further miniaturization is no longer the hardware itself but rather humans’ ability to interact with it. Researchers working in human-computer interaction (HCI) are dedicating effort in both areas, developing interfaces that they say will unlock greater efficiencies and designing new input mechanisms to eliminate some of the ergonomic barriers to further miniaturization in mobile technology.

Patrick Baudisch, a computer science professor at Hasso Plattner Institute in Potsdam, Germany, points out that there are two general approaches to HCI, a field that draws on computer science, engineering, psychology, physics, and several design disciplines. One approach focuses on creating powerful but not always totally reliable interfaces, such as speech or gesture input. The other focuses on creating less complex, more reliable input techniques. Partial to the second approach, Baudisch argues that interfaces developed with simplicity and reliability in mind facilitate an uninterrupted engagement with the task at hand, increasing the opportunity for users to experience what psychologist Mihaly Csikszentmihalyi calls “optimal experience” or “flow.”

Baudisch began his HCI career in the Large Display User Experience group at Microsoft Research, where he focused on how users could interact more effectively with wall displays and other large-format technologies that render traditional input techniques nearly useless. In his current work at the Hasso Plattner Institute, Baudisch focuses on projects designed to facilitate the transition from desktop to mobile computing. “There is a single true computation platform for the masses today,” he says. “It is not the PC and not One Laptop Per Child. It is the mobile phone—by orders of magnitude. This is the exciting and promising reality we need to design for.”

One example of an interface technology that Baudisch designed to facilitate this transition to mobile computing is NanoTouch. While current mobile devices offer advanced capabilities, such as touch input, they must remain large enough to be operated with fingers. The NanoTouch interface, which is designed to sidestep this physical constraint, makes the mobile device appear translucent and moves touch input to the device’s back side so that the user’s fingers do not block the front display. Baudisch says NanoTouch eliminates the requirement to build interface controls large enough for big fingertips and makes it possible to interact with devices much smaller than today’s handheld computers and smartphones.
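
At its core, back-of-device input amounts to a coordinate mapping: a touch sensed on the rear of the device lines up with what the user sees on the pseudo-transparent front once the horizontal axis is mirrored. The sketch below illustrates only that mapping; the names and the assumption that the rear sensor and front display share the same physical dimensions are illustrative, not details of the NanoTouch implementation.

    # Minimal sketch of back-of-device coordinate mapping (hypothetical names;
    # assumes the rear touch sensor and the front display are the same size).
    from dataclasses import dataclass

    @dataclass
    class TouchPoint:
        x: float  # millimeters from the left edge of the surface
        y: float  # millimeters from the top edge of the surface

    def back_to_front(touch: TouchPoint, device_width_mm: float) -> TouchPoint:
        # A finger on the back appears mirrored left-to-right when "seen through"
        # the display, so flip the horizontal axis to land under the visual target.
        return TouchPoint(x=device_width_mm - touch.x, y=touch.y)

    # Example: on a 20 mm-wide device, a rear touch at x=5 mm maps to x=15 mm.
    print(back_to_front(TouchPoint(5.0, 8.0), device_width_mm=20.0))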

In his most recent project, called RidgePad, Baudisch is working on a way to improve the recognition accuracy of touch screens. By monitoring not only the contact area between finger and screen, but also the user’s fingerprint within that contact area, RidgePad reconstructs the exact angle at which the finger touches the display. This additional information allows for more specific touch calibration, and, according to Baudisch, can effectively double the accuracy of today’s touch technology.
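
The underlying intuition is that an oblique finger shifts the contact centroid away from the point the user is aiming at, so knowing the finger’s angle lets software compensate. The following sketch shows that kind of angle-dependent correction in principle; the linear offset model, constants, and function names are illustrative assumptions rather than RidgePad’s actual calibration.

    # Hedged sketch of angle-aware touch correction (not RidgePad's actual model).
    import math

    def corrected_touch(cx_mm, cy_mm, pitch_deg, yaw_deg, gain_mm_per_deg=0.05):
        # cx_mm, cy_mm: raw contact centroid reported by the touch screen.
        # pitch_deg: how flat the finger lies on the glass (0 = upright finger).
        # yaw_deg: direction the finger points within the screen plane.
        offset = gain_mm_per_deg * pitch_deg            # flatter finger, larger offset
        dx = offset * math.cos(math.radians(yaw_deg))   # component along finger direction
        dy = offset * math.sin(math.radians(yaw_deg))
        return cx_mm - dx, cy_mm - dy                   # shift back toward the intended target

    print(corrected_touch(10.0, 20.0, pitch_deg=40, yaw_deg=90))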


Increasing Human Performance

Another HCI researcher focusing on new interface technologies for mobility is Carnegie Mellon University’s Chris Harrison, who points out that while computers have become orders of magnitude more powerful than they were a few decades ago, users continue to rely on the mouse and keyboard, technologies that are approximately 45 and 150 years old, respectively. “That’s analogous to driving your car with ropes and sails,” he says. “It’s this huge disparity that gets me excited about input.” Harrison, a graduate student in CMU’s Human-Computer Interaction Institute, says that because computers have grown so powerful, humans are now the bottleneck in most operations. So the question for Harrison is how to leverage the excess computing power to increase human performance.

One of Harrison’s projects in this vein grew out of his observation that mobile devices frequently rest on large surfaces: Why not use those surfaces for input? This line of thinking led to Harrison’s Scratch Input technology. The idea behind Scratch Input is that instead of picking up your media player to change songs or adjust volume, the media player stays where it is but monitors acoustic information with a tiny, built-in microphone that listens to the table or desk surface. To change the volume or skip to the next track, for example, you simply run your fingernail over the surface of the table or desk using different, recognizable scratch gestures. The media player captures the acoustic information propagating through the table’s surface and executes the appropriate command.
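
A toy version of the signal processing involved might simply watch the microphone’s amplitude envelope and count distinct scratch strokes, then map stroke counts to commands. The snippet below is such a sketch; the threshold, timing, and gesture-to-command mapping are assumptions for illustration, not the published Scratch Input recognizer.

    # Illustrative burst counter for surface-conducted scratch sounds
    # (thresholds and the command mapping are assumed, not from Scratch Input).
    import numpy as np

    def count_scratch_strokes(samples, sample_rate, threshold=0.2, min_gap_s=0.08):
        # samples: mono audio buffer with values in [-1, 1] from the built-in mic.
        envelope = np.abs(np.asarray(samples))
        min_gap = int(min_gap_s * sample_rate)   # silence needed to separate strokes
        strokes, quiet = 0, min_gap
        for loud in envelope > threshold:
            if loud and quiet >= min_gap:
                strokes += 1                     # a new stroke begins after a quiet gap
            quiet = 0 if loud else quiet + 1
        return strokes

    # e.g., one stroke -> play/pause, two strokes -> next track (assumed mapping)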

In addition to developing Scratch Input, which Harrison says is now mature enough to be incorporated into commercial products, he and his colleagues have been working on multitouch displays that can physically deform to simulate buttons, sliders, arrows, and keypads. “Regular touch screens are great in that they can render a multitude of interfaces, but they require us to look at them,” says Harrison. “You cannot touch type on your iPhone.” The idea with this interface technology, which Harrison calls a shape-shifting display, is to offer some of the flexibility of touch screens while retaining some of the beneficial tactile properties of physical interfaces.

Another interface strategy designed to offer new advantages while retaining some of the benefits of older technology is interpolating force-sensitive resistance (IFSR). Developed by two researchers at New York University, IFSR sensors are based on a method called force-sensitive resistance (FSR), which has been used for three decades to create force-sensing buttons for many kinds of devices. However, until Ilya Rosenberg and Ken Perlin collaborated on the IFSR project, it was both difficult and expensive to capture the accurate position of multiple touches on a surface using traditional FSR technology alone. “What we created to address this limitation took its inspiration from human skin, where the areas of sensitivity of touch receptors overlap, thereby allowing for an accurate triangulation of the position of a touch,” says Perlin, a professor of computer science at NYU’s Media Research Lab.

In IFSR sensors, each sensor element detects pressure in an area that overlaps with its neighboring elements. By sampling the values from the touch array, and comparing the output of neighboring elements in software, Rosenberg and Perlin found they could track touch points with an accuracy approaching 150 dots per inch, more than 25 times greater than the density of the array itself. “In designing a new kind of multitouch sensor, we realized from the outset how much more powerful a signal is when properly sampled,” says Rosenberg, a graduate student in NYU’s Media Research Lab. “So we aimed to build an input device that would be inherently anti-aliasing, down to the level of the hardware.”
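
The interpolation itself can be pictured as a pressure-weighted centroid: because neighboring elements respond to overlapping areas, averaging their readings pins down a touch far more finely than the physical sensor pitch. The sketch below shows that idea on a small pressure grid; the grid layout, units, and 5 mm pitch are assumptions for the example, not Touchco’s design.

    # Minimal sketch of sub-sensor touch localization by weighted centroid
    # (grid pitch and units are assumptions for illustration).
    import numpy as np

    def touch_position_mm(pressure, pitch_mm=5.0):
        # pressure: 2D array of readings from a coarse force-sensing grid.
        total = pressure.sum()
        if total == 0:
            return None                                   # nothing touching the pad
        rows, cols = np.indices(pressure.shape)
        y = (rows * pressure).sum() / total * pitch_mm    # pressure-weighted row
        x = (cols * pressure).sum() / total * pitch_mm    # pressure-weighted column
        return x, y

    # A touch centered between two elements shows up halfway between them:
    grid = np.zeros((4, 4)); grid[1, 1] = grid[1, 2] = 0.5
    print(touch_position_mm(grid))   # -> (7.5, 5.0), finer than the 5 mm pitch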

Recognizing the increased interest in flexible displays, electronic paper, and other technologies naturally suited to their core technology, Rosenberg and Perlin spun off their sensor technology into a startup called Touchco, and now are working with other companies to integrate IFSR into large touch screens and flexible electronic displays. In addition, the team is looking into uses as diverse as musical instruments, sports shoes, self-monitoring building structures, and hospital beds.


“It seems that many of the hurdles are largely ones of cultural and economic inertia,” says Perlin. “When a fundamentally improved way of doing things appears, there can be significant time before its impact is fully felt.”

As for the future of these and other novel input technologies, users themselves no doubt will have the final word in determining their utility. Still, researchers say that as input technologies evolve, the recognizable mechanisms for interfacing with computers will likely vanish altogether and be incorporated directly into our environment and perhaps even into our own bodies. “Just as we don’t think of, say, the result of LASIK surgery as an interface, the ultimate descendants of computer interfaces will be completely invisible,” predicts Perlin. “They will be incorporated in our eyes as built-in displays, implanted in our ears as speakers that properly reconstruct 3D spatial sound, and in our fingertips as touch- or haptic-sensing enhancers and simulators.”

On the way toward such seamlessly integrated technology, it’s likely that new interface paradigms will continue to proliferate, allowing for computer interactions far more sophisticated than the traditional mouse and keyboard. CMU’s Harrison predicts that eventually people will be able to walk up to a computer, wave their hands, speak to it, stare at it, frown, laugh, and poke its buttons, all as ways to communicate with the device. In Harrison’s vision of this multimodal interfacing, computers will be able to recognize nuanced human communication, including voice tone, inflection, and volume, and will be able to interpret a complex range of gestures, eye movement, touch, and other cues.

“If we ever hope for human-computer interaction to achieve the fluidity and expressiveness of human communication, we need to be equally diverse in how we approach interface design,” he says. Of course, not all tools and technologies will require a sophisticated multimodal interface to be perfectly functional. “To advance to the next song on your portable music player, a simple button can be fantastically efficient,” says Harrison. “We have to be diligent in preserving what works, and investigate what doesn’t.”

Further Reading

Baudisch, P. and Chu, G. Back-of-device interaction allows creating very small touch devices. Proceedings of the 27th International Conference on Human Factors in Computing Systems, Boston, MA, April 2009.

Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience. HarperCollins, New York, 1990.

Erickson, T. and McDonald, D. W. (eds.) HCI Remixed: Reflections on Works That Have Influenced the HCI Community. MIT Press, Cambridge, MA, 2008.

Harrison, C. and Hudson, S. E. Scratch input: creating large, inexpensive, unpowered, and mobile finger input surfaces. Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology, Monterey, CA, October 2008.

Rosenberg, I. and Perlin, K. The UnMousePad: an interpolating multi-touch force-sensing input pad. ACM Transactions on Graphics 28, 3, August 2009.

Figures

Figure 1. NanoTouch, a back-of-device input technology for very small screens on mobile devices and electronic jewelry. The technology demonstrated here by Patrick Baudisch was developed at Microsoft Research and Hasso Plattner Institute.

Figure 2. Interpolating force-sensitive resistance (IFSR), a multitouch input technology developed at New York University’s Media Research Lab. Here, Ilya Rosenberg demonstrates a 24-inch Touchco IFSR sensor that serves as an interactive desktop surface.

Figure 3. A prototype shape-shifting ATM display that can assume multiple graphical and tactile states. The display was developed at Carnegie Mellon University’s Human-Computer Interaction Institute.
