
Making Graphics Physically Tangible

Computer control of the forces exerted on users through haptic interfaces adds physically tangible sensations to 3D imagery.
  1. Introduction
  2. From Robot Fingers to Haptic Interface
  3. A Force Over Time
  4. Commercial Applications
  5. Bumps and Thwacks
  6. References
  7. Author
  8. Footnotes
  9. Figures
  10. Sidebar: Definitions

Touching a virtual object for the first time can provoke surprise, wonder, delight, even a bit of fear. Not unlike the first time you see a magic trick, this touch startles you because of the sudden “appearance” of something physical, apparently from nothing. But “touching” a virtual object requires a specialized display system, or “haptic interface,” to transmit forces back to your hand or fingers in a way that mimics the sensation of touching real objects. Such interfaces let you feel objects created by the computer, much as a graphic display lets you see computer-generated objects.

While true 3D haptic interfaces are still fairly expensive, simpler force-feedback devices are widespread and can be found in today’s high-end gaming interfaces and pointing devices. Here, I take a more personal view of haptics evolution, focusing primarily on developments derived from my team’s research at MIT in the early 1990s. During the spring of 1993, that work produced a new haptic interface that came to be called the PHANTOM [7]. Quickly commercialized due to strong interest from many colleagues and technically progressive corporations, the device is now in use worldwide, with hundreds of PHANTOM haptic interfaces in the field; for better or worse, they represent an important niche—haptic technology—and may well point the way toward a world in which computer haptics is commonplace. Given my personal involvement with the development of PHANTOM haptic technology from the beginning, my views may seem a bit biased, so I apologize to my many friends in the field for focusing here on PHANTOM-based haptics, but I hope this account serves to introduce the emerging field to a wider audience. I also draw on my research team’s earlier work in robot perception to shed light on our underlying motivations. To be sure, there is satisfaction in watching a technology you helped create take on a life of its own, but it is more important to understand why it has been successful, what it provides, and where it could go.

The PHANTOM interface is an electromechanical device small enough to sit on the surface of a desk; it connects to a computer’s input/output port and has a mechanical arm supporting a pen or thimble interface at its tip, through which it can exert forces on users while tracking their motions. The rendering of haptic objects requires applying forces to users in ways that give them the illusion of touching something. It may be as simple as feeling a wall or as complex as simulating the physical sensation of a surgical procedure. The rendering of forces to create such physical illusions requires three main components: a model of object geometry and material properties; a haptic interface that can track users’ motion and impose forces on them; and a rendering algorithm that generates forces in response to user movement and exploration. It also requires real-time physics and collision-detection computation.
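To make this concrete, here is a minimal sketch of such a rendering loop, written in C++; it is an illustration, not the actual PHANTOM driver or the GHOST API. The device-I/O functions are hypothetical stand-ins (simulated here so the sketch compiles and runs), and the object is the simplest possible one: a flat wall at z = 0, rendered as a spring whose force grows with penetration depth.

    #include <array>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;

    // Hypothetical device I/O, simulated: the tip descends 1 mm per tick.
    Vec3 readDevicePosition(int tick) { return {0.0, 0.0, 0.005 - 0.001 * tick}; }
    void commandDeviceForce(const Vec3& f) { std::printf("fz = %.2f N\n", f[2]); }

    int main() {
        const double k = 800.0;  // N/m; real devices cap renderable stiffness
        for (int tick = 0; tick < 15; ++tick) {  // stands in for an endless ~1 kHz servo loop
            Vec3 p = readDevicePosition(tick);
            Vec3 f = {0.0, 0.0, 0.0};
            if (p[2] < 0.0)           // tip has penetrated the wall...
                f[2] = -k * p[2];     // ...push back along the surface normal
            commandDeviceForce(f);
        }
    }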

One of the most interesting reactions I know of to the PHANTOM interface came from a blind user several years ago. Given a demonstration in which the device let him touch a (virtual) object, he seemed fairly uninterested, stroking his finger over the object’s virtual surface. But when the instructor reminded him there was no “real” object present, he jumped with surprise, suddenly grasping wildly with his other hand at the void in front of him in search of the nonexistent object he had been exploring. Another interesting reaction came from several surgeons who, upon using our early needle-biopsy simulations, commented that the needle seemed “dull,” rather than questioning the veracity of the simulation.

What makes these demonstrations so compelling? Even without visual cues, people touching our computer-generated objects often feel (in both senses of the word) they are interacting with real physical objects. Beyond the novelty of the experience, there is perhaps sufficient fidelity—and familiarity—in the physical sensations generated to make the illusion believably real.1

Increasingly, humans interact with computer-generated 3D visual information. The proliferation of real-time 3D graphics and spatial audio accelerators fills our computers with realistic 3D objects. Adding touch to the ways users interact with these objects represents an opportunity to profoundly expand these digital realities. The advent of haptic interaction in graphical environments could be compared to the contribution sound made to motion pictures and stereo to monophonic sound. Adding to or enhancing a sensory modality brings consistency and dimension to the experience, engaging a user’s attention and interest. With physical interaction providing an alternate and instructive layer of perception, objects (even those that aren’t really there in the physical world) become more convincingly real and memorable [12].

Haptic interaction is a special sensory modality in that it combines sensing and action. The energy and information flows it represents are bidirectional, so that, as we touch and manipulate objects, we simultaneously change their state and receive information about them. In our interactions with the world, haptics adds not only a compelling dimension to the information we receive, but another means for expressiveness in our actions as well (for the dictionary definition of haptic, see the sidebar “Definitions”).


From Robot Fingers to Haptic Interface

About 15 years ago, my research group at MIT’s Artificial Intelligence Lab was trying to develop a modest robot hand that could feel its way around its physical environment [4]. By measuring tendon tensions and actuator torques, we were able to roughly measure contact and grasping forces (and torques) at the fingertips. But the signal-to-noise ratio was so low that the robot could perform only the simplest of tasks. To improve the robot’s dynamic range in sensing its physical environment, we developed a six-axis force-sensing fingertip [1].

To avoid the complexities of tactile array sensors, we used only fingertip force-sensor information to detect the state of the robot’s contacts with objects [1]. Then, in 1991, one of my students showed me a spectrograph (frequency vs. time vs. intensity plot) of data from a force-sensing robot fingertip exploring the environment. In one experiment, a robot hand was programmed to repeatedly stroke a surface while we recorded the forces and torques exerted on the fingertip; Figure 1 shows an example of the time-varying force spectra we measured. Although the potential information content of these rich signals excited and sometimes even terrified us, designing algorithms that could interpret the information to enable autonomous robot perception seemed a daunting task.
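The kind of plot in Figure 1 can be approximated from any sampled force signal with a windowed Fourier transform. The sketch below is an illustration, not our original analysis code: it slides a Hann window along a synthetic force trace (a 100 Hz “texture” vibration, with the steady stroking force removed) and reports the prominent frequency bins in each window; a practical implementation would use an FFT library.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    const double PI = 3.14159265358979323846;

    int main() {
        const double fs = 1000.0;                 // force samples per second
        const int N = 1000, win = 128, hop = 64;  // 128 ms windows, half overlap
        std::vector<double> f(N);
        for (int n = 0; n < N; ++n)               // synthetic texture vibration
            f[n] = 0.2 * std::sin(2 * PI * 100.0 * n / fs);

        for (int start = 0; start + win <= N; start += hop)
            for (int k = 1; k < win / 2; ++k) {   // magnitude at each frequency bin
                double re = 0, im = 0;
                for (int n = 0; n < win; ++n) {
                    double w = 0.5 - 0.5 * std::cos(2 * PI * n / (win - 1)); // Hann
                    re += w * f[start + n] * std::cos(2 * PI * k * n / win);
                    im -= w * f[start + n] * std::sin(2 * PI * k * n / win);
                }
                double mag = std::sqrt(re * re + im * im);
                if (mag > 2.0)                    // report only prominent bins
                    std::printf("t = %.2f s  %3.0f Hz  |F| = %.1f\n",
                                start / fs, k * fs / win, mag);
            }
    }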

Our subsequent effort to develop techniques in segmentation and interpretation of these signals allowed us to develop a robot (actually a PHANTOM used as a robot) that could feel its way through a maze with its fingertips [3]. While this investigation provided some insight into understanding contact signals, impatience with being unable to make use of this information compelled us to rethink our goals.

We realized we might do something very interesting by inverting the problem we had been trying to solve. Anyone can mimic our robot’s limited “view” of the physical world by simply picking up a stylus or pen and stroking objects in the environment—immediately providing a great deal of sensory information. Although touching objects with tools, such as a stylus, limits the dimension and bandwidth of information conveyed, humans can perceive material, shape, hardness, mobility, texture, identity, and many more physical characteristics. The perception skills our PHANTOM-based robot struggled so nobly to acquire are trivial for humans in everyday life.


We realized that if we could compute and apply force signals, like those in Figure 1, we might give the user the illusion of touching highly detailed objects. Humans are exquisitely skilled at integrating spatial and temporal variations in force into coherent models of the mechanical world we live in. How to take advantage of this innate human ability quickly became an important focus of our work and the basis for developing the PHANTOM haptic interface. Although the idea of touching virtual objects with force-feedback devices was not new, limiting our emphasis to point-contact interactions freed us from the complexities of higher-dimensional models to focus on feedback fidelity and detail in simulated objects.
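A sketch of that inversion: rather than interpreting force signals, we synthesize them. Here (an illustration with made-up parameters, not code from our work) a flat wall is given ridges by modulating its penalty force with the tip’s lateral position, so a geometrically featureless plane feels textured.

    #include <array>
    #include <cmath>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;
    const double PI = 3.14159265358979323846;

    // Force for a textured wall occupying z < 0, as a function of tip position.
    Vec3 texturedWallForce(const Vec3& p) {
        const double k = 800.0;       // N/m, base wall stiffness
        const double period = 0.004;  // m, spatial period of the ridges
        if (p[2] >= 0.0) return {0.0, 0.0, 0.0};    // not in contact
        double fn = -k * p[2];                      // penalty (spring) force
        double ripple = 0.2 * std::sin(2.0 * PI * p[0] / period);
        return {0.0, 0.0, fn * (1.0 + ripple)};     // position-modulated normal force
    }

    int main() {  // sample the force while sliding laterally across the surface
        for (double x = 0.0; x < 0.012; x += 0.001) {
            Vec3 f = texturedWallForce({x, 0.0, -0.001});
            std::printf("x = %.3f m  fz = %.2f N\n", x, f[2]);
        }
    }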


A Force Over Time

Our work with “force-sensing fingertips” confirmed the importance of two facts relevant to computer haptics. First, valuable event and material-property information is contained in the variations in force over time, with useful content even at relatively high frequencies (on the order of hundreds of Hz). Humans are wonderfully sensitive to the high-frequency information that abounds in such contact events as impact, sliding, sticking, slipping, and texture exploration [9]. Second, important geometric information is contained in the direction of contact forces and how they vary with position and time. Knowledge of this vector helps reveal an object’s overall shape and local curvature. In fact, intentional perturbation of this vector can give the illusion of local curvature variations and be used to advantage, much as Phong shading and bump mapping are used in the visual domain, as discussed in [6].
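The second fact is the basis of force shading [6]. The sketch below is a toy two-normal blend (a mesh implementation would interpolate over the contacted triangle): it directs the contact force along an interpolated normal instead of the true facet normal, so a flat facet feels rounded, much as Phong shading makes one look rounded.

    #include <array>
    #include <cmath>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;

    Vec3 normalize(const Vec3& v) {
        double len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return {v[0]/len, v[1]/len, v[2]/len};
    }

    // Penalty force of magnitude k*depth along a normal blended by t in [0,1].
    Vec3 shadedForce(double depth, double t, const Vec3& n0, const Vec3& n1) {
        const double k = 800.0;  // N/m
        Vec3 n = normalize({(1-t)*n0[0] + t*n1[0],
                            (1-t)*n0[1] + t*n1[1],
                            (1-t)*n0[2] + t*n1[2]});
        return {k*depth*n[0], k*depth*n[1], k*depth*n[2]};
    }

    int main() {  // the force direction tilts smoothly across the "flat" facet
        Vec3 n0 = {-0.3, 0.0, 1.0}, n1 = {0.3, 0.0, 1.0};  // vertex normals
        for (double t = 0.0; t <= 1.0; t += 0.25) {
            Vec3 f = shadedForce(0.001, t, n0, n1);
            std::printf("t = %.2f  f = (%.3f, %.3f, %.3f) N\n", t, f[0], f[1], f[2]);
        }
    }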

These insights are applied most directly when we assume the person is touching the object through a point of interaction, via either a fingertip or the tip of a stylus. Once we embrace the notion of the point-contact interaction model, encoding and rendering physical objects, as well as device design, are greatly simplified.


Commercial Applications

More recently, there has been a tremendous increase in haptics activity aimed at moving the technology from the laboratory into commercial applications.

Seismic modeling. Faced with enormous quantities of volumetric data, modern mineral and oil prospectors use haptic technology to search their data for important features both visually and haptically (see Figure 2). Enabling geologists to feel soil density, stratification, and other properties while seeing the information in 3D has inspired significant commitment to haptics by a number of major companies, including Shell Oil.
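One plausible mapping, offered as an assumption rather than as the cited systems’ documented method: sample a density field at the tip and oppose the tip’s velocity with drag proportional to that density, so dense strata resist being stroked through.

    #include <array>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;

    // Toy layered "volume": density steps up with depth, like strata.
    double sampleDensity(const Vec3& p) { return p[2] < -0.05 ? 3.0 : 1.0; }

    Vec3 dragForce(const Vec3& pos, const Vec3& vel) {
        const double b = 10.0;  // N·s/m per unit density (a tuning gain)
        double rho = sampleDensity(pos);
        return {-b * rho * vel[0], -b * rho * vel[1], -b * rho * vel[2]};
    }

    int main() {  // the same stroke feels three times as resistive at depth
        Vec3 vel = {0.1, 0.0, 0.0};  // 10 cm/s lateral stroke
        std::printf("shallow: fx = %.2f N\n", dragForce({0, 0, -0.01}, vel)[0]);
        std::printf("deep:    fx = %.2f N\n", dragForce({0, 0, -0.10}, vel)[0]);
    }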

Virtual prototyping. Designers, especially in the aircraft and automobile industries, are moving from expensive physical mockups of complex designs to all-digital designs and need effective ways to test the assemblability and maintainability of virtual prototypes. This need has inspired the development of haptically accessible virtual environments in which assembly and disassembly trials can guide final design.

Shape sculpting. In the design community, the need to create and modify computer-modeled shapes quickly and expressively has inspired SensAble Technologies, Cambridge, Mass., to develop “digital clay” technology (see Figure 3). Along with familiar and novel sculpting tools, this technology allows users to carry out expressive, free-form shape generation and modification. It is reasonable to expect industrial designers and animators to soon use such tools to construct and modify their models.
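A voxel-based sketch of the idea (an illustration, not SensAble’s implementation): represent the clay as an occupancy grid and let a spherical tool carve by clearing every voxel within its radius as it moves.

    #include <cstdio>
    #include <vector>

    const int N = 32;            // grid is N x N x N voxels
    const double VOXEL = 0.002;  // 2 mm voxels

    struct Clay {
        std::vector<unsigned char> v = std::vector<unsigned char>(N * N * N, 1);
        unsigned char& at(int x, int y, int z) { return v[(z * N + y) * N + x]; }
        // Clear all voxels whose centers lie within radius r of tool center c.
        void carve(double cx, double cy, double cz, double r) {
            for (int z = 0; z < N; ++z)
                for (int y = 0; y < N; ++y)
                    for (int x = 0; x < N; ++x) {
                        double dx = x * VOXEL - cx, dy = y * VOXEL - cy,
                               dz = z * VOXEL - cz;
                        if (dx * dx + dy * dy + dz * dz < r * r) at(x, y, z) = 0;
                    }
        }
    };

    int main() {
        Clay clay;
        clay.carve(0.032, 0.032, 0.064, 0.01);  // one dab of a 1 cm tool at the top
        int remaining = 0;
        for (unsigned char c : clay.v) remaining += c;
        std::printf("voxels remaining: %d of %d\n", remaining, N * N * N);
    }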

Molecular docking. Simulating the forces of interaction—through haptic feedback—between a ligand and a protein allows chemists to directly manipulate the complex structure. Mapping the natural phenomena of molecular forces into a perceptually accessible domain helps reveal the underlying mechanisms leading to or preventing successful docking in a way not possible before (see Figure 4). Examples include investigating steric and electrostatic forces acting on a complex model, testing conformational flexibility, and assessing the quality of fit. This information can be factored into the design process to help maximize the bioactivity of new compounds. This work, with origins in the molecular docking research at the University of North Carolina, is now being commercialized, most notably by Interactive Simulations, San Diego.
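In outline, the force mapping might look like the sketch below: sum pairwise Coulomb forces between the ligand’s atoms and the protein’s, then scale the net force to the device’s range. This is a toy under stated assumptions, with the Coulomb constant set to one and made-up atoms; real docking systems add van der Waals and other terms.

    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Atom { double x, y, z, q; };  // position (nm) and charge (e)

    // Net Coulomb force on the ligand, in arbitrary units (k_e set to 1).
    std::array<double, 3> netForce(const std::vector<Atom>& ligand,
                                   const std::vector<Atom>& protein) {
        std::array<double, 3> f = {0, 0, 0};
        for (const Atom& a : ligand)
            for (const Atom& b : protein) {
                double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
                double r2 = dx * dx + dy * dy + dz * dz;
                double r = std::sqrt(r2);
                double mag = a.q * b.q / r2;  // repulsive if charges agree
                f[0] += mag * dx / r; f[1] += mag * dy / r; f[2] += mag * dz / r;
            }
        return f;
    }

    int main() {
        std::vector<Atom> ligand = {{0.0, 0.0, 0.0, +1.0}};
        std::vector<Atom> protein = {{0.5, 0.0, 0.0, -1.0}, {0.0, 0.6, 0.0, +0.5}};
        auto f = netForce(ligand, protein);
        // In a haptic loop this force, suitably scaled, would go to the device.
        std::printf("f = (%.2f, %.2f, %.2f)\n", f[0], f[1], f[2]);
    }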

Surgical simulation and training. Surgery was one of the earliest research targets for haptics-based training. But the complexity of rendering compliant biomaterials makes virtual surgery a particularly challenging undertaking. The potential benefits of simulation-based training and preoperative planning have attracted significant research interest and commercial investment. Systems under development are moving toward use in training and certification in several surgical specialties. For example, in machine haptics, surgical telerobots already help humans perform cardiac and abdominal surgery. And it is easy to imagine the convergence of biosimulation and telesurgery in the near future.
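The difficulty is that the tissue model must deform plausibly while a stiff, stable force is returned near 1 kHz. A minimal mass-spring sketch shows the core update (one spring-supported node integrated with explicit Euler; real surgical simulators need far more robust meshes and integrators, and all constants here are illustrative):

    #include <cstdio>

    int main() {
        const double m = 0.01;      // kg, tissue node mass
        const double k = 200.0;     // N/m, spring to rest position
        const double c = 0.5;       // N·s/m, damping
        const double dt = 0.001;    // s, one servo tick
        double x = 0.005, v = 0.0;  // node pushed 5 mm by an instrument
        for (int step = 0; step < 50; ++step) {
            double f = -k * x - c * v;  // restoring force on the node
            v += (f / m) * dt;          // explicit Euler integration
            x += v * dt;
            if (step % 10 == 0)         // -f is the force fed back to the tool
                std::printf("t = %.3f s  x = %+.4f m\n", step * dt, x);
        }
    }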


Bumps and Thwacks

Now at a stage perhaps analogous to early vector graphics technology, haptics can move in many directions. You can already feel the bumps and thwacks in haptically enabled games. Force-reflecting mice and pens may let you feel the “click” of your buttons and the “mass” and “roughness” of your manuscript pages. Such software packages as the GHOST toolkit from SensAble, the HL language from Stanford University, and extensions to traditional 3D graphics toolkits, including the World Toolkit from Sense8, enable developers to incorporate haptics into new applications. As we develop new paradigms for haptic interaction with 3D information, and as the technology becomes less costly, perceiving, modifying, and creating palpable computer-generated objects should become a common activity. In fact, we fear that designing the look and feel of objects and actions may consume excessive amounts of people’s time, as Web page design often does today; still, it will take time and exploration before the technology is realized and applied routinely.

We also expect that better performance—improved bandwidth, vibration, temperature display, and perhaps even tactile array display—will increase the quality of human-computer interaction. Six-degree-of-freedom interfaces with sufficient performance are just now becoming available and will be useful in such applications as virtual assembly and molecular docking, in which orientation torques are important.

Motors with two orders of magnitude more torque and power density, and materials with similarly higher strength-to-weight ratios, might someday make the high-fidelity telemanipulators of Robert Heinlein’s “Waldo” fiction possible. Until then, proper interface design will depend on sensible hardware engineering, an understanding of the performance requirements dictated by human psychophysics, and finding applications that uniquely benefit from a haptic investment.

Integration of multiple sensory modalities will make the user’s sensory experience more complete. Most of today’s haptic applications do not take advantage of registered (coincident) visual and haptic images. The exceptions, including the Virtual Workbench telesurgical system from Intuitive Surgical, Mountain View, Calif., and emerging telesurgical workstations, provide a profound sense of reality by making what users touch agree spatially and temporally with what they see. True volumetric graphic displays would vastly expand the user workspace and sense of presence, as would very low-latency, high-resolution, head-mounted displays. The importance of visual cues can’t be ignored; what users expect to feel, shaped by an object’s appearance as they “touch” it, will affect their perception of its properties. Something that looks hard when pressed upon will seem to “feel” hard, even with limited interface stiffness. Similarly, sound cues tell users much about material properties and events; the underlying physics of vibration are closely tied to an object’s material properties.

Collaborative haptic applications may also be on the horizon. In some domains, such as surgery, haptic mentoring is already available, so, for example, an experienced surgeon can take a student’s hand and show how much force to exert to retract a layer of tissue. Soon surgical simulations will allow this collaborative mode of interaction. In mechanical and structural design, several people may want to “view” a design in order to test and modify it, commenting, perhaps, “Hey, this beam feels too compliant; did we size it wrong?” Arranging a new complex molecule may require several people to perceive and manage the many degrees of freedom defining its configuration. We’ve long toyed with the idea of a haptic volleyball game in which the virtual ball is “hit” over the “net” as a packet of time-stamped bits, to a team of haptically enabled players in a laboratory across the country who scramble for a shot and “propel” the digital ball back over the Internet.
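A sketch of the “time-stamped bits” idea: the ball crosses the net as a small stamped state packet, and the receiver dead-reckons it forward to the present, so play continues smoothly despite network latency. The packet layout and all numbers here are hypothetical.

    #include <array>
    #include <cstdio>

    struct BallPacket {
        double t;                        // sender's timestamp, seconds
        std::array<double, 3> pos, vel;  // ball state at time t
    };

    // Predict the ball's position at time 'now' under gravity alone.
    std::array<double, 3> predict(const BallPacket& p, double now) {
        double dt = now - p.t;
        return {p.pos[0] + p.vel[0] * dt,
                p.pos[1] + p.vel[1] * dt,
                p.pos[2] + p.vel[2] * dt - 0.5 * 9.81 * dt * dt};
    }

    int main() {  // a packet arrives 80 ms old; render the ball where it is now
        BallPacket p = {10.00, {0.0, 0.0, 1.0}, {3.0, 0.0, 2.0}};
        auto pos = predict(p, 10.08);
        std::printf("ball now at (%.2f, %.2f, %.2f) m\n", pos[0], pos[1], pos[2]);
    }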

Our research team’s NASA-funded MarsScape project is developing ways for planetary science teams, even the public, to “feel” the data from the next Mars landing (see Figure 5). Adding haptic properties to the database of Martian geology—terrain, rock, and soil textures—will allow more integrated navigation of and perhaps insight into the information. Moreover, as actual soil and rock textures are incrementally measured or otherwise inferred, the model’s fidelity will improve, providing a basis for organizing, presenting, and archiving scientific information.


In our brainstorming at MIT, we’ve also imagined a place called “the physical room” in which haptic interaction is a central component of a virtual environment. Prompting our thinking were the researchers down the hall who had devised an “intelligent room.” My mechanically inclined research team imagined a classroom-lab-theatre in which teaching, research, demonstration, and play could all occur. A space for perhaps 10 to 40 people would be outfitted with graphic and haptic displays, one for every one or two people in the room. On a given day, we might find a physics class exploring molecular and astrophysical mechanics, surgeons palpating virtual cases, learning to “feel” anomalies, and a lecture on Grecian ceramics in which computer-generated models are passed around, touched, and manipulated. Evenings might include performances of haptic art, collaborative play for people with sensory or cognitive handicaps, or a team of microsurgeons performing a dry run and planning session for the next day’s telemicrorobotic team surgery.


Figures

Figure 1. Forces measured while stroking a textured surface.

Figure 2. Feeling seismic data.

Figure 3. Digital clay. An object sculpted using haptically enabled tools.

Figure 4. Manipulating simulated molecules.

Figure 5. MarsScape. Adding haptic information to planetary science.

References

    1. Bicchi, A., Salisbury, J.K., and Brock, D. Contact sensing from force measurements. Int. J. Robotics Res. 12, 3 (June 1993), 249–262.

    2. Burdea, G. Force and Touch Feedback for Virtual Reality. John Wiley & Sons, New York, 1996.

    3. Eberman, B., and Salisbury, J.K. Segmentation and interpretation of temporal contact force signals. In Proceedings of Experimental Robotics III, Third International Symposium, V. Hayward and O. Khatib, Eds. (Kyoto, Japan, Oct.). Springer-Verlag, London, 1993.

    4. Mason, M., and Salisbury, J.K. Robot Hands and the Mechanics of Manipulation. MIT Press, Cambridge, Mass., 1985.

    5. McLaughlin, J., and Orenstein, B. Haptic rendering of 3D seismic data. In Proceedings of the 2nd PHANTOM Users Group Workshop (Cambridge, Mass., Dec.). MIT AI Lab Tech. Rep. 1617, 1997.

    6. Morgenbesser, H., and Srinivasan, M. Force shading for haptic shape perception. In Proceedings of the ASME Dynamic Systems and Control Division. ASME, 1996, pp. 407–412.

    7. Salisbury, J.K., Brock, D., Massie, T., Swarup, N., and Zilles, C. Haptic rendering: Programming touch interaction with virtual objects. In Proceedings of 1995 ACM Symposium on Interactive 3D Graphics (Monterey, Calif., Apr. 9–12). ACM Press, New York, 1995.

    8. Salisbury, J.K., and Srinivasan, M. PHANTOM-based haptic interaction with virtual objects. IEEE Comput. Graph. Appl. 17, 5 (Sept.-Oct. 1997).

    9. Srinivasan, M. Haptic interfaces. In Virtual Reality: Scientific and Technical Challenges, N. Durlach and A. Mavor, Eds. Report of the Committee on Virtual Reality Research and Development, National Research Council, National Academy Press, 1995.

    10. Srinivasan, M., and Basdogan, C. Haptics in virtual environments: Taxonomy, research status, and challenges. Comput. Graph. 21, 4 (July/Aug. 1997) (won Computers & Graphics Best Paper Award for 1997).

    11. Srinivasan, M., Cutkosky, M., Howe, R., and Salisbury, J. Human and Machine Haptics. MIT Press, Cambridge, Mass., 1999.

    12. Waldvogel, M. Le Sens du Toucher dans l'Architecture et les Arts (The Tactile Sense in Design), Ph.D. thesis, Swiss Federal Institute of Technology, Zurich, 1999.

Footnotes

    1. See [2, 9] for a good overview of these and other haptic interfaces and [11] for a more general view of human and machine haptics, including rendering and algorithmic issues. The Haptics Community Web page (haptic.mech.nwu.edu) and The Electronic Journal of Haptics Research (www.haptics-e.org) are good places to learn more about the haptics research community. An important reference is Margaret Minsky's collection of haptics literature citations (marg.www.media.mit.edu/people/marg/haptics-bibliography.html).
