Interactive Immersion in 3D Graphics

Designing Cranial Implants in a Haptic Augmented Reality Environment

Medical sculptors and neurosurgeons create virtual 3D cranial models based on patient CT data superimposed over their hands as if they were sculpting physical models.

Repairing severe human skull injuries requires customized cranial implants. Traditionally, medical sculptors have employed their anatomical modeling expertise to sculpt prosthetic implants using clay and wax. However, even with the aid of automated manufacturing techniques, the design process remains expensive in terms of labor, materials, and money. Any modification to a sculpted implant requires fabricating a new model from scratch. Techniques developed at the University of Illinois at Chicago (UIC) in 1996 have greatly improved this practice. Virtual reality research now aims to augment these tools and methods within a networked digital medium.

Closing large cranial defects offers patients therapeutic benefits, including restoring the shape of the head, protecting vital brain tissue, minimizing pain, reducing operating and recovery time, and in some cases even improving cognitive abilities. Unfortunately, many factors limit cranial implant availability. First among them, insurance companies often do not support this form of reconstruction due to the high cost of labor and manufacturing. Second, because only neurosurgeons and medical modelers have the specialized anatomical and technical knowledge to do the work, assembling the necessary expertise is difficult; travel expenses for both patients and specialists further increase the overall cost. Finally, implant material generates extreme heat as it solidifies. Exposing the brain to these temperatures damages tissue, so presurgical implant design is often critical to reducing patient risk.

Fady Charbel and Ray Evenhouse and their team at UIC pioneered a presurgical cranial implant design technique in 1996 [4] that develops custom-fit cranial implants prior to surgery using the patient’s computed tomography (CT) data (see Figure 1). The medical sculptor loads the CT data into medical computer-aided design (CAD) software and produces a digital model of the defect area. The polygonal model is exported to a rapid prototyping stereolithography unit. This computer-aided manufacturing (CAM) process fabricates a physical defect model. A sculptor shapes, molds, and casts the implant based on the model. Shaping the clay, the sculptor progressively sculpts the implant’s mold, then casts the implant by filling the mold with a medical-grade polymer. After casting, the implant is sterilized and prepared for surgery. To date, nine patients have received implants made using this method.

Several steps in this process, however, remain labor-intensive and costly. Rapid prototyping equipment requires significant material fabrication time. Though trained specialists create the implants, the process depends on subjective skills and procedures, often producing a close but imperfect fit that necessitates revisions during surgical placement. A separation of responsibilities exists between the sculptor designing the implant and the doctor performing the surgery, and assembling these specialists for consultation and evaluation can be difficult.


The medical sculptor must be able to feel the models, as well as view them.


Translating the real-world tools to VR equivalents can potentially improve today’s presurgical methods. Provided that interaction parallels the medical sculptor’s traditional techniques, the digital design and evaluation processes could benefit sculptors, neurosurgeons, and manufacturers alike. Starting with a 3D model based on patient CT data, sculptors interact with the graphics superimposed over their hands. Augmenting this form of immersive visualization with force feedback allows them to feel surfaces as if they were working with real objects. Since the implant model is now stored and manipulated in a digital format, remote users can collaboratively view and discuss the model over high-speed networks. The neurosurgeon, the manufacturer, and even the patient observe implant development and provide feedback during the design process. This networked collaboration gives the neurosurgeon a more active role during planning, while reducing the travel time and expenses for everyone. Exporting the sculpted implant model directly from the virtual environment promises the use of new, state-of-the-art implant materials, as the manufacturer can receive the 3D model over the network. We anticipate that this networked VR system will reduce consultation time and costs for the patient, surgeon, and implant manufacturer compared to traditional methods.

Immersive visualization systems, which are central to this research, support stereo vision, further helping the participants understand the depth relationships in the complex cranial model. Head tracking enables intuitive head movement, as the graphics are presented from the viewer’s perspective. Calibration ensures the graphics maintain an absolute scale. Rather than looking at improperly scaled flat images on a computer monitor, the medical sculptor perceives 3D virtual objects at their correct scale.
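Viewer-centered perspective amounts to recomputing an off-axis projection frustum from the tracked head position every frame. The following sketch illustrates the idea for a single fixed screen; it is a generalized-perspective calculation under simplified assumptions, not CAVELib's actual implementation, and the corner and eye coordinates in the usage below are hypothetical.

```python
import math

def off_axis_frustum(pa, pb, pc, pe, near):
    """Compute (left, right, bottom, top) of the near-plane frustum for a
    tracked eye position pe and a screen defined by three corners:
    pa = lower-left, pb = lower-right, pc = upper-left."""
    def sub(u, v): return [u[i] - v[i] for i in range(3)]
    def dot(u, v): return sum(u[i] * v[i] for i in range(3))
    def norm(u):
        m = math.sqrt(dot(u, u))
        return [x / m for x in u]
    def cross(u, v):
        return [u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0]]

    vr = norm(sub(pb, pa))       # screen right axis
    vu = norm(sub(pc, pa))       # screen up axis
    vn = norm(cross(vr, vu))     # screen normal, pointing toward the viewer

    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(vn, va)             # perpendicular eye-to-screen distance
    left   = dot(vr, va) * near / d
    right  = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top    = dot(vu, vc) * near / d
    return left, right, bottom, top
```

For an eye centered in front of a 2x2 screen the frustum is symmetric; as the tracked head moves right, both frustum edges shift left, which is exactly what keeps the virtual object fixed in space relative to the screen.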

Large displays contribute a sense of presence within the immersive environment, but the medical sculptor must be able to feel the models, as well as view them. Introducing force feedback adds the sense of touch, an essential feature for 3D modeling [7]. The PHANToM haptic device developed by SensAble Technologies, Inc., generates force feedback to simulate a sense of touch [9]. These forces allow the medical sculptor to feel surfaces of the virtual defect and implant models. Stereo vision, a vision-filling field-of-view, viewer-centered perspective, and interactivity, combined with the sense of touch supplied by the PHANToM haptic device, create a rich sensory environment for the development of cranial implants. Together, they let medical sculptors work in the virtual environment just as they would with physical models in the real world.
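The sense of touch such devices deliver is commonly rendered with a penalty-force model: once the stylus tip penetrates a virtual surface, a spring force proportional to the penetration depth pushes it back out. The sketch below shows that idea for a flat virtual plane with a made-up stiffness value; it is an illustration of the principle, not GHOST's actual haptic rendering loop.

```python
def contact_force(tool_pos, plane_point, plane_normal, stiffness=800.0):
    """Penalty-based force for a virtual plane: when the haptic tool tip
    penetrates the surface, push back along the normal with a spring force
    F = k * penetration (Hooke's law); zero force above the surface.
    plane_normal is assumed to be unit length."""
    # Signed distance of the tool tip from the plane.
    d = sum((tool_pos[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    if d >= 0:                       # tip is on or above the surface
        return (0.0, 0.0, 0.0)
    penetration = -d
    return tuple(stiffness * penetration * n for n in plane_normal)
```

A haptic loop runs this kind of computation roughly a thousand times per second, which is why the haptic representation of the skull must stay simple even when the visual model is detailed.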

Connecting these immersive systems across high-speed networks enables teleimmersion, so users in different locations can collaborate among shared, virtual, and simulated environments [8]. Geographically scattered sculptors and surgeons can discuss implant design, surgical preplanning, and postoperative evaluation. Teleimmersion also provides educational opportunities, enabling instructors to interactively present modeling methods and techniques.

Previous work has explored both computer-aided cranial implant design and immersive modeling, but unifying these concepts into an immersive cranial implant design application has remained a challenge. Several previous methods successfully created cranial implants. One combines rapid prototyping and commercial modeling software [10]. Another utilizes skull symmetry and numerical analysis to provide solutions in certain cases [2]. UIC uses medical-modeling software to build the defect, but other projects use these tools to design the implant. In either case, the tools require complex interaction techniques. Moreover, users (such as modelers and sculptors) operate only within a 2D window. Because sculptors' anatomical training involves working clay and wax by hand, as at UIC, the virtual environment should support these natural techniques rather than interaction through flat 2D windows.

Modeling within an immersive environment involves special challenges. Programming tools often make it possible to create simple geometric shapes (such as cubes, spheres, and cylinders). Far more effort is required to sculpt 3D volumes, the digital equivalent of clay [3]. However, spatial input creates problems, since physical constraints usually don't limit user input movements. Force feedback addresses this weakness. Commercial products, including SensAble's FreeForm, have been used for medical modeling [6] and provide significant added value for medical sculptors compared to traditional CAD software. Integrating improved modeling tools with collaborative immersive environments may yield additional benefits to patients, sculptors, and surgeons.

Immersive displays can portray convincing 3D representations, but such environments are strictly virtual. Real-world objects are not readily integrated into virtual environments (see the article by Lok in this section). Augmented reality systems combine real and virtual information, allow real-time interactivity, and manage 3D registration [1]. Medical sculptors’ training focuses on their hands, which hold tools they use to sculpt the models. The awkward control devices used by traditional VR systems are not conducive to intricate sculpting techniques, as users cannot see their hands and the virtual object at the same time.

Our proposed solution involves enhancing VR so sculptors can work in familiar ways. Medical sculptors must be able to see their hands, and a new prototype display system we've been developing over the past several years, the Personal Augmented Reality Immersive System, or PARIS, fills this need. One of the first VR systems designed within VR, PARIS incorporates significant improvements over previous projection-based VR displays [5]. Previous systems, including the Cave Automatic Virtual Environment, or CAVE, and the ImmersaDesk, support 3D vision well, managing separate stereo images for each eye and tracking head motion. But two important depth-perception cues, occlusion and accommodation, are not supported correctly in these displays. For example, conventional projection-based VR displays project onto a screen behind the user's hand. As a result, the hand occludes the virtual object. The half-silvered mirror in PARIS superimposes the displayed image over the sculptor's hands, allowing the sculptor to see both at the same time. Accommodation refers to the eye muscles adjusting the lens to bring objects at different distances into focus. In a conventional projection-based VR display, the eye always focuses on the display screen, which is typically farther away than arm's reach. PARIS is designed so the user's hands and the virtual object lie at the same distance from the user's point of view as the image on the screen.

These projection improvements are particularly important for sculpting. The medical sculptor’s hands, the constructed implant, and the patient’s data are all in the same work area in PARIS. Adding haptics to an augmented reality environment requires aligning the graphic and haptic coordinate systems [11]. This calibration compellingly merges the visual environment with the force feedback felt by the sculptor.
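One way to perform such a calibration is to touch a few known points with the haptic stylus and solve for the transform that best maps haptic coordinates onto graphic coordinates. The sketch below assumes the two frames differ only by a uniform scale and a translation (no rotation), a deliberate simplification of the full registration described in [11]; the point values in the test are hypothetical.

```python
def calibrate(haptic_pts, graphic_pts):
    """Estimate a uniform scale s and translation t mapping haptic-space
    points onto their measured graphic-space counterparts by least squares.
    Assumes the two frames share the same axis orientation."""
    n = len(haptic_pts)
    ch = [sum(p[i] for p in haptic_pts) / n for i in range(3)]   # centroids
    cg = [sum(p[i] for p in graphic_pts) / n for i in range(3)]
    num = den = 0.0
    for h, g in zip(haptic_pts, graphic_pts):
        for i in range(3):
            num += (h[i] - ch[i]) * (g[i] - cg[i])
            den += (h[i] - ch[i]) ** 2
    s = num / den                                  # least-squares scale
    t = [cg[i] - s * ch[i] for i in range(3)]      # translation
    return s, t

def to_graphic(p, s, t):
    """Map a haptic-space point into graphic space."""
    return [s * p[i] + t[i] for i in range(3)]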

Sitting at PARIS, a medical sculptor interacts with the system in a manner similar to the methods pioneered at UIC in 1996 (see Figure 2). The first step involves scanning and saving patient and family photographs to the computer. The sculptor then converts the patient’s skull CT data into polygon geometry. The application loads all models and photographs, and the sculptor may then manipulate the skull to obtain the best view of the patient’s defect. A pencil tool creates 3D annotations and highlights the edge of the defect. This outline serves as input for calculating a defect model separate from the entire skull. The sculptor traces the defect edge interactively, so even irregularly shaped defect geometry may be extracted. Viewing the defect, the sculptor makes annotations indicating where on the skull bones to attach screws during the surgery. Feeling the surface of the defect allows the sculptor to determine how to sculpt the implant. Referring to the digitized photographs, the sculptor builds material into the defect model, gradually shaping the implant. When the work is complete, the application saves the model file to disk for evaluation and eventual fabrication.

Leveraging existing programming libraries allows software development to focus on the environment rather than on low-level implementation details. This system is being developed using the CAVE Library (CAVELib), Open Inventor, Kitware’s Visualization Toolkit (VTK), and SensAble Technologies’ General Haptics Open Software Toolkit (GHOST). Employing all these toolkits together maximizes their individual strengths. For example, CAVELib handles the head-tracked stereo view of the environment while ensuring compatibility with a large installed base of CAVE and ImmersaDesk VR systems. Open Inventor is a scene graph library optimized for interaction and extensibility. Its model format is human-readable and supported by many 3D modeling packages. VTK provides a comprehensive collection of visualization algorithms used to generate geometry. And GHOST provides the hardware communications for the PHANToM desktop device.

Traditionally, medical sculptors tape patient and family photographs to the wall over their desks for reference and as visual cues for guiding implant design. Rather than taping numerous photographs to their work area, sculptors using PARIS may instead load digital images directly into the virtual environment. Once loaded, these images may be moved freely throughout the environment. Transparent images may be positioned directly in relation to the CT data. Because it is not important for the user to feel the image planes, the images are manipulated using the tracked pointer rather than with the haptic device.


The graphics displayed on PARIS create the illusion of a 3D tool being held in the sculptor’s fingers.


While the photographs serve as auxiliary information, the patient's skull geometry is calculated from CT data and loaded into the environment. The medical sculptor requires an anatomically accurate model on which to base planning and design. Preprocessing converts the patient's skull CT data into Open Inventor geometry. Decimation, reducing the number of polygons in the mesh, simplifies the model so geometric complexity does not impede interactivity within the virtual environment. When the application starts, it automatically converts the simplified skull model to GHOST for haptic rendering. A sculptor thus sees the Open Inventor model while feeling the surface of the GHOST representation. Holding the PHANToM stylus in hand, the sculptor feels the surface of the skull model.
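To give a flavor of the decimation step, the sketch below implements vertex clustering: vertices are snapped to a coarse grid, vertices sharing a cell are merged, and triangles that collapse are discarded. This is a simpler stand-in for the edge-collapse style of decimation a toolkit like VTK actually performs, shown only to illustrate how polygon count drops while overall shape is preserved.

```python
def decimate(vertices, triangles, cell=1.0):
    """Vertex-clustering decimation: merge all vertices falling in the same
    grid cell into one representative vertex (the cell center) and drop
    triangles that become degenerate after merging."""
    key_to_index, new_vertices, remap = {}, [], []
    for v in vertices:
        key = tuple(int(c // cell) for c in v)      # grid cell of the vertex
        if key not in key_to_index:
            key_to_index[key] = len(new_vertices)
            new_vertices.append([(k + 0.5) * cell for k in key])
        remap.append(key_to_index[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:            # keep non-degenerate faces
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

Coarser grid cells merge more vertices and yield a lighter model; the cell size plays the role of the error tolerance a production decimator exposes.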

As the sculptor moves the stylus, the application tracks its position and automatically overlays a drawing tool. The graphics displayed on PARIS create the illusion of a 3D tool being held in the sculptor’s fingers. The images change appearance depending on which tool is active. A 3D pencil enables drawing lines within the environment, allowing sculptors to draw and write freely while indicating areas of interest in the design process; for example, a sculptor might note where errors have occurred during the CT scanning process.

Switching to the defect tool, the sculptor indicates the boundary of the cranial defect. Defect specification is an important part of the planning process, as the defect shape determines how the implant will attach to the skull during surgery. Guided by haptic feedback, the sculptor traces along that edge of the bone to indicate the boundary of the defect. A semitransparent sphere indicates the volume included as part of the defect. Processing the traced edge and the skull model with VTK, the application extracts the defect geometry from the rest of the skull. Unlike costly stereolithography fabrication, which can take up to eight hours, extracting a virtual defect takes only minutes. The visual and haptic feedback is immediate, and the sculptor can refine the selection to include more of the skull’s form, then begin designing the implant.
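The sphere-guided selection can be pictured as a distance test: every skull vertex lying within the brush radius of some point on the traced boundary path joins the defect region. The sketch below ignores mesh connectivity, which the real extraction with VTK would respect, and uses made-up coordinates in its usage.

```python
def select_defect(vertices, traced_path, radius):
    """Return the indices of skull vertices within `radius` of any sampled
    point on the traced boundary path, mimicking the semitransparent
    selection sphere swept along the defect edge."""
    r2 = radius * radius
    selected = []
    for i, v in enumerate(vertices):
        for p in traced_path:
            # Compare squared distances to avoid a sqrt per test.
            if sum((v[k] - p[k]) ** 2 for k in range(3)) <= r2:
                selected.append(i)
                break
    return selected
```

The selected vertices, together with the triangles they bound, would then be split off from the remaining skull geometry to form the standalone defect model.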

Feeling the defect edges, the sculptor sweeps through the missing space to generate the prototype implant's virtual geometry. VTK provides the volume-modeling capabilities used to approximate its shape. As the sculptor drags the PHANToM through space, VTK calculates a volume based on the tool's path (see Figure 3). Our current research focuses on refining the implant volume-sculpting process. Interaction should be as natural and fluid as possible for the medical sculptors, and networked collaboration should allow multiple users to provide feedback as they refine the implant model.
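The volume built up along the tool's path can be sketched as voxel accumulation: each sampled tool position fills every grid cell whose center lies within the tool radius, and the union of filled cells approximates the swept shape. This illustrates the idea only; it is not VTK's implicit-modeling pipeline, and the radius and cell size below are arbitrary.

```python
def sweep_volume(path, tool_radius, cell=0.5):
    """Accumulate a voxel volume as the tool sweeps through space: every
    grid cell whose center falls within `tool_radius` of a sampled tool
    position joins the implant volume. Returns the set of filled cells."""
    filled = set()
    n = int(tool_radius // cell) + 1        # cells to scan around the tool
    r2 = tool_radius * tool_radius
    for px, py, pz in path:
        cx = int(round(px / cell))
        cy = int(round(py / cell))
        cz = int(round(pz / cell))
        for i in range(cx - n, cx + n + 1):
            for j in range(cy - n, cy + n + 1):
                for k in range(cz - n, cz + n + 1):
                    d2 = ((i * cell - px) ** 2 + (j * cell - py) ** 2
                          + (k * cell - pz) ** 2)
                    if d2 <= r2:
                        filled.add((i, j, k))
    return filled
```

In practice the filled volume would be converted back to a smooth polygonal surface (for example, by extracting an isosurface) before display and haptic rendering.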

Further research over the next few years will streamline the implant-design process. Clusters of computers may allow partitioning the computational tasks across multiple machines. Networked consultation components could be applied to any area where medical professionals would benefit from a rich visualization environment combined with superior teleconferencing tools.

Neurosurgeons, radiologists, and paleontologists have all expressed interest in utilizing the system for consultation and planning cranial procedures, plastic surgery, facial reconstruction, and visualizing skull structures. Only with continued investigation can researchers determine the full scope of what this technology might bring to the cranial-implant design process.


Figures

F1 Figure 1. Starting with a stereolithography model (top), traditional medical sculptors use clay and wax (middle) to design the implant form. The sculptor then casts the model to obtain the resulting implant (bottom).

F2 Figure 2. A medical sculptor “touches” a cranial defect while sitting at PARIS. The stylus corresponds to a tool held in the right hand, while a virtual hand tracks the position of the left hand. A real skull model remains visible on the table.

F3 Figure 3. General work flow within the virtual environment. The sculptor interactively specifies the defect, isolates it from the rest of the skull, then begins to cover it with a medical-grade polymer.

UF1 Figure. The virtual environment tracks hand positions and updates medical slice images corresponding to the touch location indicated by the green sphere. 3D billboards display auxiliary patient information in images that may be positioned and arranged within the workspace. (Patient information images provided by GE Medical Systems; overall image by Electronic Visualization Laboratory, University of Illinois at Chicago.)

References

    1. Azuma, R. A survey of augmented reality. Presence: Teleoperators and Virtual Environments 6, 4 (Aug. 1997), 355–385.

    2. Burgert, O., Seifert, S., Salb, T., Gockel, T., Dillmann, R., Hassfeld, S., and Mühling, J. A VR-system supporting symmetry related cranio-maxillo-facial surgery. In Proceedings of Medicine Meets Virtual Reality (Newport Beach, CA). IOS Press, Amsterdam, The Netherlands, 2003, 33–35.

    3. Deisinger, J., Blach, R., Wesche, G., Breining, R., and Simon, A. Towards immersive modeling: Challenges and recommendations: A workshop analyzing the needs of designers. In Proceedings of the Eurographics Workshop on Virtual Environments (Amsterdam, The Netherlands, June 2000).

    4. Dujovny, M., Evenhouse, R., Agner, C., Charbel, F., Sadler, L., and McConathy, D. Preformed prosthesis from computed tomography data: Repair of large calvarial defects. In Calvarial and Dural Reconstruction, S. Rengachary and E. Benzel, Eds. American Association of Neurological Surgeons, Park Ridge, IL, 1999, 77–88.

    5. Johnson, A., Sandin, D., Dawe, G., DeFanti, T., Pape, D., Qiu, Z., Thongrong, S., and Plepys, D. Developing the PARIS: Using the CAVE to prototype a new VR display. In Proceedings of the ACM Symposium on Immersive Projection Technology (Ames, IA, 2000).

    6. Kling-Petersen, T. and Rydmark, M. Modeling and modification of medical 3D objects: The benefits of using a haptic modeling tool. In Proceedings of Medicine Meets Virtual Reality (Newport Beach, CA, 2000).

    7. Massie, T. A tangible goal for 3D modeling. IEEE Comput. Graph. Applic. (1998), 62–65.

    8. Park, K., Cho, Y., Krishnaprasad, N., Scharver, C., Lewis, M., and Leigh, J. CAVERNsoft G2: A toolkit for high-performance teleimmersive collaboration. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Seoul, Korea, 2000), 8–15.

    9. Salisbury, J. and Srinivasan, M. PHANToM-based haptic interaction with virtual graphics. IEEE Comput. Graph. Applic. 17 (1997), 6–10.

    10. Taha, F., Testelin, S., Lengele, B., and Boscherini, D. Case study: Modeling and design of a custom-made cranium implant for large skull reconstruction before a tumor removal. Materialise Medical, Leuven, Belgium, 2001; www.materialise.com/medical-rpmodels/casestudies_all_ENG.html.

    11. Vallino, J. and Brown, C. Haptics in augmented reality. In Proceedings of the IEEE International Conference on Multimedia Computing and Systems (Florence, Italy, 1999), 91–95.

    The Virtual Reality in Medicine Laboratory at the University of Illinois at Chicago is funded in part by the National Library of Medicine/National Institutes of Health contract N01-LM-3-3507. The Electronic Visualization Laboratory at UIC receives major funding for VR equipment acquisition and development and advanced networking research from National Science Foundation awards EIA-9871058, EIA-9802090, EIA-0224306, and ANI-0129527, as well as from the NSF Partnerships for Advanced Computational Infrastructure cooperative agreement (ACI-9619019) with the National Computational Science Alliance. Additional funding is provided by a subcontract to the Office of Naval Research through its Technology Research, Education, and Commercialization Center. Previously, EVL received funding for PARIS from the U.S. Department of Energy ASCI VIEWS program. PARIS is a trademark of the Board of Trustees of the University of Illinois.
