
Computer Games and Scientific Visualization

By Theresa-Marie Rhyne

Communications of the ACM, Vol. 45 No. 7, Pages 40-44
10.1145/514236.514261



Imagine an aviation meteorologist running climate and weather-forecasting computations, generating visualizations for airline pilots, on a Microsoft Xbox. Then imagine a pilot getting into a plane and flying a route covered by the weather report. Would you knowingly buy a ticket on the same flight? How about surgeons performing remote liver-transplant surgery via Sony PlayStation 2? Would you, as a patient, willingly hop onto the operating table? How much trust might a computational chemist have visually exploring the molecular structure of the anthrax bacterium on a Nintendo GameCube? Could a drug manufacturer use the information to design an effective FDA-approvable antidote? Could the future platform for genome sequencing, used, say, to deliver personalized medical evaluations, be a wireless mobile phone enhanced with graphics acceleration and a micro-haptic interface?

Figure. Mixed-reality simulation from FlatWorld, merging cinematic stagecraft techniques, including "digital flats," or rear-projection walls, with physical props and stereoscopic graphics and immersive audio, developed by the University of Southern California's Institute for Creative Technologies (Tu Le, David Benjamin, and Jarrell Pair).

Figure. Scene from the DarkCon scenario, part of the Sensory Environments Evaluation project, which seeks knowledge retention through a virtual learning experience evoking an emotional response from trainees preparing for real-world peacekeeping and reconnaissance missions. (Institute for Creative Technologies, University of Southern California, in cooperation with the U.S. Army.)

How likely is it that scientifically reliable visualizations will ever be performed on computer game consoles or wireless PDAs? How likely is it that visualization engineers and mainstream scientists might one day use some video game animation rendering, navigation, or interface innovation to cut time and costs from their own development efforts?

Even if such scenarios and speculation seem farfetched today, especially if you happen to be a non-computer scientist like an astronomer, biologist, chemist, or physicist, game designers and software developers increasingly have much to teach the developers of graphics software used for scientifically reliable visualization images and animations.

In light of recent advances in game hardware, software, image rendering, and virtual environments, it really is not too difficult to appreciate the promise of these scenarios. For example, the Institute for Creative Technologies, a program begun in 1999 by the U.S. Department of Defense (U.S. Army Simulation, Training, and Instrumentation Command) to tap the resources and talents of the entertainment and game development industries, jointly operated with the University of Southern California, is developing combat video games to enhance the strategic, combat, and decision-making skills of next-generation military field commanders [5, 6]. One potentially commercial game result, called C-Force, could be released as soon as next year on popular consoles, including the Xbox and PlayStation 2, allowing armchair generals to test their instincts and judgment as squad leaders under challenging true-to-life situations in the field and under fire (see www.ict.usc.edu).

The U.S. Department of Energy funds visualization research on the use of cluster graphics systems incorporating standard commercial PCs with graphics cards from such vendors as ATI and nVidia. Many of these components were optimized to address consumer interest in computer games, rather than scientific computing requirements. However, clusters of PC nodes, many including 3D hardware acceleration based on the nVidia Linux drivers, are being constructed at supercomputing centers and government laboratories worldwide. As part of this effort, WireGL, a package developed at the Computer Graphics Laboratory at Stanford University, runs on off-the-shelf PCs connected to a high-speed network to facilitate the simultaneous rendering of time-varying scientific visualizations; WireGL is based on OpenGL, a de facto standard graphics programming language that originated and evolved at SGI (see graphics.stanford.edu/software/wiregl) [4]. (The Chromium Project is the next-generation open-source version of WireGL; see sourceforge.net/projects/chromium/.)
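The tiled-rendering idea behind WireGL can be sketched without any graphics hardware. The Python toy below is purely illustrative (none of these names come from the WireGL API): the "display" is split into tiles, each tile is rendered independently, as a cluster node would render it, and the tiles are stitched back into one frame. The real system instead partitions a stream of OpenGL commands across networked nodes, but the decomposition principle is the same.

```python
TILE = 4                      # tile edge length in "pixels"
W, H = 8, 8                   # full display resolution

def render(points, x0, y0, x1, y1):
    """Render only the points that fall inside the given tile bounds."""
    tile = [[0] * (x1 - x0) for _ in range(y1 - y0)]
    for (x, y) in points:
        if x0 <= x < x1 and y0 <= y < y1:
            tile[y - y0][x - x0] = 1
    return tile

def tiled_frame(points):
    """Each 'node' renders its own tile; tiles are stitched into one frame."""
    frame = [[0] * W for _ in range(H)]
    for ty in range(0, H, TILE):
        for tx in range(0, W, TILE):
            tile = render(points, tx, ty, tx + TILE, ty + TILE)
            for dy, row in enumerate(tile):
                frame[ty + dy][tx:tx + TILE] = row
    return frame

scene = [(1, 1), (5, 2), (7, 7)]
frame = tiled_frame(scene)    # identical to rendering the scene in one pass
```

Because each tile touches only the primitives overlapping it, the work divides naturally across cluster nodes, which is precisely what makes commodity game-oriented PCs viable for large scientific displays.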

Scientific visualization researchers are redesigning and retrofitting their software environments for cluster computing. This means having to deal with PC hardware platforms and graphics acceleration optimized for computer games. The Java-based VisAD scientific visualization tool works on a cluster of Linux computers (see www.ssec.wisc.edu/~billh/visad.html) [2]. Researchers at the Council for the Central Laboratory of the Research Councils, Rutherford Appleton Laboratories in Oxfordshire, U.K., run VisAD on a Compaq Computer iPAQ pocket PC (see www.unidata.ucar.edu/staff/russ/visad/msg02588.html). Other scientific visualization tool developers are also porting their tools to cluster computers and even to mobile PDA environments. The market dynamics of computer game applications are thus influencing computer architectures historically associated with scientific visualization.

Theoretically, genome sequencing could be executed on Web-enabled mobile phones running software like the Basic Local Alignment Search Tool (BLAST), a set of search programs designed to explore all available sequence databases regardless of whether the query is protein or DNA (see www.ncbi.nlm.nih.gov/BLAST/blast_overview.html). BLAST was developed at the National Center for Biotechnology Information (part of the U.S. National Institutes of Health in Bethesda, MD) and is accessible to genomic researchers worldwide. In the future, this access could easily be performed via PDA or mobile phone.
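The heuristic at BLAST's core is compact enough to illustrate in a few lines, which is part of why such searches are plausible on modest hardware. The Python sketch below is a toy seed-and-extend matcher, not the NCBI implementation: it indexes every k-letter "word" of a database sequence, looks up the query's words, and extends exact seed hits in both directions while the characters match. The sequences and all function names are illustrative; real BLAST adds scoring matrices, neighborhood words, and statistical significance.

```python
K = 3  # word (k-mer) length; illustrative only

def index_words(seq, k=K):
    """Map every k-letter word of the database sequence to its positions."""
    words = {}
    for i in range(len(seq) - k + 1):
        words.setdefault(seq[i:i + k], []).append(i)
    return words

def seed_and_extend(query, db, k=K):
    """Return a set of (query_start, db_start, length) extended seed hits."""
    table = index_words(db, k)
    hits = set()
    for qi in range(len(query) - k + 1):
        for di in table.get(query[qi:qi + k], []):
            # extend the exact seed leftward while characters agree
            ql, dl = qi, di
            while ql > 0 and dl > 0 and query[ql - 1] == db[dl - 1]:
                ql -= 1; dl -= 1
            # then rightward
            qr, dr = qi + k, di + k
            while qr < len(query) and dr < len(db) and query[qr] == db[dr]:
                qr += 1; dr += 1
            hits.add((ql, dl, qr - ql))
    return hits

hits = seed_and_extend("ACGTTGCA", "TTACGTTGCAAT")
```

Indexing makes each query word a dictionary lookup rather than a database scan, which is the trade-off that lets sequence search run interactively even on thin clients.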

SGI, a traditional graphics workstation vendor, recently launched initiatives reflecting the need for universal access to visualization capabilities from networked computing devices [7]. For example, its OpenGL Vizserver, a software framework for delivering compressed images of 3D data sets, aims to let client-independent devices, even laptop computers, interact with visualization supercomputers and collaborative communities of networked devices. Eventually, PDAs and other thin clients will be included.


The Limits

Real-world performance and data reliability are serious concerns in scientific computing. Computer game developers do not necessarily share such concerns. Visualization researchers are especially aware that the short release cycles characterizing the computer game market often result in incomplete and even unstable graphics drivers. For scientific research, where projects are routinely conducted over many years, instability is more than an annoyance; it can lead to unreliable results and unsafe dependencies, and may lead scientists to produce, or trust, flawed and even dangerous conclusions. Scientific visualization requires programming via stable application programming interfaces (APIs), as well as interprocessor communication and accurate multivariate data. Still, computer games and visual simulations alike tend to be designed around the notion of challenging human performance through the rapid rendering of virtual worlds.

The data accuracy and scientific reliability of virtual physical interaction in these virtual worlds is suspended for the sake of producing entertainment value and a convincing experience. For example, a first-person shooter game like Quake is rarely concerned with the physical properties of its weapons, such as the temperature of the metal of a particular gun barrel or trigger mechanism. A scientific visualization, on the other hand, might focus exclusively on metal fatigue under certain extreme temperature conditions.

Meanwhile, game developers readily acknowledge they cut corners through polygon reduction of terrain data sets in order to generate interactive flybys of pseudo-geographic regions. Now imagine eliminating a few polygons in the digital terrain model representing the Grand Canyon, then giving the model to geologists looking for a particular formation. Shortcuts in the rendering software to produce a more engaging experience for the user might work well in a game, but geologists using the same digital terrain data in a visual simulation of fault structures are unlikely to trust what they're seeing or be able to apply it on a real-life scientific mission.
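The polygon reduction described above can be illustrated on a one-dimensional terrain profile. The Python sketch below is a hypothetical, deliberately simplified decimator: it drops any sample whose elevation is close to a linear interpolation across it, so flat stretches collapse while sharp features survive. Production game engines operate on full triangle meshes with schemes such as edge collapse, but the fidelity trade-off is the same one the geologists in the example would object to.

```python
def decimate(profile, tol):
    """Drop (x, z) samples that lie within tol of a straight line
    between the last kept sample and the next sample."""
    kept = [profile[0]]
    for i in range(1, len(profile) - 1):
        (x0, z0), (x1, z1), (x2, z2) = kept[-1], profile[i], profile[i + 1]
        # elevation predicted by interpolating straight across this sample
        t = (x1 - x0) / (x2 - x0)
        predicted = z0 + t * (z2 - z0)
        if abs(z1 - predicted) > tol:   # keep only "surprising" samples
            kept.append(profile[i])
    kept.append(profile[-1])
    return kept

# A gentle slope with one sharp ridge: the slope collapses, the ridge stays.
profile = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 3.0),
           (4, 10.0), (5, 5.0), (6, 6.0)]
coarse = decimate(profile, tol=0.5)
```

Lowering `tol` toward zero recovers the full model; raising it sheds more polygons, and more scientific fidelity, which is exactly the corner a game can afford to cut and a fault-structure simulation cannot.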

Clearly, computer-game-driven enhancements to major APIs like OpenGL and Microsoft's DirectX might alter or adversely affect scientific visualization requirements. Thus, both the scientific visualization and computer game developer communities need to improve their understanding of each other's needs. For scientific computing, the challenge is how to reach out to the game industry.


Looking Back

In 2001, the U.S. video- and computer-game market generated $9.4 billion in sales, eclipsing for the first time U.S. movie theater sales, which totaled $8.35 billion [8]. The enormous and growing market for computer games worldwide increasingly drives mass-market, as well as niche, graphics hardware and software development directions. When it comes to graphics applications, game-related developments increasingly influence their counterpart directions in the graphics workstation and visualization software markets.


Visualization researchers are especially aware that the short release cycles of the computer game market often result in incomplete and even unstable graphics drivers.


For many scientists who have long worked in high-end visual simulation, computational science, and visualization, looking to computer and video game software interfaces, navigation schemes, and plot lines is an alien, even disturbing, way of thinking. Over the past 20 years, since the advent of the earliest computer graphics software and related industrial and military applications, companies like Evans & Sutherland, Hewlett-Packard, IBM, and SGI built workstations and developed APIs that purposely addressed flight simulation requirements for the most rigorous, data-dependent military applications and the visual exploration of complex scientific computations. OpenGL and other APIs evolved in this environment. These workstation platforms were expensive and almost always located in elite supercomputing centers and government laboratories.

An early indication of the shift from workstations to PCs in visual simulation and scientific visualization was the panel session at the 1996 SIGGRAPH conference in New Orleans called "Graphics PCs Will Put Workstation Graphics in the Smithsonian." The panel's organizer introduced the discussion by saying, "The panelists will argue whether this development [of inexpensive hardware for graphics applications] spells the end of graphics workstations as we have known them. Calligraphic displays were supplanted by raster frame buffers; workstations with internal graphics replaced minicomputers with frame buffers. Has the next transition arrived? Should we fight the tide or hail the conquerors?" [9]. The answer six years later is that the PC is the dominant platform.

Today, while SGI workstations are still prized possessions in many government research centers, many of them have indeed been replaced by PCs with ATI and nVidia graphics cards.

There has also been a shift in the prominence of scientific visualization, virtual reality, and visual simulation as major sources of high-end computer graphics innovation. Meanwhile, the video- and computer-game market has exploded, generating ever-increasing revenue. Games now represent the leading force in the market for interactive consumer graphics. Not surprisingly, the graphics hardware vendors tend to anticipate the needs of game developers first, expecting scientific visualization requirements to be addressed in the process.


Handhelds Next?

These trends point toward computer graphics delivered through streaming media and wireless environments. Game developers today are pondering how to create titles for wireless and mobile devices (see the figure), transitioning from PCs and console platforms. Streaming audio with file formats like MP3 is pervasive in both the office and consumer markets on all sorts of handheld gadgets. Online advertisers and instructional designers increasingly recognize the importance of being able to communicate material that is accessible through and displayed on PDAs. Similar efforts are under way to deliver scientific-quality computer graphics imagery on PDAs and mobile phones.

Within the next few years, we are likely to see the retrofitting of visualization and virtual reality tools and software for computer game consoles. High-speed collaborative computing will address wireless connectivity and the rendering of computed imagery on small screens [3].

Visualization software developers will have vast new hardware territories to explore beyond traditional workstations and PCs in the form of game consoles and mobile handheld devices used by tens of millions. The spread of visualizations and scientifically reliable images beyond the lab and into our pockets is all part of the process of the visualization-in-scientific-computing community finding its place in a new computer graphics universe. That universe is, however, increasingly charted by developers of game consoles and wireless, handheld mobile computing devices.

Figure. Another scene from DarkCon, this one helping participants remember their missions via "memory briefings," here juxtaposing an Army sergeant with images of a war-torn area.


References

1. Altschul, S., Gish, W., Miller, W., Myers, E., and Lipman, D. Basic local alignment search tool. J. Molec. Biol. 215, 3 (Oct. 5, 1990), 403–410.

2. Hibbard, W. VisAD: Connecting people to computations and people to people. Comput. Graph. 32, 3 (Aug. 1998), 10–12.

3. Höllerer, T., Feiner, S., and Pavlik, J. Situated documentaries: Embedding multimedia presentations in the real world. In Proceedings of the 3rd International Symposium on Wearable Computers (ISWC'99) (San Francisco, Oct. 18–19), 79–86.

4. Humphreys, G. and Hanrahan, P. A distributed graphics system for large tiled displays. In Proceedings of the IEEE Visualization Conference 1999 (San Francisco, Oct. 24–29). IEEE Computer Society Press, Los Alamitos, CA, 1999, 215–223.

5. Kumagai, J. Fighting in the streets. IEEE Spectrum 38, 2 (Feb. 2001), 68–71.

6. Shachtman, N. New army soldiers: Game gamers. Wired News (Oct. 29, 2001).

7. SGI. SGI launches a new era for graphics: Visual area networking (Jan. 29, 2002); see www.sgi.com/newsroom/press_releases/2002/january/van.html.

8. Tran, K. U.S. video-game industry posts record sales. Wall Street Journal (Feb. 7, 2002), B5.

9. Uselton, S., Cox, M., Deering, M., Torborg, J., and Akeley, K. Graphics PCs will put workstation graphics in the Smithsonian (panel statement). In Proceedings of SIGGRAPH'96 (New Orleans, LA, Aug. 4–9). ACM Press, New York, 1996, 506.


Author

Theresa-Marie Rhyne ([email protected]) is a multimedia/visualization expert in the Distance Education and Learning Technology Applications department at North Carolina State University, Raleigh.


Figures



Figure. NDL Hurricane 3D graphics engine running a walk-through visualization of a virtual city center on a next-generation Sendo cell phone running Microsoft's Smartphone 2002 operating system (Numerical Design, Ltd., Chapel Hill, NC).




©2002 ACM  0002-0782/02/0700  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2002 ACM, Inc.
