U.S. Department of Energy (DOE) researchers recently ran a series of tests to see whether the VisIt visualization application could extract scientific insight from massive datasets. Visualization researchers from Lawrence Berkeley National Laboratory (Berkeley Lab), Lawrence Livermore National Laboratory, and Oak Ridge National Laboratory (ORNL) ran the application using 8,000 to 32,000 processing cores to manage datasets ranging from 500 billion to 2 trillion grid points. The researchers confirmed that VisIt could leverage the growing population of cores powering the world's most advanced supercomputers to address problems of unprecedented proportions.
To run these tests, the researchers began with astrophysics simulation data and then expanded it to generate a sample scientific dataset at the desired dimensions. This strategy was chosen because the data sizes reflect anticipated future problem sizes, and because the main goal of the experiments was to better understand the problems and limitations that might be confronted at extreme levels of concurrency and data size. "These results are the largest-ever problem sizes and the largest degree of concurrency ever attempted within the DOE visualization research community," says Berkeley Lab's E. Wes Bethel.
ORNL researcher Sean Ahern says the degree of grid resolution created for the experiments is expected to become prevalent in the near future. Another objective of the experiments was to help establish VisIt's credentials as a Joule code, one that has demonstrated scalability across a large number of cores. A series of such codes is being set up by DOE's Office of Advanced Scientific Computing Research to serve as a metric for tracking code performance and scalability as supercomputers are built with extremely high numbers of processor cores.
From HPC Wire
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA