
Exascale Computing: The View From Argonne

By HPC Wire

June 22, 2012


In an interview, U.S. Argonne National Laboratory directors Rick Stevens, Michael Papka, and Marc Snir contextualize the challenges and advantages of developing exascale supercomputing systems.

Snir stresses that an exascale system cannot be built simply by stitching together many petascale computers, and argues that exascale computing is needed to build models complex enough to match hypotheses to evidence in increasingly complex systems. "As we transition to the exascale era the hierarchy of systems will largely remain intact, so the advances needed for exascale will influence petascale resources and so on down through the computing space," Papka says.

Snir anticipates a 10-year window for exascale system deployment at best, and he notes that Argonne "is heavily involved in exascale research, from architecture, through operating systems, runtime, storage, languages and libraries, to algorithms and application codes."

Papka says the U.S. Department of Energy exascale initiative opted for a development approach emphasizing co-design to ensure that the delivered exascale resources fulfill the requirements of the domain researchers and their applications. Stevens agrees that "we will not reach exascale in the near term without an aggressive co-design process that makes visible to the whole team the costs and benefits of each set of decisions on the architecture, software stack, and algorithms."


Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA

