
Supercomputing’s Super Energy Needs, and What to Do About Them

[Image: a green supercomputer (actually, IBM's Blue Gene supercomputer tinted green).]
Supercomputers, which increasingly require as much power as small cities, must get their energy requirements under control.

What does Tupelo, Mississippi, have in common with the world’s largest supercomputer?

The answer has nothing to do with Elvis Presley, who was born in Tupelo, the state’s seventh-largest city with a population today approaching 36,000.

The correct response relates to electricity; specifically, 17.8 megawatts (MW) of power. That’s how much generating capacity it takes to run Tianhe-2, the 33.9-petaflop, 3.12-million processor, world-beating Intel machine at China’s National Supercomputer Center in Guangzhou, according to the Top500 ranking.

It is also roughly the amount of power required to supply electricity to Tupelo’s 13,501 households, using a U.S. Energy Information Administration yardstick of one megawatt for every 803 households. Bentonville, AR; Beloit, WI; and Spartanburg, SC, would all have similar requirements, going by U.S. Census Bureau figures.
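The household conversion is easy to check; here is the back-of-the-envelope arithmetic as a quick sketch, using only the figures cited above:

```python
# Rough check of the household equivalence cited above.
# EIA yardstick: one megawatt serves about 803 households.
tianhe2_mw = 17.8
households_per_mw = 803

print(round(tianhe2_mw * households_per_mw))  # 14293: in the same ballpark
                                              # as Tupelo's 13,501 households
```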

The point: supercomputers and their whopping intelligence might be able to save the world (or destroy it, depending on one’s view), but as their performance continues to muscle up, their power requirements are coming to rival those of a small city.

As they head inexorably toward much greater processing power—the U.S. plans to operate three supercomputers that each exceed 100 petaflops by 2018, and to make a quantum leap to an exaflop (1,000 petaflops) by 2023—something has to be done. A 1,000-petaflop machine using Tianhe-2’s roughly 2-to-1 performance-to-power ratio (33.9 petaflops, 17.8 megawatts) would need to have 500 megawatts standing by—nearly the total output of an average-size U.S. coal plant, and enough electricity to power all the households in, say, San Francisco.

Given the parlous state of CO2-induced climate change, that would be too high an environmental price. So, too, would the operating costs: a common rule of thumb holds that each megawatt costs $1 million a year, so a computer dependent on 500 MW would come with an annual electricity bill of $500 million. That is a deal killer, not to mention a figure that far outpaces the typical $200-million cost of the hardware itself.
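Both projections follow directly from the numbers already quoted; a minimal sketch of the same arithmetic:

```python
# Extrapolating Tianhe-2's efficiency to an exaflop (1,000-petaflop) machine.
tianhe2_petaflops = 33.9
tianhe2_megawatts = 17.8

mw_needed = 1000.0 * (tianhe2_megawatts / tianhe2_petaflops)
print(f"{mw_needed:.0f} MW")  # ~525 MW: the "roughly 500 megawatts" above

# Rule of thumb from above: each megawatt costs about $1 million per year.
print(f"${mw_needed:.0f} million per year")  # a bill around half a billion dollars
```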

So a gauntlet has been thrown down to the industry.

"We have to get this huge increase in computational power, with only a tiny increase in electric power," says John West, director of strategic initiatives for the Texas Advanced Computing Center (TACC) at the University of Texas in Austin.

It is a stark reality, and one not lost on the U.S. government, which has prescribed around 10 megawatts for each of the three 100-petaflop machines it wants to begin installing at three Department of Energy (DoE) national laboratories in 2017, according to University of Tennessee professor of electrical engineering Jack Dongarra, one of the compilers of the Top500 list. That is a performance-to-power ratio of 10-to-1, compared to Tianhe-2’s 2-to-1. DoE wants to raise the bar sky-high on 2023’s 1,000-petaflop machine, to 20 MW, or a ratio of 50-to-1.

Luckily, the industry has made some impressive strides in energy efficiency recently.

The top three machines on the July installment of the twice-yearly Green500 ranking of supercomputers by energy efficiency all shattered the previous best performance-to-power ratio, which the Green500 measures in gigaflops per watt.

All three were in Japan, built jointly by two Japanese firms, chipmaker PEZY and ExaScaler. The leading machine, called Shoubu and based at Japan’s Institute of Physical and Chemical Research (RIKEN), weighed in at 7.03 gigaflops per watt (a 7-to-1 ratio, compared to Tianhe-2’s 2-to-1). Suiren Blue and Suiren, both at Japan’s High Energy Accelerator Research Organization (KEK), scored 6.84 and 6.22, respectively. The previous high was 5.27.
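Usefully, petaflops per megawatt and gigaflops per watt are the same unit, so the Green500 figures can be compared directly with the ratios above; a quick sketch using only numbers cited in this article:

```python
# Petaflops per megawatt equals gigaflops per watt (1e15 / 1e6 = 1e9),
# so every figure in this article reduces to the same metric.
def gigaflops_per_watt(petaflops, megawatts):
    return petaflops / megawatts

print(gigaflops_per_watt(33.9, 17.8))    # Tianhe-2: ~1.9, the "2-to-1" ratio
print(gigaflops_per_watt(100.0, 10.0))   # DoE's 2017 target: 10.0
print(gigaflops_per_watt(1000.0, 20.0))  # DoE's exascale target: 50.0
# Shoubu's measured 7.03 sits between Tianhe-2 and the 2017 target.
```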

Green500 list founder Wu-chun Feng, professor of computer science and electrical and computer engineering at Virginia Polytechnic Institute and State University, credits a lot of the improvement in Japan to a PEZY processor advance. "They squeeze 1,024 CPU-like brains onto a little chip that only consumes about 80 watts of power," Feng notes.

However, all three were well down the Top500 list of outright performers released in June, where Shoubu, with its 788,000 processors, Suiren Blue, and Suiren ranked 160th, 392nd, and 366th, respectively.

Not long ago, the most energy-efficient computers were typically among the most powerful. That is no longer the case.

Watch this space. A number of technologies are hatching that could lead to the Promised Land of 20-MW exascale computing.

While many ideas reside in new hardware designs, developers like Feng also have some clever software tricks up their sleeves.

Feng’s Virginia Tech team is working with Lawrence Livermore National Laboratory on software that would improve the coordination between the hundreds of thousands of processors in a supercomputer, a job that today burns a lot of energy as the processors flail away.

"We’re automatically trying to map the right task to the right processor at the right time," says Feng, who calls the system CoreTSAR (for Task-Size Adapting Runtime). "It’s just like what people do neurologically; they automatically map. We effectively want to mimic that."

Other groups working on similar software include the French Institute for Research in Computer Science and Automation (INRIA) and the Barcelona Supercomputing Center (BSC).

Feng also encourages hardware vendors to improve their "toolbox" of processors, in which different processors serve different specialized tasks and all of them work together in a manner finely tuned by software such as CoreTSAR.

On the hardware side, Tennessee’s Dongarra is looking for big reductions in the energy required to move information out of a supercomputer’s memory and into its processors. "Data movement is one of the most expensive things on these computers," Dongarra says. As vendors replace 2D memory chips located centimeters (a long distance in computing design) away from processors with 3D chips placed closer to the processing unit or "stacked" on top of it, energy consumption will decline. Companies including Intel, Nvidia, and AMD are all working on 3D stacked memory.
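To see why, consider an order-of-magnitude sketch. The per-operation energies below are assumed, commonly cited ballpark figures, not numbers from this article or measurements of any particular machine:

```python
# Illustrative energy budget for a memory-bound kernel. The constants are
# assumed order-of-magnitude values, not measured figures.
PJ = 1e-12                 # picojoules
E_FLOP = 20 * PJ           # one double-precision arithmetic operation
E_DRAM = 1300 * PJ         # fetching one 64-bit word from off-chip DRAM

ops = 1e9                  # a kernel doing one flop per word fetched
compute_j = ops * E_FLOP
movement_j = ops * E_DRAM
print(movement_j / compute_j)  # 65.0: moving the data costs ~65x the math

# Halving the per-access energy (e.g., memory stacked on the processor)
# cuts total energy by ~49%; halving the arithmetic energy saves under 1%.
```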

Further out in the future, optical interconnects will likewise replace copper and slash energy consumption even more.

Some supercomputer makers are cutting energy consumption by immersing circuit boards in oil, a technique known as liquid immersion cooling, which slashes the amount of electricity required to power cooling fans. The three PEZY computers atop the Green500 list are all believed to use liquid immersion cooling.

"Air is a very poor way of removing heat, and oil is a much better way of doing it," notes Dongarra. "But it has some limitations. You have your machine soaking in oil, and if you have to replace something, you have to undo that; you’ve got a mess. You’ve got to protect things against fire hazards and all sorts of other things that compound it. If I have a $200-million machine, it will not be immersed in oil, I can guarantee that."

Whether the gains come from oil, software, or hardware, TACC’s West notes, it is incumbent upon the industry to make energy strides.

"If we can’t keep the power requirements down to something economically reasonable, there will only be one or two (supercomputers) in the whole world, and they’d probably be devoted to weapons development," says West, who is confident the ongoing process will keep supercomputing power aimed at developing things beneficial to society, such as health and drug technologies, science leadership, cars, power sources, and much more. "The core motivation is that we have to continue to make computing better so that the world becomes a better place."

From Tupelo to Guangzhou, one might hope. 

Mark Halper is a freelance journalist based near Bristol, England. He covers everything from media moguls to subatomic particles.
