Supercomputers have long been used to predict how climate change will affect the Earth, but they use a lot of energy and generate a lot of heat in the process. I suppose it seemed a bit hypocritical to the guys at the Department of Energy’s Lawrence Berkeley National Laboratory when they realized that the type of in-depth climate change model they wanted to build would require a supercomputer drawing 200 megawatts — enough power for a city of 100,000 residents.

Yet a computer that could model cloud formation at a 1-kilometer scale could generate real breakthroughs in climatological understanding, even if it would need to be 1,000 times more powerful than existing computers and would cost about $1 billion to build. So the Berkeley Lab guys took the current supercomputing architecture — which essentially places a lot of x86 processors in a box — and dumped it for specially designed embedded processors that can be connected to create a more power-efficient supercomputer.

They teamed up with Tensilica to design a supercomputer made up of tens of thousands of specialized processors that (if all goes as planned) should cut total power usage to 4 MW. Tensilica processors are customized and embedded in devices such as mobile phones, digital cameras, specialized routers and other products that require a chip to do one thing. The singular purpose of these custom chips translates into power savings, but they are typically more expensive than the all-purpose variety.

However, Chris Rowen, president and CEO of Tensilica, says the cost of building such a supercomputer should still be “orders of magnitude” less than the estimated $1 billion for an x86 system. Part of the reason is that Tensilica should be able to cram a lot more processors onto a chip and into a box, saving on hardware costs.
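
For a sense of scale, here's a quick back-of-envelope sketch in Python using only the power figures quoted above; the electricity price is my own illustrative assumption, not a number from the lab or Tensilica.

    # Back-of-envelope comparison of the two designs, using the power
    # figures quoted in this post. The electricity price is an assumed,
    # illustrative value, not a figure from Berkeley Lab or Tensilica.
    X86_POWER_MW = 200      # projected draw of the conventional x86 design
    EMBEDDED_POWER_MW = 4   # Berkeley Lab/Tensilica target
    PRICE_PER_MWH = 60.0    # assumed wholesale electricity price, in dollars

    HOURS_PER_YEAR = 24 * 365

    def annual_power_bill(power_mw):
        """Yearly electricity cost, in dollars, for a machine running flat out."""
        return power_mw * HOURS_PER_YEAR * PRICE_PER_MWH

    print(f"x86 design:      ${annual_power_bill(X86_POWER_MW) / 1e6:.0f}M per year")
    print(f"Embedded design: ${annual_power_bill(EMBEDDED_POWER_MW) / 1e6:.1f}M per year")
    print(f"Power reduction: {X86_POWER_MW / EMBEDDED_POWER_MW:.0f}x")

Even before any hardware savings, a 50x cut in power draw takes the annual electricity bill from roughly $105 million to about $2 million at that assumed rate.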

This is all at the research stage right now, but success could lead not only to better climate modeling that uses less energy, but also to more efficient data centers, if specialized CPUs can boost speeds while cutting power costs.

By Stacey Higginbotham

5 Comments

  1. Could Climate Change Lead to Computing Change? – GigaOM Tuesday, May 20, 2008

    [...] an energy-efficient (relatively) supercomputer that could run at speeds of up to 200 petaflops over at Earth2Tech. The Department of Energy’s Lawrence Berkeley National Laboratory has signed a partnership [...]

  2. Environmental Capital – WSJ.com : Green Ink: Another Day, Another Oil Record Wednesday, May 21, 2008

    [...] and power-hungry supercomputers, which contribute to what they’re fighting. Earth2Tech looks at a supercomputing revolution that could lead to cleaner data centers. Permalink | Trackback URL: [...]

  3. Supercomputers Get Eco-Aware « Earth2Tech Wednesday, June 18, 2008

    [...] at the Department of Energy’s Lawrence Berkeley National Lab are hoping to figure out a more energy efficient way to build a supercomputer using embedded [...]

  4. Meteorologists Cut to the Chase: It’s Time to Defend Against Climate Change « Earth2Tech Friday, August 22, 2008

    [...] supercomputer capacity for more detailed modeling and predictive [...]

  5. Cisco’s New Router Shows Need for New Processors – GigaOM Tuesday, November 11, 2008

    [...] purpose CPU or even a graphics processor that might also be used for the task. Scientists at the Lawrence Berkeley National Lab are even using the Tensilica cores to try to build an energy-efficient supercomputer. In a connected world where devices have to do [...]
