
Cray's Jaguar Supercomputer

In data centers and on home PCs, the performance race has essentially stalled. Most people no longer focus on how fast a processor — or multiple processors — run, because processor performance isn’t as much of a bottleneck as it used to be. Unlike those running web servers or desktop applications, however, those in the high-performance computing sector haven’t been as eager to give up on performance gains. Scientists and researchers are still trying to crunch huge data sets quickly and break the exaflop barrier.

But the emphasis on performance at all costs in the HPC sector may be changing, brought about by power concerns and the ability to run high-performance computing jobs on publicly available clouds such as Amazon’s EC2. While performance is still the key, in June last year the TOP500 list of the world’s most powerful supercomputers also began tracking how much performance those supercomputers generated per watt — a measure of energy efficiency. A month earlier researchers at the Department of Energy’s Lawrence Berkeley National Laboratory proposed building a new type of supercomputer to model climate change, one that would use less-powerful processors that consumed less energy.

Currently the average power efficiency of a top 10 supercomputer is 280 megaflops/watt, up from 228 megaflops/watt six months ago. Average power consumption of a TOP500 system is 386 kilowatts, and average power efficiency is 150 megaflops/watt. We'll see how these numbers change over time.
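
As a quick sanity check, those two averages together imply the average performance of a TOP500 system: multiply power draw by efficiency and you get roughly 58 teraflops. A minimal back-of-the-envelope sketch, using only the list averages quoted above:

```python
# Back-of-the-envelope check on the TOP500 averages quoted above.
avg_power_kw = 386             # average power consumption, in kilowatts
avg_efficiency_mflops_w = 150  # average efficiency, in megaflops per watt

# watts * (megaflops/watt) = megaflops; divide by 1e6 for teraflops
avg_performance_tflops = avg_power_kw * 1_000 * avg_efficiency_mflops_w / 1_000_000
print(f"Implied average TOP500 performance: {avg_performance_tflops:.1f} teraflops")
# -> Implied average TOP500 performance: 57.9 teraflops
```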

On the cloud side, I’ve heard Werner Vogels, the CTO of Amazon, tout EC2 as a fine place to run high-performance computing jobs. But I’ve also heard contrary opinions from the folks who actually run supercomputers. And thanks to John West over at InsideHPC, I read a blog post by Ian Foster, associate division director for Mathematics and Computer Science at Argonne National Laboratory, who posited that perhaps the less-than-stellar performance of a public cloud isn’t such a bad thing, since the researcher gets access to it right away. Foster writes:

However, before we conclude that EC2 is no good for science, I’d like to suggest that we consider the following question: what if I don’t care how fast my programs run, I simply want to run them as soon as possible? In that case, the relevant metric is not execution time but elapsed time from submission to the completion of execution. (In other words, the time that we must wait before execution starts becomes significant.)

So in Foster's example, hardware speed is less important than how quickly one can access the hardware, a key advantage of external clouds. As companies evaluate moving to cloud computing, a lot has been written about how it changes the underlying economics of providing computing horsepower. But the idea of flexibility is an important one, especially since many companies seem to be pinning their hopes on the emergence of private clouds, which confer fewer economic advantages but do offer agility.
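
To make Foster's metric concrete, here is a minimal sketch. The numbers are hypothetical, purely for illustration: a supercomputer that runs a job in 2 hours but has a 24-hour batch queue, versus a cloud instance that takes 6 hours to run the same job but starts immediately.

```python
# Hypothetical illustration of Foster's metric: what matters is
# elapsed time from submission to completion, not raw execution time.

def time_to_solution(queue_wait_hours: float, execution_hours: float) -> float:
    """Elapsed time from job submission to completion of execution."""
    return queue_wait_hours + execution_hours

# Assumed numbers, not measurements: a fast supercomputer with a long
# batch queue vs. a slower cloud instance that starts right away.
supercomputer = time_to_solution(queue_wait_hours=24, execution_hours=2)
cloud = time_to_solution(queue_wait_hours=0, execution_hours=6)

print(f"Supercomputer time-to-solution: {supercomputer} h")  # 26.0 h
print(f"Cloud time-to-solution:         {cloud} h")          # 6.0 h
```

Under those assumed numbers, the slower machine wins by the metric Foster cares about, which is exactly his point.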

  1. Sun offered a cloud for HPC quite a while ago under the Network.com brand. It was not good for anything else due to the lack of general or web computing facilities, e.g. Internet services like load-balancing or storage suitable for a database.

    One possible option for rapid deployment is cycle harvesting, i.e. running software on otherwise idle machines, especially at night, the way SETI@Home (BOINC) does.

  2. Processors are already disgustingly fast as it is; subsystems are where it's at these days. Good article.

    1. I do lots of 3D animation for our products; processors are fast but always too slow… it depends on what field you are in. If all you are doing is word processing, you are right!

      Jon

  3. Yes, reducing power consumption is very important. I'm living in Haiti; in a country like that, the government fails to provide power 24/7, so sometimes you have to generate your own power for business continuity. I have an old server and a new one, and trust me, I have to shut down the old one every day because of its power consumption.

  4. High-performance computing systems must be properly maintained and effectively optimised to get more out of them.

