
Summary:

Getting to next-generation systems in high-performance computing has inspired technologies that we now use every day in data centers, but as the drive for exascale computing continues, it seems ingenuity is running out. But is power consumption the real hurdle for bigger systems?

The first petaflop supercomputer, IBM's Roadrunner.

The quest to develop next-generation systems in high-performance computing has inspired technologies such as InfiniBand and parallel processing that have made their way into data centers, but as the drive for exascale computing continues, it seems ingenuity is running out. The government sees power consumption as the biggest problem and cost associated with exascale HPC (that’s a billion billion, or 10^18, calculations per second), but Andrew Jones, writing at HPCwire, argues that power isn’t the primary problem; programming is.

Power is a problem for exascale computing, and with current budget expectations is probably the biggest technical challenge for the hardware. Demonstrating the value of increased investment in supercomputing to funders and the public/media is probably an urgent challenge, too. But the top roadblock for achieving the hugely beneficial potential output from exascale computing is software. There are many challenges to do with the software ecosystem that will take years, lots of skilled workers, and sustained/predictable investment to solve.

I’ve seen this debate play out in the comments here at GigaOM on stories like this one, and find myself wondering if we have indeed relied on the “easy” fix of Moore’s Law to carry us forward in terms of performance. But now, as chip manufacturing limits and power consumption push that road toward its end, the hardware industry is trying to deliver new forms of silicon, such as chips based on memristors or neuromorphic designs modeled on the brain.

But before we talk about a wholesale shift in hardware platforms, Jones, of the Numerical Algorithms Group, asks us to consider software. Parallel programming is still in its early days when it comes to harnessing the massive compute available in a supercomputer, and Jones argues that solving the newly identified problems of exascale computing will take large teams of experts and long-term investment.
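To make that concrete, here is a minimal sketch of what even the simplest shared-memory parallelism looks like in C with OpenMP. (The array, its contents, and the compile command are illustrative assumptions on my part, not anything from Jones’s piece.) Even this toy sum needs an explicit reduction clause to avoid a data race, and it says nothing about the load balancing, fault tolerance and inter-node communication a real exascale code must also manage:

    /* Toy OpenMP reduction: sum 10 million doubles across threads.
       Compile with: gcc -O2 -fopenmp sum.c -o sum */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 10000000L

    int main(void) {
        double *a = malloc(N * sizeof(double));
        if (a == NULL)
            return 1;
        for (long i = 0; i < N; i++)
            a[i] = 1.0;  /* known values so the result is easy to check */

        double sum = 0.0;
        /* reduction(+:sum) gives each thread a private partial sum and
           combines them safely at the end; without it, concurrent
           updates to 'sum' would be a data race. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        free(a);
        return 0;
    }

Scaling this idea from a handful of threads on one chip to the millions of cores in an exascale machine is exactly the kind of problem Jones says will take years of sustained investment to solve.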

I’d also argue that the HPC industry needs to make itself attractive to the folks who are excited by solving these kinds of problems but who might currently be founding startups or working at webscale companies wrestling with similar problems in other areas. Perhaps bringing some of these new, software-savvy minds into the HPC space could help spark the programming innovation that Jones thinks we need.


  1. Terry Stratoudakis Friday, September 2, 2011

    Specialized hardware like FPGAs will also help HPC.

  2. For as long as I have been working with computers (a long time), software has always lagged WAY behind the hardware. It’s always been easier to throw hardware at a problem. When Moore’s law finally runs out or even slows down, then software might actually start catching up.

  3. The author presents (software development for) parallel computing as a new ‘next generation’ challenge. That is plainly wrong. Amdahl’s famous article “Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities” was published in 1967, so people have been discussing these challenges for more than four decades. (Amdahl’s formula is spelled out after these comments.)

    1. I agree with Victor.

  4. Also, use of GPUs will help, but only for certain types of HPC apps where vector processing is useful.

  5. This is very energizing information for the new generation; I’d encourage newcomers to go through all of it thoroughly.
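For readers unfamiliar with the reference in comment 3, Amdahl’s 1967 argument boils down to one formula (this is the standard textbook form, not something from the post): if a fraction p of a program’s work can be spread across N processors, the best possible speedup is

    \[
      S(N) = \frac{1}{(1 - p) + p/N}
    \]

Even with p = 0.99, the speedup can never exceed 1/(1 - p) = 100x no matter how large N grows, which is why the serial fraction of the software, not the amount of hardware, sets the ceiling at exascale.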
