
Summary:

Supercomputers these days are compute monsters. IBM’s latest, the Roadrunner, packs the power of 100,000 laptops stacked 1.5 miles high, embraces a unique mix of IBM’s Cell processor and ubiquitous x86 chips from AMD, and has the ability to calculate 1,000 trillion operations every second. Of course, trends in supercomputing generally trickle downstream to the rest of the computer-using population eventually.

The last time the world got this excited about supercomputers was in 1996, when a machine built by Intel and Sandia National Labs called ASCI Red breached the 1-teraflop level. But teraflops are so 20th century; now we’re getting jazzed up about IBM’s $100 million Roadrunner computer, which recently broke the petaflop barrier to become the fastest supercomputer…ever.

Something about big, round numbers excites the computing world, and a petaflop (one quadrillion floating-point operations per second) is pretty big and round. The technology industry’s excitement around Roadrunner and ASCI Red is understandable: they both signaled a big shift in supercomputing, from its core technologies to the tasks those machines were expected to do.
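To make the units concrete, here is a quick back-of-the-envelope sketch. The Roadrunner and ASCI Red figures come from this article; the rest is just powers of ten:

```python
# FLOPS units are plain powers of ten: a teraflop is 10**12
# floating-point operations per second, a petaflop is 10**15.
TERAFLOP = 10 ** 12
PETAFLOP = 10 ** 15

asci_red_flops = 1 * TERAFLOP     # ASCI Red, 1996
roadrunner_flops = 1 * PETAFLOP   # Roadrunner, 2008

# A petaflop is 1,000 teraflops -- "1,000 trillion operations per second."
print(PETAFLOP // TERAFLOP)                 # 1000
# So Roadrunner is roughly 1,000 times faster than ASCI Red was.
print(roadrunner_flops // asci_red_flops)   # 1000
```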

ASCI Red, with its thousands of commodity x86 processors, signaled the future of the supercomputing industry as the machines moved away from a glamorous assortment of specialty-built processors crammed into custom cabinets running Unix. Almost a decade earlier, in 1988, it was a Cray machine that broke the 1-gigaflop barrier.

As a general rule, supercomputers increase in performance 1,000 times every decade, although factors on the software side may limit growth in years to come.
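That rule of thumb implies performance roughly doubles every year, since ten consecutive doublings compound to about 1,000x (2^10 = 1,024). A minimal sketch of the arithmetic:

```python
# If performance grows 1,000x per decade, the implied annual growth
# factor g satisfies g**10 == 1000.
annual_growth = 1000 ** (1 / 10)
print(round(annual_growth, 2))       # 2.0 -- roughly a doubling each year

# Sanity check: ten such years compound back to about 1,000x.
print(round(annual_growth ** 10))    # 1000
```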

A Short History of the Modern Supercomputer

Cray’s first supercomputer, which marked the beginning of the industry as we know it, was installed in 1976 and ran at 160 megaflops. It cost $8.8 million. Like IBM’s Roadrunner, it was installed at Los Alamos National Lab. It was Seymour Cray’s first design to use integrated circuits, and it was shaped like a “C” to keep its internal wiring short and signal latency low.

In 1982, Cray introduced the first multiprocessor architecture for supercomputers. The entire processing power of that first Cray is less than the several gigaflops a cheap PC can deliver today.

That’s a common theme in supercomputing, as yesterday’s supercomputers become today’s cloud compute grids and the clusters of servers running a hedge fund’s algorithmic trading strategies.

The line between supercomputing, which was geared toward solving scientific problems, and high-performance computing, which demanded bulk processing power with less refinement, has blurred. Many supercomputers have been lumped in with high-performance computing, and because both can use commodity hardware and open-source software, their sky-high pricing has fallen.

Earlier this year, that led research firm IDC to shift its market data a bit and compress supercomputing and the high-performance computing systems costing more than $500,000 into the same category. That category, by the way, grew 24 percent last year to $3.2 billion. But since the rise of clustered computing back in 2002, most supercomputers have become less super and more like regular computers.

Unix lost out as the dominant operating system around 2004, when more than half of the computers on the Top 500 list ran Linux. Instead of the tightly packed internal wiring of Cray’s systems, today’s supercomputers use InfiniBand interconnects. Some of them can still cost millions to build, but when all is said and done, most are built on x86 processors running Linux.

Cell Side Computing & Beyond x86

The rise of the x86 architecture is one of the reasons Cray formed a partnership with Intel. There was a desire for a second source of chips after AMD’s production delays caused Cray a financial hiccup, but also a realization by Intel that supercomputing was now a growing market dominated by its processors.

In Nov. 2007, 71 percent of the Top 500 supercomputers contained Intel chips. Ten years ago that number was 2.6 percent, and five years ago it was 11 percent. The x86 trend and democratization of supercomputing is also a boon to makers of HPC systems, such as Rackable and Appro.

But the Crays of the world may not stand by Intel or AMD for very long. IBM’s newly launched Roadrunner, the fastest supercomputer working today (supercomputers have shorter heydays than a viral video star), runs on a combination of AMD x86 chips and IBM Cell processors connected over InfiniBand.

One of the reasons Roadrunner is unique is that IBM had to develop special software to work across both types of processors. The Cell architecture was designed for the PlayStation and is now morphing into a performance chip for other applications, an evolution IBM is likely to continue.

Supercomputers built from specialized processors are an emerging trend in the high-performance computing world, with players such as Nvidia bragging about their chips’ ability to crunch scientific data faster than general-purpose CPUs.

So far, an Nvidia-powered supercomputer hasn’t broken into the Top 500, but Steve Conway, IDC research vice president for HPC, says such alternative architectures may become more important in the next few years. If they do, it will be worth watching, because trends in supercomputing generally trickle downstream to the rest of the computer-using population eventually.

photos courtesy of IBM and Cray

  1. The Roadrunner has officially broken the Petaflop mark, but Google Search might be running at several Petaflops for some time already. See link:

    http://tech-talk.biz/2008/06/09/ibm-supercomputer-sets-record-or-rather-not/

  2. Don’t underestimate the software guys; functional languages and programming paradigms that allow for highly concurrent operation are experiencing a rebirth in all different sectors of the software ecosystem, and it’s already begun to spill over into ‘mainstream’ languages & frameworks. All those brains over time usually end up innovating.

  3. A virtualized HPC solution for building supercomputer capability with industry standard x86 systems, the ScaleMP vSMP Foundation solution currently supports 128 core and 1TB memory in a single system image.

  4. Yeah, I was going to say – with processors this powerful and computers this fast, won’t the actual software start limiting functionality?

  5. Stacey Higginbotham Tuesday, June 17, 2008

    Breki, Nima, the software issue has not gone unnoticed. We’ve got something coming up on that topic soon.

  6. [...] of the world’s fastest supercomputers as measured by the Top 500 nonprofit. This year it was IBM’s $100 million Roadrunner machine, which can reach speeds of 1 petaflop (about 1,000 trillion calculations per second). It [...]

  7. [...] has put out its twice-annual list of the fastest supercomputers, and there are few surprises. Roadrunner, IBM’s mammoth supercomputer that broke the petaflop record, holds the top spot. Big Blue is also the source of the lion’s [...]

  8. casualinfoguy Thursday, June 19, 2008

    Microsoft is doing an incredible job creating a general programming framework for implementing parallelized, multithreaded applications in their Concurrency and Coordination Runtime (CCR). I worked at Microsoft, but don’t take my word for it: http://channel9.msdn.com/shows/Going+Deep/Concurrency-and-Coordination-Runtime/ & http://msdn.microsoft.com/en-us/magazine/cc163556.aspx

  9. [...] The last time the world got so excited about supercomputers was in 1996 when a machine built by Intel and Sandia National Labs called ASCI Red breached the 1 teraflop level. Teraflops, are so 20th century for now we are all getting jazzed up about IBM’s $100 million Roadrunner computer that recently broke the Petaflop barrier to become the fastest supercomputer … ever. Stacey Higginbotham gives a short history of supercomputers and discusses about current Cell-based Petaflop supercomputer. Full Story [...]

  10. [...] which runs a new version of Microsoft Windows, is a testament to both the demand for and the democratization of computing power. Indeed, people who earlier might have turned to grids or supercomputers for their problems are [...]


Comments have been disabled for this post