Supercomputing: Now Less Super, More Computing


The last time the world got this excited about supercomputers was in 1996, when a machine built by Intel and Sandia National Labs called ASCI Red breached the 1-teraflop level. But teraflops are so 20th century; now we’re getting jazzed up about IBM’s $100 million Roadrunner computer, which recently broke the petaflop barrier to become the fastest supercomputer…ever.

Something about big, round numbers excites the computing world, and a petaflop (a quadrillion floating-point operations per second, a measure of how fast a computer computes) is pretty big and round. The technology industry’s excitement around Roadrunner and ASCI Red is understandable: both signaled a big shift in supercomputing, from its core technologies to the tasks it was supposed to do.
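For a sense of scale, the flops prefixes are just powers of 1,000. This back-of-the-envelope Python sketch (my own illustration, not anything from IBM) times a naive loop to estimate the interpreter’s own flops; pure Python carries huge overhead, so the point is only to show how far even a fast PC sits below a petaflop:

```python
import time

# Flops = floating-point operations per second; prefixes are powers of 1,000.
GIGAFLOP = 1e9   # the Cray barrier broken in 1988
TERAFLOP = 1e12  # ASCI Red's barrier, 1996
PETAFLOP = 1e15  # Roadrunner's barrier, 2008

def crude_flops_estimate(n: int = 1_000_000) -> float:
    """Time n multiply-add operations in pure Python.

    Interpreter overhead means this badly understates what the underlying
    hardware can do -- it only illustrates the unit, not real performance.
    """
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc = acc * x + 1.0  # 2 floating-point ops per iteration
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

if __name__ == "__main__":
    est = crude_flops_estimate()
    print(f"This interpreter: roughly {est / 1e6:.0f} megaflops")
    print(f"A petaflop machine is ~{PETAFLOP / est:,.0f}x faster")
```

Even granting that compiled code on the same PC would run orders of magnitude faster than this loop, a petaflop machine is still a different universe.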

ASCI Red, with its thousands of off-the-shelf x86 processors, signaled the future of the supercomputing industry as machines moved away from a glamorous assortment of specially built processors crammed into custom cabinets running Unix. Almost a decade earlier, in 1988, it was a Cray computer that broke the 1-gigaflop barrier.

As a general rule, supercomputer performance increases 1,000-fold every decade, although factors on the software side may limit growth in years to come.
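The milestones cited in this piece roughly bear that rule out. A quick sanity check in Python, using only the dates and figures mentioned above ("about a decade" hides some slack on either side):

```python
# Milestones cited in the article: gigaflop (Cray, 1988),
# teraflop (ASCI Red, 1996), petaflop (Roadrunner, 2008).
milestones = [(1988, 1e9), (1996, 1e12), (2008, 1e15)]

for (y0, f0), (y1, f1) in zip(milestones, milestones[1:]):
    factor = f1 / f0                 # performance jump between milestones
    years = y1 - y0                  # elapsed time
    annual = factor ** (1 / years)   # implied yearly growth rate
    print(f"{y0} -> {y1}: {factor:,.0f}x over {years} years "
          f"(~{annual:.2f}x per year)")
```

Each 1,000x jump took eight years in one case and twelve in the other, which works out to roughly a doubling of peak performance every year on average.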

A Short History of the Modern Supercomputer

Cray’s first supercomputer, which marked the beginning of the industry as we know it, was installed in 1976 and ran at 160 megaflops. It cost $8.8 million. Like IBM’s Roadrunner, it was installed at Los Alamos National Lab. It was the first to run on integrated circuits, and it was shaped like a “C” to keep its twisted-pair wiring short and the resulting latency low.

In 1982, Cray introduced the first multiprocessor architecture for supercomputers. The processing power of that first Cray is less than the several gigaflops most cheap PCs deliver today.

That’s a common theme in supercomputing, as yesterday’s supercomputers become today’s cloud compute grids and the clusters of servers running a hedge fund’s algorithmic trading strategies.

The line between supercomputing, which was geared toward solving scientific problems, and high-performance computing, which required bulk processing power and less refinement, has blurred. Many supercomputers have been lumped in with high-performance computing, and because both can use commodity hardware and open-source software, their sky-high prices have fallen.

Earlier this year, that led research firm IDC to shift its market data a bit and fold supercomputing and high-performance computing systems costing more than $500,000 into the same category. That category, by the way, grew 24 percent last year, to $3.2 billion. But since the rise of clustered computing back in 2002, most supercomputers have become less super and more like regular computers.

Unix lost out as an operating system around 2004, when more than half of the computers on the Top 500 list ran Linux. Instead of the tightly coupled twisted-pair wiring of Cray’s system, today’s supercomputers use InfiniBand. Some of them can still cost millions to build, but when all is said and done, most are built on x86 processors running Linux.

Cell Side Computing & Beyond x86

The rise of the x86 architecture is one of the reasons Cray formed a partnership with Intel. There was a desire for a second source of chips after AMD’s production delays caused Cray a financial hiccup, but also a realization by Intel that supercomputing was now a growing market dominated by its processors.

In Nov. 2007, 71 percent of the Top 500 supercomputers contained Intel chips. Ten years ago that number was 2.6 percent, and five years ago it was 11 percent. The x86 trend and democratization of supercomputing is also a boon to makers of HPC systems, such as Rackable and Appro.

But the Crays of the world may not stick with Intel or AMD for very long. IBM’s newly launched Roadrunner, the fastest supercomputer working today (supercomputers have shorter heydays than a viral video star), runs on a combination of AMD’s x86 chips and IBM’s Cell processors connected via InfiniBand.

One of the reasons Roadrunner is so notable is that IBM had to develop special software that works with both types of processors. The Cell architecture was designed for the PlayStation 3 and is now morphing into a performance chip for other applications, a path IBM is likely to follow again in the future.

Supercomputers composed of specialized processors are an emerging trend in the high-performance computing world, with players such as Nvidia bragging about their chips’ ability to crunch scientific data faster than general-purpose CPUs.

So far, an Nvidia-powered supercomputer hasn’t broken into the Top 500, but Steve Conway, IDC research vice president for HPC, says such alternative architectures may become more important in the next few years. If they do, it will be worth watching, because trends in supercomputing generally trickle down to the rest of the computer-using population eventually.

photos courtesy of IBM and Cray



It’s hard to believe that machines with such enormous speed can exist. But things are happening in seconds that we never dreamed of. Here’s hoping such things can help us evolve further.

Stacey Higginbotham

Breki, Nima, the software issue has not gone unnoticed. We’ve got something coming up on that topic soon.


Yeah, I was going to say – with processors this powerful and computers this fast, won’t the actual software start limiting functionality?

Jeff Stemler

ScaleMP’s vSMP Foundation is a virtualized HPC solution for building supercomputer capability out of industry-standard x86 systems; it currently supports 128 cores and 1TB of memory in a single system image.


Don’t underestimate the software guys. Functional languages and programming paradigms that allow for highly concurrent operation are experiencing a rebirth in all different sectors of the software ecosystem, and it’s already begun to spill over into “mainstream” languages and frameworks. All those brains usually end up innovating over time.
