7 Comments


At the SC 08 show that ends today in Austin, I was struck by how much the lines between supercomputing and corporate computing have blurred. The show even had a panel on high-performance computing and cloud computing! But after visiting with vendors of all types and sizes, I realized that since supercomputers can be built with commodity chips and networking gear, high-performance computing isn't really about the hardware the way it was back in the days of Cray. Today it's all about the software.

Heck, IBM's Roadrunner, currently the fastest supercomputer in the world, runs on AMD x86 chips and the Cell processor found in millions of PlayStation 3 gaming consoles. But it's the software that integrates those two types of chips that makes the computer interesting. And software is what will enable HPC systems to keep moving out of the scientific niche into corporate offices, and even onto workstations for traders and researchers.

Reza Rooholamini, director of engineering at Dell, reinforced his boss's keynote, which talked about the fourth wave of supercomputing. He pointed out that the next generation of supercomputers will rely most heavily on manageability and other software features to attract customers. That will let Dell drive high-performance computing down to the level of workstations and smaller professional nodes. "Our strategy from the inception…was how can we take this high-end expensive technology and make it available," Rooholamini says. "This fourth wave is a focus on manageability, scalability, high availability and tools automation."

This sentiment was echoed by John Lee, V-P of the Advanced Technology Solutions Group at Appro, a company that builds and delivers custom high-performance computers to customers ranging from Renault to Lawrence Livermore National Lab. Lee said the HPC market is attracting new customers who don't have the experience or inclination to build and customize their own machines. When it comes to programming and operating HPC systems, those corporate customers also lack the free labor provided by students who work at labs or universities, making the software and services piece of the equation even more important.

“Instead of a government lab where they understand the bleeding edge, now we’re talking to financial institutions and gas and oil guys who know they are behind the curve and so they rely on the vendors to make sure it will run fine,” Lee says.

So while there will always be niche players such as SiCortex, which is building custom semiconductors for the HPC set, it’s far more likely that the key to growing the market for these systems will be software — a fact underscored by Microsoft’s entry into the space in 2005 and bolstered by the software giant’s push into a desktop supercomputer offered by Cray. “Thirty-three years ago people asked Bill Gates ‘Why are you getting into computers?'” said Jeff Weirer, a senior product manager at Microsoft. “At that time Bill Gates had a vision of a PC on every desk and this is really just the evolution of that vision.”

As HPC moves downstream, plenty of vendors are lining up to make supercomputing look pretty much like personal or corporate computing. Since few people could really define a supercomputer outside of the types of jobs it does, those vendors appear to be succeeding.

  1. [...] GigaOm makes a good point that SuperComputing is more and more about software.  Everyone knows how to build large clusters these days.  The software and infrastructure to tie them together to solve meaningful problems is the critical piece.  We’re working hard to solve real problems that are difficult on traditional infrastructures. [...]

  2. Very interesting read. How DO we define a supercomputer, anyway? I mean, even retailers are getting to the point where they are selling dual-core processors to the tune of the latest and really fast Intel and AMD x85 processors, which is what the Roadrunner uses! I can't tell the difference either. I guess that's the point of the article. Thanks for the info.

  3. typo correction: x86

  4. Yes, you need super software to gear up these tens of thousands of CPUs; to me it's just like coordinating a big team, actually a huge team. Things can easily get very complex. By the way, I'd love to see some open-source work on this front.

  5. [...] supercomputer) and the efforts they are making toward cloud-based supercomputing. The lines between scalable corporate computing and high performance computing have blurred, but Simon’s post goes into detail about bringing virtualization to these [...]

  6. [...] being proprietary systems to open systems built using commodity hardware and open-source software. Supercomputers are now defined by their jobs, not their hardware. While processors such as IBM’s Cell chip and Nvidia’s graphics [...]

