On the Block: SiCortex's DeLorean-Style Green Supercomputer


SiCortex, a company that makes a green supercomputer using proprietary chips and some “Back to the Future” styling, is seeking to sell its assets by the end of June. (Check out what’s for sale here.) According to a story at HPCwire, SiCortex was seeking a third round of financing (it finalized a $37 million round last September), but one of its five venture backers pulled out. I called SiCortex CEO Chris Stone to get more information, but have not heard back.

Reportedly, the 5-year-old company was doing well, but in these hard economic times, it’s possible that a cash-strapped investor simply couldn’t front SiCortex the money to continue. EETimes reported a similar capital crunch leading to the closure of video processing chip firm Ambric last November. However, there may be an industry trend working against SiCortex as well. In general, supercomputers have moved from being proprietary systems to open systems built using commodity hardware and open-source software. Supercomputers are now defined by their jobs, not their hardware. While processors such as IBM’s Cell chip and Nvidia’s graphics chips are being used to augment the x86 CPUs in some HPC systems, for the most part, specialty chips are a dying breed. For example, earlier this year, SGI, which made proprietary machines for the HPC industry, filed for bankruptcy and sold its assets to Rackable Systems (which has since changed its name to SGI). So I wonder: is SiCortex’s lack of money a sign of a venture capital problem or a supercomputing industry problem?

John West over at insideHPC tells me he thinks the large upfront investment in SiCortex’s hardware, which it needed to recoup, was what ended up hurting it. He emailed that the company had sold 80 machines since launching its computers in 2007, and had a sales pipeline “tens of millions of dollars deep,” but wasn’t profitable. So it simply may have run out of cash. In that case, its failure may be a sign of both the venture industry’s reluctance to invest in capital-intensive businesses, and the difficulties facing a specialty hardware company today.

Below I’ve embedded an old video interview featuring the SiCortex personal supercomputer. Unfortunately, I didn’t show its main product, a rack of machines that can be accessed through a DeLorean-style door that lifts up rather than opening out. That, and the low power consumption, are pretty neat.


Herb Schultz - IBM Deep Computing

Although commodity clusters make up a larger and growing proportion of the number of systems deployed in the high-performance computing marketplace, the workload challenges facing the companies and organizations that do world-class innovation for a living are too complex to be managed appropriately with these “off the shelf” products.

For as long as electronic computation has been around, scientists and engineers have had to warehouse their most challenging problems, waiting for a system to come along that would be capable of handling them. That hasn’t changed; right now, leading laboratories, universities and R&D-based firms would like to refine models, run simulations, and analyze data streams, but are unable to, because the computational power required exceeds what is available to them by a factor of ten, or a hundred, or a thousand. Think about this: the fastest supercomputer on the most recently published list, an IBM system at LANL which uses both standard x86 and specialty Cell processors in a hybrid configuration, produces as many computations per second as the bottom 180 systems on the list combined. Specialty systems may make up a small fraction of the total systems delivered into HPC, but they are an outsized force when it comes to solving the world’s collective scientific problems.

Supercomputer vendors that seek to serve customers with the most challenging problems must make a long-term investment, and have a vivid imagination of what the requirements will be years in advance of delivery. IBM’s Blue Gene, the fastest, most energy-efficient supercomputer when it was delivered in 2005, was initiated as a full-fledged project in 1999. It contained numerous innovations that could be called “specialty” components, yet it adhered closely to programming, administrative and IT lab standards so that customers’ investments in software and skills were protected.

The reason start-ups seeking to develop and market “specialty” supercomputers fail has less to do with the market turning its back on such offerings in favor of commodity clusters, and more to do with the enormous investment needed to get from specs to final product, and a payback that takes years to arrive, if it ever does.

Standardized components are absolutely crucial to the HPC market, for they have helped break down price barriers; and for most of the HPC market, “off the shelf” products are fine. But there will always be a segment of the market that will have problems that lean out ahead of wherever the state-of-the-art may be, and those customers will continue to depend on the vendors that can produce something special for them.


The company is doing well, but still ends up in this situation? I thought anything Green would go these days…

Matt Reilly

The assertion that “proprietary is out/commodity is in” suggests a misunderstanding of the landscape.

1. SiCortex system software, application libraries, and programming model were all industry standard: MPI, OpenMP, C, C++, Fortran95. They were also “open” as in “open source.” So the software wasn’t proprietary.

2. If “proprietary” means “not x86,” then I think we’ve got a new definition of “proprietary.” How many new x86 licensees are there? None. If “proprietary” means “not Ethernet,” then you ignore most of the better InfiniBand implementations (those specially designed for cluster use, e.g. InfiniPath), the IBM Federation switch, and the Cray interconnect.

(Note that at the time of SGI’s sale, they were manufacturing x86/InfiniBand clusters and had been for a long time. Is that “proprietary” hardware? The irony is that if they’d fielded their large shared-memory x86 SMP (a distinctly non-commodity solution), the fate of the company might have been much different. SGI failed because they had no product differentiation that could produce real profits.)

SiCortex failed because they ran out of money. Raising money is hard in the best of times. The founders (I was one of them) took on raising money in 2002 (in the second worst capital market in recent memory) as a full time job. Raising money required full time dedication, knocking on lots and lots of doors — at prospective investors as well as at customer prospects — a thorough understanding of the competitive situation, and a good bit of luck. That didn’t come together this time.

Don’t let the SiCortex collapse serve as a bogus argument against innovation. The big companies don’t have all the answers, and the answers they do have are often so overconstrained by inertia, internal internecine games, and quarterly results that innovation is reduced to a slogan.

Think big. Work hard. Build stuff.
