Hybrid Computers Will Hide in the Cloud


Heterogeneous computing, where hardware vendors mix a variety of processors (graphics processors, CPUs, embedded chips or DSPs) on a server to increase energy efficiency and processing speed, will become a reality in the data center in the next decade, says an IBM executive. Such arrangements increase complexity and can cause headaches for developers and customers, but cloud computing could alleviate some of those problems.

“The cloud could make heterogeneous computing possible,” says Tom Bradicich, fellow and VP of Systems and Technology Group at IBM, who spoke with me yesterday about the changes IBM is seeing in server design. Commodity boxes packed with CPUs are less compelling, and hybrid computing is on the rise (although hybrid servers still make up a very small number of systems today). Packing servers with multiple CPUs is like throwing a team of day laborers at building a house. They’ll get the job done, but there’s likely a better way to divvy up the job among a smaller number of experts, who can do it with less wasted time and energy.

IBM makes one of these “expert” chips, called the Cell, which it’s pushing into data centers and high-performance computing. Other chip vendors would agree, such as Texas Instruments and Tensilica, which are pushing DSPs for specialty computing, and Nvidia, which is pushing GPUs. Even data center operators are experimenting with different CPUs for different tasks, to create custom workflows that save energy.

The downsides of such customized machines are the upfront price and the difficulty of tweaking software to run on them. However, with the cost of energy a growing concern among data center operators, paying more up front for an energy-efficient server has become more acceptable. The software troubles can be offset by working closely with chipmakers that offer software development kits and tools. And, one day, Bradicich thinks the cloud will help.

Bradicich says he is working on a program that would let users define their workloads; the program would then offer up an optimized arrangement of processors for the job. The next step for a technology vendor is to ensure that software could then provision the most appropriate hardware automatically in the cloud. The other necessary item for clouds to enable seamless hybrid computing is a layer of software built on top of the cloud, so the user can run an application without worrying whether the code is designed to run on the underlying processors.
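To make the idea concrete, here is a minimal sketch of what such a workload-matching program might look like. Everything below is an illustrative assumption on my part, not IBM's actual tool: the processor names, the workload traits, and the suitability scores are all hypothetical placeholders.

```python
# Hypothetical sketch: a user declares traits of a workload, and the
# program ranks processor types by how well they match. The catalog
# and scores are invented for illustration only.

PROCESSOR_CATALOG = {
    # processor: relative suitability score per workload trait
    "x86 CPU": {"general": 3, "parallel": 1, "signal": 1},
    "GPU":     {"general": 1, "parallel": 3, "signal": 1},
    "DSP":     {"general": 1, "parallel": 1, "signal": 3},
    "Cell":    {"general": 2, "parallel": 3, "signal": 2},
}

def recommend(workload_traits):
    """Rank processors by total suitability for the declared traits."""
    scores = {}
    for proc, profile in PROCESSOR_CATALOG.items():
        scores[proc] = sum(profile.get(t, 0) for t in workload_traits)
    return sorted(scores, key=scores.get, reverse=True)

# A mixed parallel/signal-processing workload favors the hybrid chip.
print(recommend(["parallel", "signal"]))
```

In a cloud setting, the top-ranked result would then feed the provisioning layer Bradicich describes, so the user never has to pick hardware by hand.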


Tom Bradicich

Stacey, nice post… As we recently discussed, many of the issues plaguing our society today could be both better understood and then managed, by the act of proactively collecting, analyzing, and acting on existing and real-time data.

The problems are vast and affect us all – the cost of congestion across the U.S. transportation systems nears $200 billion a year; the healthcare system loses over a billion a year to fraud. Commodity x86 architectures, improving at their historic rate, won’t provide enough compute power to solve these massive and growing problems. Hence a hybrid, or “fit for purpose,” approach is needed, which in my view contains some combination or permutation of these components:

– General purpose systems (e.g. x86)
– High speed interconnect acceleration
– Application acceleration
– High performance algorithm acceleration

Cloud computing is just one of the enablers that could make this all possible. Keep up the good work; I look forward to reading more of your posts!

Tom Bradicich, IBM Fellow and VP, Systems Technology, IBM

Ophir Kra-Oz

I hate to differ, but I don’t think this will ever happen.
Generic hardware beats dedicated hardware almost every day.
See http://ophir.wordpress.com/2008/09/21/hardware-software-and-virtual-appliances-myths/ for some common myths.
I just saw the new IBM blade server which has Cell CPU in it.
They want an extra $20K for almost the same chip you get in an Xbox for $300.
And you need to rewrite the software.
I doubt that the cloud will help them, or that anyone really wants to run SPARC these days. Why would anyone want to do it?
There might be a future for dedicated algorithm services, but I hardly think they would run on dedicated chips.

Ken Oestreich

I absolutely agree.

Cloud providers (more accurately, Infrastructure-as-a-Service providers) will want to – and need to – differentiate. While Amazon’s EC2 is a “generic” x86 example, some providers will offer specific CPUs, e.g. SPARC, RISC, etc., and be able to charge for it. This is because certain users will demand that their code run on specific platforms but not others. Implicit here is that some clouds will *not* have virtualization in them. Again, another form of differentiation.

BTW, I wonder if Sun, in their soon-to-be-announced cloud 3.0, will offer a “Solaris” and/or a “SPARC” cloud, as opposed to an undifferentiated one.

It will be interesting if, in an SLA API, users can define specific hardware for specific workloads. Lots of possibilities there….
