

Heterogeneous computing, where hardware vendors mix a variety of processors (graphics processors, CPUs, embedded chips or DSPs) on a server to increase energy efficiency and processing speed, will become a reality in the data center in the next decade, says an IBM executive. Such arrangements increase complexity and can cause headaches for developers and customers, but cloud computing could alleviate some of those problems.

“The cloud could make heterogeneous computing possible,” says Tom Bradicich, fellow and VP of Systems and Technology Group at IBM, who spoke with me yesterday about the changes IBM is seeing in server design. Commodity boxes packed with CPUs are less compelling, and hybrid computing is on the rise (although hybrid servers still account for a very small number of systems today). Packing servers with multiple CPUs is like throwing a team of day laborers at building a house. They’ll get the job done, but there’s likely a better way to divvy up the job among a smaller number of experts, who can do it with less wasted time and energy.

IBM makes one of these “expert” chips, called the Cell, which it’s pushing into data centers and high-performance computing. Other chip vendors would agree, among them Texas Instruments and Tensilica, which are pushing DSPs for specialty computing, and Nvidia, which is pushing GPUs. Even data center operators are experimenting with different CPUs for different tasks to create custom workflows that save energy.

The downsides of such customized machines are the upfront price and the difficulty of tweaking software to run on them. However, with the cost of energy a growing concern among data center operators, paying more up front for an energy-efficient server has become more acceptable. The software troubles can be offset by working closely with chipmakers that offer software development kits and tools. And one day, Bradicich thinks, the cloud will help.

Bradicich says he is working on a program that would let users define their workloads; the program would then offer up an optimized arrangement of processors for the job. The next step for a technology vendor is to ensure that software can then automatically provision the most appropriate hardware in the cloud. The other necessary item for clouds to enable seamless hybrid computing is a layer of software built on top of the cloud, so the user can run an application without worrying about whether the code is designed for the underlying processors.
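To make that idea concrete, here is a minimal sketch of the workload-matching step Bradicich describes. Everything in it – the Workload fields, the rules in recommend_processors, and the example job – is a hypothetical illustration of the concept, not IBM’s actual program; in practice the cloud layer would then provision the recommended hardware automatically.

```python
# Hypothetical sketch (not IBM's tool): a user defines a workload, and the
# software suggests a mix of processors suited to it before the cloud
# provisions the hardware.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    parallelism: str   # e.g. "data-parallel", "signal-processing", "general"
    memory_bound: bool


def recommend_processors(w: Workload) -> list[str]:
    """Return an illustrative processor mix for the workload."""
    if w.parallelism == "data-parallel":
        # GPUs handle wide data-parallel work; keep a CPU for memory-heavy jobs.
        return ["GPU", "x86 CPU"] if w.memory_bound else ["GPU"]
    if w.parallelism == "signal-processing":
        # A DSP takes the signal path; a general-purpose CPU runs everything else.
        return ["DSP", "x86 CPU"]
    return ["x86 CPU"]


if __name__ == "__main__":
    job = Workload(name="monte-carlo-pricing", parallelism="data-parallel",
                   memory_bound=False)
    print(job.name, "->", recommend_processors(job))
```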

13 Comments

  1. Ken Oestreich Friday, March 13, 2009

    I absolutely agree.

    Cloud providers (more accurately, Infrastructure-as-a-Service providers) will want to – and need to – differentiate. While Amazon’s EC2 is a “generic” x86 example, some providers will offer specific CPUs, e.g. SPARC, RISC, etc., and be able to charge for it. This is because certain users will demand that their code run on specific platforms but not others. Implicit here is that some clouds will *not* have virtualization in them. Again, another form of differentiation.

    BTW, I wonder if Sun, in their soon-to-be-announced cloud 3.0, will offer a “Solaris” and/or a “SPARC” cloud, as opposed to an undifferentiated one.

    It will be interesting if, in an SLA API, users can define specific hardware for specific workloads. Lots of possibilities there….

  2. I hate to differ, but I don’t think this will ever happen.
    Generic hardware beats dedicated hardware almost every day.
    See http://ophir.wordpress.com/2008/09/21/hardware-software-and-virtual-appliances-myths/ for some common myths.
    I just saw the new IBM blade server which has a Cell CPU in it.
    They want an extra $20K for almost the same chips you get on an Xbox for $300.
    And you need to rewrite the software.
    I doubt that the cloud will help them, or that anyone really wants to run SPARC these days. Why would anyone want to do it?
    There might be a future for dedicated algorithm services, but I hardly think they would run on dedicated chips.

  3. Who do you go to for custom applications? | TechBurgh Blog and PodCast Saturday, March 14, 2009

    [...] Hybrid Computers Will Hide in the Cloud (gigaom.com) [...]

  4. Hybrid Computers Will Hide in the Cloud | Digital Asset Management Monday, March 16, 2009

    [...] Continues @ http://gigaom.com/2009/03/13/hybrid-computers-will-hide-in-the-cloud/ [...]

  5. The Hunt for a Universal Compiler Gets $16M Tuesday, April 7, 2009

    [...] the Defense Advanced Research Projects Agency to develop a universal compiler that will run on heterogeneous hardware and multicore platforms, which are found in everything from supercomputers to embedded systems, [...]

  6. Is Microsoft Turning Away From Commodity Servers? Thursday, April 9, 2009

    [...] for better application performance without expending as many watts, they are experimenting with different kinds of processors that may be better-suited to a particular task, such as using graphics processors for Monte Carlo [...]

  7. Microsoft and Proprietary Chips : Beyond Search Friday, April 10, 2009

    [...] for better application performance without expending as many watts, they are experimenting with different kinds of processors that may be better-suited to a particular task, such as using graphics processors for Monte Carlo [...]

  8. The Cloud Makes Computers Truly Cheap and Truly Personal Monday, April 13, 2009

    [...] cloud, then the underlying hardware becomes less relevant. This holds true on the client side and in the server world as well, which means we may see the x86 architecture and Intel’s tremendous power begin to [...]

  9. Tom Bradicich Thursday, April 16, 2009

    Stacey, nice post… As we recently discussed, many of the issues plaguing our society today could be both better understood and better managed by proactively collecting, analyzing, and acting on existing and real-time data.

    The problems are vast and affect us all – the cost of congestion across the U.S. transportation systems nears $200 billion a year; the healthcare system loses over a billion a year to fraud. At its historic rate of improvement, the commodity x86 architecture won’t provide enough compute power to solve these massive and growing problems. Hence a hybrid, or “fit for purpose,” approach is needed, which in my view contains some combination or permutation of these components:

    – General purpose systems (e.g. x86)
    – High speed interconnect acceleration
    – Application acceleration
    – High performance algorithm acceleration

    Cloud computing is just one of the enablers that could make this all possible. Keep up the good work; I look forward to reading more of your posts!

    Tom Bradicich, IBM Fellow and VP, Systems Technology, IBM
    Twitter.com/DrEckz

  10. IBM Tries to Sell Enterprises on Workload-Specific Clouds Monday, June 15, 2009

    [...] Kloeckner: We make the hardware selections based on the workloads you want to run, and we optimize the workload for you. But because it is in the cloud, in terms of what do you see as a client as to how each different cloud behaves, it’s all entirely consistent. [...]
