
Summary:

Intel will rethink the market for its Larrabee chip, once destined to be a graphics processor. Does its failure to make an x86-based GPU mean it’s reaching the limits of x86 computing as we push our devices to extremes at the low and high ends?

Technology has enabled us to bring more brains and connectivity to devices ranging from digital picture frames to servers crunching scenes for the latest animated movie. But as we ask computers to take on more jobs, are we counting on a general-purpose architecture to deliver on tasks that may be better suited to specialized chips? Intel may have run into this problem with its announcement today that it will issue a software development kit, after delaying its own Larrabee graphics chip last week.

For Intel, the question becomes: how far can the x86 architecture stretch? Its Larrabee delay suggests that using x86 to develop a decent graphics processor may work, but the result can’t compete against specialty GPUs. Intel owns the corporate and personal computing markets, with AMD a distant second. But with mobile devices proliferating and requiring lower-power silicon, and with our increasingly visual world relying on graphics processors for faster processing, Intel has introduced Atom at the low end and focused on launching Larrabee for high-end graphics, with both using the x86 instruction set.

Abraham Maslow is often quoted as saying, “If all you have is a hammer, then every problem looks like a nail.” Intel, with its x86 monopoly, has looked at the problems of low-power cell phone chips and high-performance graphics processors and decided that x86 makes a good nail. But chipmakers pushing the ARM architecture for the mobile market and the embedded space clearly disagree, while Nvidia, AMD and even IBM have fled the confines of x86 when it comes to delivering graphics.

So far the jury is still out on whether Intel can unseat ARM with mobile and embedded vendors. And Intel’s delay of Larrabee reflected its inability to deliver comparable performance for the price when stacked up against the GPU vendors, said Jon Peddie, who tracks the GPU market. However, he’s less worried about the limits of the x86 architecture. Peddie said Intel plans to take what it has learned from Larrabee and develop a coprocessor for the high-performance computing market, where accelerators in the form of GPUs and processors like IBM’s Cell have gained traction.

However, Intel’s Larrabee decision drives home worries that as computing jobs fragment, we’re moving into a post-x86 world. If that’s true, should Intel also expand beyond x86 to ensure its growth, or should it make sure it owns the huge swath of middle ground where it’s hard to imagine x86 losing ground? “Intel has experimented with every competitive architecture that’s been built, and it keeps circling back to x86,” Peddie said. “They know, ‘We do x86 really well, so let’s just do it.’”

This means we’re likely to see some tweaks to Larrabee for the HPC market, and Intel will continue to try to bring research out of the labs and into the market, such as the low-power, 48-core chip for highly virtualized environments it announced last week. After all, a hammer is a pretty essential tool to own.

  1. One of the strengths of OS X from its NeXT heritage was the ability to run on both 68000 and x86. It took Apple a long time to admit this, but the step to ARM was a short one. In my early thinking on mobile, the ability of chips to gather into an ecosystem of sorts was an incredibly powerful possibility. The ability to have a mobile processor that works with a desktop processor, and perhaps a graphics chipset, is necessary for the future, and now as we step into the era of cloud services the boundary between local and remote resources is going to increasingly blur.

    Perhaps we’re equally seeing the limits of what developers are interested in developing. Software needs to become refined and useful rather than simply “powerful.”

  2. This is an interesting question, although I wish there were more technical meat to the article.

    I remember the RISC/CISC wars from years ago, and mostly thought it was settled with hybrid cores running who knows what internally, x86 asm itself essentially becoming a bytecode for a virtual machine running on the chip. (I believe this was originally thought up by Transmeta, which made a chip to run any assembly binary.) But I would imagine this translation has costs in the silicon, and as we go north of 8 cores, does it still make sense to do it at runtime? (A rough sketch of the translation idea appears after this comment.)

    I would not count Intel out, though; even if the RISC pendulum has swung back, they have the commercially failed, but still humming along in research, Itanium.

    Anyway, this is the limit of my silicon understanding; I would love to hear from someone who really knows this stuff. Is the x86 instruction set baggage that won’t scale? My intuition is that between smarter compilers and on-chip translation, the instruction set is immaterial.

    However, this must

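To make the runtime-translation idea in the comment above concrete, here is a minimal, purely illustrative Python sketch of expanding complex CISC-style instructions into simpler RISC-like micro-ops. The mnemonics (ADDMEM, MOVI, LOAD, STORE) are invented for this example, and real x86 decoders do this in hardware with far more sophistication; this is only a conceptual sketch, not how any actual chip works.

```python
# Toy illustration: treat a CISC-style instruction stream as "bytecode"
# that gets expanded into simpler, RISC-like micro-ops before execution.
# All mnemonics here are made up; real x86 decoding happens in hardware
# and is vastly more complex.

def decode(instruction):
    """Expand one CISC-style instruction into a list of micro-op strings."""
    op, *args = instruction.split()
    if op == "ADDMEM":
        # A read-modify-write memory add becomes three simple micro-ops.
        addr, reg = args
        return [f"LOAD  tmp, {addr}",
                f"ADD   tmp, {reg}",
                f"STORE {addr}, tmp"]
    if op == "MOVI":
        # A simple register load maps one-to-one onto a single micro-op.
        reg, value = args
        return [f"MOVI  {reg}, {value}"]
    raise ValueError(f"unknown instruction: {op}")

program = ["MOVI r1 5", "ADDMEM 0x1000 r1"]
for insn in program:
    print(insn, "->", decode(insn))
```

The point of the sketch is simply that the visible instruction set and the operations the cores actually execute can be decoupled, which is why the instruction set itself may matter less than it first appears.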
  3. devarajaswami Monday, December 7, 2009

    What’s all this nonsense about the Larrabee graphics chip using the x86 instruction set? Graphics chips are not CPUs or general-purpose processors. They have their own specialized instruction sets, which are used directly by the software graphics drivers on the general-purpose host CPU, which for Intel is an x86 CPU.

    So I totally fail to see the point of this article in trying to connect delays in Larrabee to some kind of momentous question about the host x86 CPU architecture.

    The delay in Larrabee just means that Intel feels it is not good enough at making high-performance GPUs. It doesn’t have anything to do with their CPUs.

  4. Great article, but I dare to differ on the point of Intel’s decision. I believe Intel’s hammer isn’t primarily the x86 architecture; it’s actually backward compatibility with incumbent OSes. Haven’t they noticed the OS doesn’t matter any more in the mobile space? Soon the same will happen in the netbook market, as ARM is spreading quickly.

  5. @devarajaswami
    Larrabee was/is being built using Pentium-class cores with additional goodies tossed in. It looks like they could not scale it to performance comparable to Nvidia and AMD, which used far more, but simpler, special-purpose cores.

  6. [...] in the graphics market where ironically the company just experienced a huge setback when it delayed its Larrabee graphics processor. [...]

