Intel: We’ve always been serious about microservers. No, really

Sometimes it’s fun to watch a giant tap dance.

That’s essentially what happened today when chip giant Intel (s intc) hosted a call with Matt Adiletta, an Intel Fellow, who all the way back in 2006 was charged with figuring out what blade servers were all about. His journey of discovery took him to nascent cloud service providers, financial service CIOs, even Andy Bechtolsheim and has culminated in Intel’s embrace of what it calls microservers — highly dense, low-power machines aimed at emerging workloads.

Adiletta’s narrative danced around the fact that Intel faces growing competition in this sector from established and new chip firms using the ARM architecture, and that Intel has been fairly late to the microserver party, although it did coin the term. Even today, while parading an Intel Fellow before the press, the chip giant seemed decidedly unenthusiastic about the segment and reluctant to claim it will be a big business.

Wait: how big is this microserver market?

When asked whether microservers would represent more than 10 percent of the server market in the next three to four years (a figure Intel has stuck with since 2011), Adiletta hedged: “I think it’s too early to tell, but it’s a reasonable first approximation … the software is evolving … but I think what we have to do is assume it is at least that.” He then deflected by noting that, while he and Intel may be unsure, “Our customers don’t know either.”

Adiletta also deflected questions about Intel’s decision to buy interconnect assets, purchases that could lead to fabrics for such highly dense servers or let Intel integrate a switch onto a system on a chip for scale-out environments. Instead, the call was an attempted history lesson on how long Intel has believed in this sector, despite the fact that its initial 2011 foray into microservers was rushed and looked hastily assembled, a reaction to ARM getting aggressive about the data center market.

Now that ARM has a bevy of server makers and chip firms embracing its architecture in the data center, a growing software ecosystem, and 64-bit chips coming next year, Intel seems to be trying to walk the line between downplaying the market and assuring customers that it is ready for “wimpy cores.”

Intel embraces the big.LITTLE strategy too

In general, Adiletta took every opportunity to point out that Intel has the chops to manage the data center and give enterprise customers what they want, even with a lower-performance, low-power processor such as Atom, while underplaying the architectural changes Intel has been making to Atom to ready it for the server market. He also echoed the big.LITTLE strategy that ARM has laid out for its next-generation chips: faster, brawnier Xeon cores combined with lower-performance, more power-efficient Atom cores.

He offered the example of a Hadoop cluster, a common use case for parallelized wimpy cores (see here for an x86 example or here for an ARM-based one). The name nodes that dispatch processing jobs to the data nodes work better on a more powerful Xeon core, he said, while the processing itself could be handled by the smaller Atom cores.
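That division of labor can be sketched as a toy role assigner: coordination work goes to brawny (Xeon-class) nodes, parallel processing to wimpy (Atom-class) nodes. This is a hypothetical Python illustration of the idea, not how Hadoop actually assigns roles (Hadoop does that through its own configuration), and all names below are invented:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    core_class: str  # "brawny" (Xeon-like) or "wimpy" (Atom-like)

def assign_roles(nodes):
    """Give the coordination role to brawny nodes, worker roles to wimpy ones."""
    roles = {}
    for node in nodes:
        roles[node.name] = "name-node" if node.core_class == "brawny" else "data-node"
    return roles

cluster = [
    Node("xeon-0", "brawny"),
    Node("atom-0", "wimpy"),
    Node("atom-1", "wimpy"),
]
print(assign_roles(cluster))
# {'xeon-0': 'name-node', 'atom-0': 'data-node', 'atom-1': 'data-node'}
```

The point of the split is that the name node’s metadata and scheduling work benefits from single-thread performance, while the embarrassingly parallel map tasks care more about aggregate throughput per watt.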

In the end, this call cemented what we already know about the coming fight between Intel and those pushing ARM-based products in the server market — Intel thinks it has the legacy software and understanding of what server customers need, while ARM will tout core designs that will consume less power.

And as much as it can’t stand to admit it, Intel is worried about losing the microserver part of its server business to ARM — a business that will probably end up being more than 10 percent of the market.

5 Responses to “Intel: We’ve always been serious about microservers. No, really”

  1. Pilgrim Beart

    Great piece.
    Intel’s own-fab model doesn’t scale, so it is confined to a shrinking “high cost, low volume” niche.
    ARM’s IP model does scale, so the ARM ecosystem can ride the cost vs. volume curve all the way down to ubiquity – as it has already done in mobile.
    Also, efficiency (vs. brute force) is the law-of-physics “speed limit” which eventually dominates every sector (our brains consist of billions of slow processors for the same reason).
    In the history of mankind more cycles have already been executed on ARM cores than on all other processors combined.

  2. Doesn’t sound exactly like big.LITTLE to me. ARM’s low-power and high-performance cores sit on the same SoC. This just sounds like they will use both Atom and Xeon chips in a server rack or something; I doubt switching between them will be as efficient.

    Anyways, the problem for Intel is that they are very reluctant to even promote Atom for micro-servers, and it shows from how they talk about it. They have a conflict of interest, because they’d rather sell the much more profitable “bigger” chips.

    The reason why this is a problem for Intel is because ARM has absolutely no problem trying to sell ARM chips for servers. In fact they have all the incentive in the world to do it, while Intel has the least incentive to do it. As Clayton Christensen puts it, Intel will be “happy to concede the low-end, non-profitable (for them) market to their disruptive competitors”.

    This is why Intel will ultimately lose all markets to ARM (could take a decade or more, though). Because ARM thrives on extremely cheap cores, while for Intel it’s absolutely VITAL for the company’s long term survivability to be able to sell high-margin chips, because that’s how their company is built.

    As ARM chips get ever more powerful and “good enough” for most devices (including laptops and desktops in the coming years), Intel will need to compete with ARM’s $20 chips instead of selling $200 chips. Look at Intel’s 17W Ivy Bridge laptop chips right now: they are around $250 a piece. That’s absolutely unsustainable for Intel in the long term. ARM is very close to reaching that performance level (only ~3x behind), and its chips will be an order of magnitude cheaper.

    Intel simply can’t survive in that environment – not in the consumer market at least. They’ll probably manage to survive as a company for a decade or more in supercomputers and whatnot, but it’s only a matter of time before ARM chips get them there, too. In fact Nvidia’s Project Boulder is already oriented towards super-computers, too.

    • @Lucian Finally somebody brought out the price-point aspect in the Intel v/s ARM+ (ARM eco-system) debate. Usually, it’s always along the lines of how Intel will blow away ARM+ with their next iCore chip :)