
Summary:

The server market has experienced four phases of massive change over the last 25 years. Each time, the incumbent technology was replaced by “lesser” technology that offered to get the job done reasonably well but for a fraction of the price. Now it’s ARM’s turn.


Over the last 10 years, the consumer electronics and information industries have witnessed an explosion of innovation. The evidence is on our screens and phones every day, and touches our lives at work, at play, and on the go. One of the key drivers is an innovation spiral fed by the dynamics of the open source software movement. Thousands of programmers leverage the work of others to efficiently deliver unique value, all within a common software architecture framework. A magnifying effect akin to compound interest kicks in, and the innovation spiral takes root and grows like a tornado.

We’re experiencing a similar force in the hardware universe. Multiple hardware companies develop vertical solutions optimized for narrow problem sets, but around a shared baseline: the ARM architecture and ecosystem. And now, the pace of innovation we have enjoyed in mobile phones and tablets is about to invade the conservative world of the data center and its more than 30 million servers. A $50-billion market, and the landscape of an entire industry, is at stake.

Over 6 billion ARM processor cores shipped in 2009. There are two primary reasons ARM is such a popular platform. First, ARM approaches processor design with power consumption as the primary consideration, ahead even of performance. After all, how critical is the speed of the processor if the phone can’t last all day?

Second, and perhaps even more important, the ability to license ARM technology allows an entire ecosystem to build on the same basic design. Unlike x86 processors, such as those offered by Intel and AMD, ARM processors are licensed to scores of semiconductor design firms. Calxeda is new to the list and focused on developing ARM-based semiconductor platforms for the server market, which is ripe for innovation.

The server market has experienced four phases of massive change over the last 25 years. Each time, the incumbent technology was replaced by “lesser” technology that offered to get the job done reasonably well but for a fraction of the price. Mainframes were replaced by minicomputers, which were replaced by UNIX servers, which are now being pushed out by x86 servers. It won’t stop there. The cost of energy and space for data centers now approaches or exceeds the cost of the server and networking hardware itself. When you run 20,000, 50,000, or even hundreds of thousands of servers, a power-guzzling data center quickly becomes a barrier to business innovation and a significant cost hurdle for the business to clear. Green now means more than good stewardship and social responsibility. Green means big dollars and euros.

ARM-based servers will consume a fraction of the power and space demanded by today’s most efficient servers. Performance per “core” will be lower, but clusters of these efficient nodes will consume perhaps as little as 1/10th as much power to deliver comparable performance. Couple that with huge gains in performance density to realize massive savings potential in data center capital expenditures. Conventional wisdom in data center planning will undergo a rethink.
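
The “1/10th as much power” claim translates into striking numbers at fleet scale. Here is a rough back-of-envelope sketch; the per-server wattage and electricity price are illustrative assumptions, not vendor data, and the fleet size reuses the 50,000 figure from the article:

```python
# Rough annual electricity cost for a server fleet.
# All inputs below are illustrative assumptions, not measurements.
servers = 50_000          # mid-range fleet size from the article
x86_watts = 300           # assumed average draw per x86 server
arm_fraction = 0.10       # the article's "as little as 1/10th" power claim
price_per_kwh = 0.10      # assumed electricity price in USD
hours_per_year = 24 * 365

def annual_cost(watts_per_server: float) -> float:
    """Annual electricity cost in USD for the whole fleet."""
    kwh = servers * watts_per_server * hours_per_year / 1000
    return kwh * price_per_kwh

x86_cost = annual_cost(x86_watts)
arm_cost = annual_cost(x86_watts * arm_fraction)
print(f"x86 fleet: ${x86_cost:,.0f}/year; ARM fleet: ${arm_cost:,.0f}/year")
```

With these assumed inputs, the x86 fleet burns roughly $13 million a year in electricity versus about $1.3 million for the ARM cluster, before even counting cooling, which typically scales with server draw.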

Widespread adoption of ARM-based servers will face challenges, but the compelling economics and technology evolution will prevail. First, expect early adoption by the so-called hyper-scale internet data centers, where the software stack is relatively new, built on Linux, written in portable programming models, and has few dependencies on third-party software. Second, ARM is still a 32-bit architecture, limited in general to 4 GB of addressable memory per processor. That means it can’t run some of the newer software designed for 64-bit processors. For the many applications that break large problems into smaller bite-size chunks, this will not represent a major hurdle. In fact, some applications find 32-bit more efficient than 64-bit, producing even more savings.
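
The 4 GB ceiling is simply the reach of a 32-bit pointer; a quick sanity check:

```python
# A 32-bit address can name 2**32 distinct bytes, which is the
# 4 GB per-address-space ceiling of a 32-bit processor.
addressable_bytes = 2 ** 32
addressable_gib = addressable_bytes / (1024 ** 3)
print(f"{addressable_bytes} bytes = {addressable_gib:.0f} GiB")  # 4294967296 bytes = 4 GiB
```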

Looking out to the next five to 10 years, the expected enhancement of ARM to 64 bits, continual improvements in performance, and the ever-expanding software ecosystem will open up the rest of the server market for ARM server vendors to ply their wares in general-purpose applications and relational databases. The consumer ultimately wins, with lower-cost, pervasive information services available from great, big, green clouds powered by a little processor that could.

Karl Freund is VP of Marketing for Calxeda and Barry Evans is the CEO of the ARM-based server maker.

Image courtesy of Flickr user Torkildr.


  1. It’s a very interesting argument for ARM servers; however, I don’t think ARM has, or needs, the “next 5 to 10 years” to get to 64 bits (and into the server market). It’s very likely ARM would get there in 2 to 3 years, and it would likely occupy a “niche” market unless the “smart money” from Oracle, VM, Google, … moves into ARM. Above all, I’d not write off Intel!
    After this current “graphics obsession”, I guess Intel will move to the low-power, low-performance area! We’ll see the duel of the decade: Intel vs. ARM! That will be “consumer ultimately wins”, as you wrote.
    What do you think about 128-bit ARM?

    1. Lucian Armasu Sunday, January 16, 2011

      Intel can’t even make Atom use as little energy as an equivalent ARM chip, so how do you think they’ll manage to make their server chips that efficient? ARM is not just another type of chip that can be relegated to niche status. It’s a fully disruptive one, which means it will continue to replace Intel/AMD chips at every level once ARM chips get powerful enough. There’s no market Intel is in that ARM can’t and won’t attack once it gets into that performance range, which is only a matter of time.

      It’s much easier for ARM to rise in performance (while keeping or even lowering energy usage at the same time) than for Intel to lower the energy usage of its chips. And even if Intel’s chips somehow become as energy efficient (highly unlikely), there are still the size and price problems: Intel’s chips are much larger and cost much more than ARM chips.

  2. Here is a question: if there are such great benefits to ARM fabric in big data centers, why hasn’t Amazon switched yet? They have the muscle and the IT horsepower to get it done, and yet it seems they are pretty far from doing it.

    Unless I am missing something?

    1. Lucian Armasu Sunday, January 16, 2011

      Because nobody has even started selling ARM chips for servers yet. I don’t think Amazon will even consider it until 2012-2013, when Nvidia’s Project Denver shows up, which will probably be a quad- or octo-core Cortex-A15 chip at 2.5 GHz per core, coupled with a high-performance Nvidia GPU – that’s if they’re aiming for the highest possible performance.

      http://arstechnica.com/gadgets/news/2011/01/nvidias-project-denver-cpu-puts-the-nail-in-wintels-coffin.ars

      1. Corbett Baker Sunday, January 16, 2011

        Corel did this years ago with the NetWinder, and so did Cobalt (later Sun). I seem to recall they were quite nice for what they did: provide basic web hosting in an appliance form factor.
        Amazon provides a platform for the vast market, and right now most people are developing server software for x86-centric environments. I’d say Atom or VIA Nano has a better chance than ARM, at least at the moment.
        Think multi-transaction, low-CPU-intensity applications like Facebook, streaming, etc.

      2. I was quite the fan of Cobalt, and owned several racks. If I recall correctly, they were MIPS, albeit not far off from ARM. They were simply too far ahead of their time: moderately powered appliances. Blades should be a natural, however.

  4. I wouldn’t write off Intel

    1. No one is saying to write off Intel; they continue to push out more powerful chips that are becoming more and more power efficient. ARM has no intention of challenging Intel’s server dominance until 2014 (http://www.businessweek.com/news/2010-12-13/arm-plans-to-challenge-intel-s-server-chip-dominance-in-2014.html), and it’s going to be a long battle before one of the two technologies (or a new technology) comes out as the new champion.

  5. Interesting article. I guess we’ll just have to wait and see what evolves in the server market.

  7. Glad to see we started a good conversation. Lucian and Corbett have it right, imho. Servers will come out later this year through 2012 based on A9 and will focus on very large scale internet apps, or at the opposite end of the spectrum, very small servers. Different designs for different folks. And nobody should ignore Intel! We just want to offer people a choice for the right set of applications. If you are a Forrester subscriber, check out Frank Gillett’s paper called The Age of Computing Diversity; good read.

  8. >> UNIX servers, which are now being pushed out by x86 servers

    You’re kind of mixing OS and platforms here.
    I don’t see Unix (SysV/BSD/Linux) as being pushed out at all…

    1. OK, good point, mcpit. I should have said “Unix on RISC SMPs”. I’ve worked all my career in this space (HP, Cray, and IBM), and still see significant value for UNIX and RISC, especially for large integrated applications, RDBMS, and OLTP. But IDC’s data shows that UNIX/RISC has indeed seen massive defection to Lintel and Wintel. My belief is that this hegemony will begin to see incursion by ARM servers in applications that are embarrassingly distributed/parallel. It will take a while, but there’s no way 2020 looks like 2010.

