
Summary:

Dell showed off a box that contains 48 ARM-based servers, joining other vendors building boxes with processors that use the same architecture as the chips inside your cell phone. The servers consume less power and could find a home in web servers and Hadoop clusters.

PowerEdge C-Series ARM Server - Detail

Dell showed off a box that contains 48 ARM-based servers, joining other vendors such as HP in making servers that use the same processor architecture as the chips inside your cell phone. The move is a significant one for the chip world as well as for Dell: it shows how the needs of cloud and webscale customers have turned the chip and vendor community on its head, and it gives Dell the potential to build high-dollar systems for clients and stave off the commoditization happening in the world of x86 gear.

These servers will be available to certain customers of Dell’s webscale computing business, and will also be available later this year in select Dell labs and at the Texas Advanced Computing Center, so customers can test them out. Right now, the primary use cases where ARM can offer an advantage are web servers and Hadoop clusters. But next year ARM plans to release a design that would allow chipmakers to build a 64-bit chip using its architecture. When that happens, the types of jobs an ARM-based system can do, and perhaps its market share (see chart below), will grow tremendously. Those systems are likely to hit the market in 2013 and 2014.

ARM wants to cash in on the need for low-power data centers.

Dell’s new “Copper” server contains 48 systems on a chip, each built around a quad-core Marvell processor that uses the ARM architecture. Unlike Intel, which makes its own chips using the x86 architecture it owns, ARM licenses its architecture to a wide number of chipmakers, who then build the processors. The Armada processor uses four ARM-based cores to deliver 1.6 GHz of performance, about what a low-end Xeon chip from Intel offers. However, the Armada-based server runs at 15 watts, less than a quarter of the power a similar Xeon-based server would use. Intel, however, is testing its low-power Atom chips in servers that also run at 15 watts. That power savings is the most sought-after feature of the ARM chips.
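To see why that matters at scale, here is a rough back-of-envelope comparison. The 15-watt figure comes from the article above; the 65-watt Xeon node is an assumed number for illustration only, not a published Dell or Intel spec.

```python
# Back-of-envelope power comparison for a 48-node chassis.
# The ARM figure is the 15 W cited above; the Xeon figure is an assumption.
ARM_NODE_WATTS = 15       # cited per-node draw of the Armada-based server
XEON_NODE_WATTS = 65      # assumed draw of a comparable low-end Xeon node
NODES = 48                # server nodes in Dell's "Copper" chassis

arm_total = ARM_NODE_WATTS * NODES    # 720 W
xeon_total = XEON_NODE_WATTS * NODES  # 3,120 W
savings = 1 - arm_total / xeon_total  # roughly 0.77

print(f"ARM chassis: {arm_total} W, Xeon equivalent: {xeon_total} W")
print(f"Estimated power savings: {savings:.0%}")
```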

Power constraints have been an issue in data centers for years, as the demand for computing keeps rising. And ARM, with its designs built for battery-powered devices, has a power advantage. But bringing ARM into the data center requires two big shifts: one, software has to run on ARM-based chips, and two, vendors must design servers differently so ARM chips can communicate better.

When it comes to the software, things are progressing well. Just a few weeks ago, I watched Calxeda, a startup that is making ARM-based servers, show off WordPress, Hadoop nodes and other workloads running on ARM-based machines without a problem. Ubuntu supports ARM, as do certain programs in the LAMP stack, and OpenStack will show off ARM support later this month, according to Steve Cumings, executive director of Dell DCS marketing.

Getting the software in place will take time, as well as commitment from end users that ARM is a solution they will pay for. But it is happening now, with Cloudera and other firms testing out ARM-based servers. The launch of 64-bit ARM chips will help address more enterprise-level software as well. And while people ponder the software, others have been building out the systems side of the equation for years. Barry Evans, the CEO of Calxeda, started his company in 2008 after realizing the coming need for power efficiency in the data center.

Many cores have to communicate.

Like Dell, Calxeda is making ARM-based servers, but it’s taking a more systems-based approach. It has shown off a 5-watt server node that can be crammed, up to 120 at a time, into a 2U box, but it’s not yet in production. Dell’s Copper servers aren’t available to buy yet either; for now they are only for select customers to test.

Any system that contains hundreds of cores requires a fairly specialized communications mechanism to let all the cores work efficiently together. Dell accomplishes this by creating a fabric that allows the cores to communicate. Calxeda does the same thing with a specialized chip it designed. Other firms are catching on to the benefits of fabrics, such as Intel, which purchased Cray’s interconnect business last month, and AMD, which purchased SeaMicro, a company that also makes low-power servers and may one day use an ARM chip.

As the competition heats up and ARM makes its way into the data center, it opens up a new opportunity for Dell, HP and other vendors to add value, while also giving data center operators a much wider array of processors to choose from. So while efforts such as the Open Compute Project threaten to commoditize the x86 boxes Dell and others are selling, ARM-based systems still offer a chance to put IP into a system in a way that can result in higher margins.

And because ARM licenses its chip designs to a wide variety of companies, each company can design a core for a specific workload or maybe even a specific customer. Heck, someone could design an Open Compute-certified ARM chip if they wanted to. This will boost innovation and could also lower the cost of silicon, or at least offer more value for the end user. Let’s see what the next two years bring. Or you can come to Structure 2012 in June, where the notion of ARM-based servers and fabrics will be a hot topic.


  1. Keith Townsend Tuesday, May 29, 2012

    Wasn’t going to comment on the story until I saw this post. I can’t say if GigaOm has a hidden agenda or not, but more competition isn’t just good for Google; it’s good for consumers of cloud services as well. It’s bad for Intel and AMD.

    There is a place for systems with “1.6 GHz of performance”; if these processors are efficient at what they do, then there’s a place for them in the market. Believe me, Dell will discover if there’s not.

    1. And of course 1.6GHz is just the start of the conversation – not the middle or the end. Expect to see 40-bit and 64-bit ARM processors coming to the market far faster than people expect.

  2. While I will agree with you that some of the posts on GigaOm seem very much like vendor drivel, and don’t seem editorial or journalistic in nature, I definitely think you have the sponsors incorrect. I don’t see any evidence of the bias being toward Google, so why don’t you enlighten us with your proof that Google is the sponsor of the above article.

    1. Stacey Higginbotham (in reply to deeceefar2) Tuesday, May 29, 2012

      Brett is a known troll on the site. No one sponsors our articles, and your point below on workloads is a good one. I’ve covered that before in some of my other posts on SeaMicro and the microserver segment in general.

  3. This is further evidence of the change in computing scale and of the markets behind the scenes responding. The reality is that for the majority of web server workloads, processors such as these are more than adequate. With a cheaper total cost of ownership, this is going to be one of many different specialized processor markets cropping up in the gaps of the previous computing paradigms. When you get to cloud scale, the economics change such that it no longer makes sense to lock yourself to traditional server configurations, and there is plenty of new innovation left in the in-between server markets.

  4. Tech Marketer Wednesday, May 30, 2012

    Great information. Today, there are many companies offering hosted virtual servers. But choosing the right cloud service provider is the main key.

  5. While it’s nice to finally see some ARM cores in the server space for profit, as usual HP didn’t do a good job with their sled design. Remember that it’s a 3U case, so there is plenty of room in the active PCB space to put far more ARM SoCs in there; I would have placed at least 8 standard quad-core ARM Cortex SoCs on generic SO-DIMM form-factor slots in that space, alongside more RAM slots, for a start…
    http://armdevices.net/2011/03/04/toradex-shows-tegra2-computer-on-so-dimm-form-factor/ style. That way you can start with half-populated slots for the current 4-SoC/16-core sled product and populate it with more ARM SoCs on SO-DIMMs as time passes.

    Also, I see a large problem with HP picking the Marvell ARMADA XP series SoC, as the ARMADA XP does not have, and never had, a generic NEON SIMD unit inside, unlike all Cortex SoCs to date. That means many of the Linaro NEON SIMD optimizations in the current NEON-optimized ARM Linux distros will of course not function on these servers, making them look far slower than they could be in real-life workloads. “No NEON, no good” is the lesson here, as even Nvidia discovered when it quickly replaced its older Tegra 2 SoCs that lacked NEON SIMD. (A quick way to check for NEON on a given box is sketched below.)
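    For reference, a minimal sketch of that check on an ARM Linux machine: it just looks for the “neon” flag on the Features line of /proc/cpuinfo. It assumes a standard ARM Linux kernel; the path and flag name here are generic, not specific to any vendor’s box.

    ```python
    # Minimal sketch: detect a NEON SIMD unit on an ARM Linux machine by
    # scanning the "Features" line the kernel exposes in /proc/cpuinfo.
    def has_neon(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.lower().startswith("features"):
                    return "neon" in line.lower().split()
        return False  # no Features line found (e.g. not an ARM kernel)

    if __name__ == "__main__":
        print("NEON present" if has_neon() else "No NEON unit detected")
    ```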

  6. LOL, and when I say HP I mean, of course, Dell; not that it makes much difference, as they both usually design their sleds badly.
