The compute and server world is changing rapidly, with companies such as Amazon, Facebook and Google dominating the web world and creating new lines of business based on ubiquitous connectivity and data. But as these businesses are built on top of tens of thousands of servers — possibly hundreds of thousands, if some high-end estimates are to be believed — they could change the value chain of server and silicon companies. We now stand at an inflection point where the needs of these webscale operators are influencing how servers are made and could even influence the chips inside them.
On one side of this transition is the commodity hardware built around x86 silicon that iterates on the old way of building out servers but strips away the nonessentials, such as redundant power supplies and vanity-sized chassis. The other side consists primarily of ARM-based architectures put forth by new and existing chip companies and server makers.
Depending on which view of the world wins dominance, the server market could change radically: It could not only bring in new chip vendors but also remake the entire concept of a server.
Which side stands to win? Let’s take a closer look at both.
Legacy hardware and the legacy advantage
Intel’s x86 architecture has a rich history in the server world; as of 2010, x86 chips were in 90 percent of the world’s servers, according to Forrester. As the legacy contender, Intel’s architecture has the advantage that most applications have been written to run on it and most components have been designed with x86 needs in mind.
There is an entire network of manufacturing and design engineers making everything from heat sinks to power adapters that work best inside systems running x86 hardware. In April, Facebook took the Intel paradigm a step further by building its Open Compute initiative around AMD and Intel chips. As Andrew Feldman, the CEO of SeaMicro, noted on a panel at GigaOM’s Structure 2011 event in June, the Open Compute effort stripped a lot of the value from the hardware but left the silicon price point intact. With Open Compute the servers are fairly basic, and the parts and components (even the motherboard) are designed to be interchangeable among vendors. The chips at the heart of the box, however, are x86-based, which leaves little room for commoditizing the silicon itself, since Intel controls the architecture and its licensing. That makes it hard to innovate at the chip level unless the innovation comes from AMD or Intel.
Last month, prior to an event launching the Open Compute Foundation, which is dedicated to taking Facebook’s vision of highly efficient webscale computing to the masses, Frank Frankovsky, the director of the foundation and the chief architect at Facebook, said:
The main thing we want to achieve is accelerating the pace of innovation for scale computing environments and by open sourcing some of the base elements we will enable the industry in general to stop spending redundant brain cycles on things like re-inventing the chassis over and over and over and focus more on innovation.
That’s awesome if you are trying to build a network of tens of thousands of servers designed to run a cloud or a single application.
Or is it? Even though Facebook says its first data center built to the Open Compute specifications has achieved a power usage effectiveness (PUE) ratio of 1.07 (compared with an EPA-defined industry best practice of 1.5), could data centers be even greener? By stripping down the hardware while still relying on CPUs that are notorious energy hogs, are Facebook and the other members of the Open Compute Foundation, which include Goldman Sachs and Rackspace, keeping the industry from making a huge leap in energy efficiency?
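For reference, PUE is simply the ratio of total facility power to the power the IT equipment itself consumes, so its theoretical floor is 1.0 (zero overhead for cooling, power distribution and the like). A minimal sketch of the arithmetic, using illustrative round numbers rather than Facebook's actual meter readings:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: everything the facility draws
    (servers plus cooling, power distribution, lighting) divided
    by what the IT equipment alone consumes. 1.0 is the floor."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,070 kW to run 1,000 kW of servers hits
# Facebook's reported 1.07; drawing 1,500 kW would match the
# EPA-defined best-practice benchmark of 1.5.
print(round(pue(1070, 1000), 2))  # 1.07
print(round(pue(1500, 1000), 2))  # 1.5
```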
New architecture for new apps
There are those who think so, but they have their own agendas, and the market has yet to prove their theories out. For example, Barry Evans, the CEO of Calxeda, a company using ARM-based chips to build servers, has said he plans to achieve a PUE ratio of zero with his servers. This month the company launched a server that maxes out at 5 watts at full utilization, about a fourth of what an x86 server can get down to.
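To see why those watt figures matter at webscale, here is a back-of-the-envelope comparison using the numbers quoted above (5 W for a Calxeda node versus roughly 20 W for the leanest x86 server); the fleet size is purely an assumption for illustration:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_kwh(watts_per_node: float, nodes: int) -> float:
    """Yearly energy draw for a fleet running flat out."""
    return watts_per_node * nodes * HOURS_PER_YEAR / 1000

NODES = 10_000  # assumed fleet size, not a figure from either vendor
x86 = annual_kwh(20, NODES)  # ~20 W per node: the x86 low end cited
arm = annual_kwh(5, NODES)   # Calxeda's 5 W maximum

print(f"x86 fleet: {x86:,.0f} kWh/yr")    # 1,752,000 kWh/yr
print(f"ARM fleet: {arm:,.0f} kWh/yr")    # 438,000 kWh/yr
print(f"saved:     {x86 - arm:,.0f} kWh/yr")
```

At ten thousand nodes the four-to-one ratio translates into more than a gigawatt-hour of electricity per year, which is the kind of line item that gets a webscale operator's attention.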
ARM’s dominance in the mobile world gives it an in with the webscale world, especially now that Microsoft has ported its Windows operating system, the quintessential x86-based software, to the architecture. Granted, there will always be legacy applications designed for x86, but for new apps, x86 is no longer the default: building for the web is. And when one builds for the web, which demands not massive single-machine performance but the ability to scale out cheaply (in terms of both hardware and power-consumption costs), a different class of chip architecture may be a better fit. This shift in application development, along with the increasing focus on power consumption, has given ARM an opening to exploit in the data center.
To do so, ARM has participated in an investment round for Calxeda and has granted architecture licenses to firms such as Marvell, Nvidia and AppliedMicro, all of which plan to build chips for data centers. Mike Muller, ARM’s CTO, described the multiplicity of licensees as an advantage and a source of innovation. He may be right. On the mobile side, ARM licensees include Qualcomm, Apple and a wide variety of companies in between, all of which have taken the ARM architecture and tweaked it to make their silicon optimal for a specific function (mobile phones, media-playing tablets), not to mention incredibly power-efficient. There is no reason AppliedMicro can’t take ARM’s architecture and create the equivalent of Apple’s A5 chip for the data center. (Actually, there are reasons, such as ARM chips’ lower clock speeds relative to x86 and the lack of compatibility with older software, but the need for server silicon to be x86-based is no longer one of them.)
But ARM isn’t the only company building silicon with an eye toward cloud and webscale computing. As broadband connectivity shifts computing away from the client and into applications running on the network and in massive data centers, chip makers are responding with more power-efficient chips, as well as chips that take on smaller workloads to handle the many parallel tasks a cloud world requires.
Tilera, a startup out of MIT, is building massively multicore chips that perform more calculations using less energy. Adapteva is trying a similar trick, and it wants to bring its chips into the high-performance computing and phone markets. IBM is investigating how to build chips that mimic the human brain; the company believes that if everything has intelligence, programmers will have to change the way they build software in order to direct thousands or millions of nodes to handle sensor networks or other jobs. But in the data center world, the biggest threat to the established x86 way of doing things will come from ARM.
Legacy and the status quo
In theory, the power problem and the changing nature of application development favor a new architecture for data center silicon. We have pointed out that it is no longer the PC that influences chip design: rather, it is the cloud. However, Intel is not willing to cede its dominant position and has been pushing the power usage of its chips lower; it is also rethinking architecture in products such as its Many Integrated Core (MIC) design, which pairs an x86-based CPU with a 50-core coprocessor chip.
Plus, with the Open Compute Foundation, Intel is lining up powerful webscale users that are ready to back the status quo, or at least a status quo that still involves Intel silicon. For now, the cost of building apps and running hardware that draws on 30 years of accumulated x86 experience is still lower than the cost of relearning how to optimize equipment and software around an entirely new kind of silicon design.
So while companies may be willing to experiment, as Facebook has with Tilera and as Microsoft apparently is with ARM, the alternative silicon camp doesn’t yet have the likes of a Facebook or a Rackspace eager to sing its song.