It has become clear that ARM is invading the data center, as chips built on its designs grow powerful enough for enterprise workloads and as the workloads and economics of webscale computing make the ARM architecture more attractive. But using ARM cores also changes the cost of designing a new chip, and frees the non-CPU elements on the chip from being dictated by a specific vendor.
Both of these trends are driving webscale companies to discuss making custom CPU cores for their specific workloads, and are allowing startups to try to break into the world of interconnect fabrics and memory management that used to be locked to a specific x86 core. Right now big web companies like Google and Facebook design and build their own gear, but soon they may want a few chip designers on hand as well.
With ARM, custom server chips are cheaper and faster
Andrew Feldman, GM and corporate VP at AMD, explained this idea in a series of conversations with me over the last few weeks, in which he estimated that one could build an entirely custom chip using the ARM architecture in about 18 months for about $30 million. He compared this to the three- to four-year time frame and $300 million to $400 million in development costs required to build an x86-based server chip.
He wrote in an email:
This vast change in the cost and time to market opens the door for large CPU consumers (mega data center players) to collaborate with ARM CPU vendors, say by paying part of the cost of development, in return for including custom IP that advantages the mega data center owners' software/offering. There are conversations underway on this topic with nearly every major mega data center player in the world. The mega data center owners are building warehouse-scale computers. It is not surprising that they have ideas on custom IP that would advantage their own software and would like that IP embedded into a CPU; ARM makes this feasible by bringing down the cost and development time of CPU creation.
This builds on Feldman's talk at Structure about how the shift in personal computing habits from PCs to less powerful mobile phones and tablets has changed the infrastructure needs of web companies, which now do most of the computing for these devices in the cloud.
Yet custom chips aren't limited to improving the computing economics at webscale vendors. Feldman said there is a company using custom-made chips on boards to mine Bitcoins, a rumor I had heard at our Structure event but couldn't confirm. I'm not sure whether those custom chips use ARM cores, however.
Either way, the point is the same: building a custom chip that can efficiently mine Bitcoins is well worth the cost of developing such a processor.
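For readers wondering why mining rewards custom silicon: Bitcoin mining is a brute-force search for a nonce whose double-SHA-256 hash of the block header falls below a target. That fixed-function inner loop is exactly what dedicated hash pipelines execute far faster per watt than a general-purpose CPU. A minimal sketch of the loop, using a made-up header and a toy difficulty rather than real Bitcoin parameters:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin's proof of work hashes the block header twice with SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until the hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Toy example: a hypothetical header and a very low difficulty,
# so this finishes in roughly tens of thousands of iterations
nonce = mine(b"example-block-header", 16)
```

Because the workload is nothing but this one loop, a chip that replaces the general-purpose CPU pipeline with banks of hardwired SHA-256 units gains orders of magnitude in hashes per joule, which is why the economics Feldman describes hold.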
But wait, there’s more!
So the Facebooks, Amazons and Googles of the world may soon make their own chips so they can take full control of their computing economics. It may already be happening: back in 2010, Google bought a company called Agnilux that reportedly was trying to build a power-efficient server chip, and details about what Google did with that company are scant. Maybe it's designing its own server silicon already, much like Apple designs its own iPhone and iPad processors.
But the use of ARM cores for the CPU also has a secondary effect for startups and webscale vendors. Today's CPUs are generally composed of the CPU core plus all the related IP that governs how the CPU gets and sends bits; things like I/O, memory controllers, PCI Express interconnects and other elements that most people never think about are also on a typical x86 chip. Feldman calls these other elements Bucket 2 (the core is Bucket 1).
Because Intel and AMD were the only x86 game in town, we each developed our own cores (Bucket 1) and each developed our own Bucket 2 items as well. There has been no ecosystem, and little innovation, in Bucket 2. We have limped along with incremental improvements from one generation to the next. It hasn't made sense to start a company to do an on-chip fabric (to more intelligently tie cores together), or a better memory controller, or an application accelerator, because you only have two prospective customers, Intel and AMD, both working on competing projects. (Who wants to start a company with two potential customers?)
But in the ARM world things are different. Because any number of players can license an ARM core, each one is looking for points of differentiation outside the core (some with architecture licenses are looking to tweak the core itself) and can make chips with better I/O or specific workload accelerators. An example here is Calxeda, which is using an ARM core in highly dense servers but has also built a custom interconnect to send information rapidly between its hundreds of ARM cores.
So when the mega data centers look at the opportunities presented by ARM, it’s not as simple as buying a piece of silicon from Marvell or Applied Micro, or a Calxeda box from HP. According to Feldman, web giants are looking at co-developing ARM-based chips that will take advantage of the greater levels of customization offered outside of the CPU so they can optimize for their own applications’ needs.
This is a huge shift for the industry, with big implications for players as diverse as Intel, the server makers and corporate IT buyers, who may suddenly face higher-cost computing than the web and cloud giants, making the move toward outsourcing IT more practical.