Updated. Nvidia (s nvda), the graphics chipmaker, announced Project Denver on Wednesday, a plan to bring the same chip architecture found in cell phones to high-performance-computing servers. The move broadens Nvidia’s relationship with ARM (s armh), expands its market and puts Nvidia in even more direct competition with Intel (s intc), as well as with startups such as SeaMicro and Calxeda (formerly known as Smooth-Stone). It’s also an indication that energy efficiency is becoming more important in the data center, and another blow for those pushing an x86-centric view of the world.
At the Consumer Electronics Show (CES) in Las Vegas, Nvidia said it would build an ARM-based processor for servers that it would integrate with a GPU, but didn’t offer details about how or when that chip would make it to market. The graphics chipmaker has been eyeing the server market for a while, and has sort of snuck in with GPUs handling high-performance computing jobs such as rendering and data crunching. Update: This chip is aimed primarily at the supercomputing market, but the world of high performance computing has a large influence on web and cloud infrastructure. It would also mark the first use of ARM chips in the supercomputer world. I wrote about Nvidia’s push for the data center back in 2008 when the company announced its CUDA software tools, which let developers write general-purpose programs that run on Nvidia’s GPU architecture rather than on Intel’s x86 chips. I said:
Remember when CPU clock speeds were the driving force behind new computers? Going from a 500 MHz to a 1 GHz and then a 2 GHz machine meant noticeable improvements. Then chip vendors started adding more cores. But for the style of computing consumers use today, it’s not about the CPU anymore.
It’s all about graphics processors. Thanks to today’s visually intensive style of computing, a good GPU can improve the user experience far more than a fast CPU can. In the data center, certain tasks are moving from commodity CPU boxes to GPUs, meaning that over the next year or two, more GPUs will be sold for corporate computing use.
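To make the GPU-offload idea concrete, here’s a minimal, hypothetical sketch of the kind of data-parallel code CUDA enables — one GPU thread per array element instead of one CPU loop. (This example isn’t from the article; the unified-memory calls come from CUDA releases later than the 2008 toolkit mentioned above, used here for brevity.)

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements -- the data-parallel
// pattern CUDA exposes to general-purpose code.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory keeps the host-side code short;
    // earlier toolkits required explicit cudaMemcpy transfers.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The point of the pattern is that the same kernel scales across however many GPU cores the hardware offers — which is why throughput-bound data center jobs, not single-threaded ones, are the natural fit.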
However, since 2008, rising electric bills and the growth of webscale data centers have shifted demand toward lower-power chips rather than ever-faster processors, leading companies from Facebook to Microsoft (s msft) to evaluate low-energy options such as Intel’s Atom chips or even ARM-based servers. Startups such as SeaMicro and Calxeda are also targeting the market with Atom- and ARM-based servers, respectively.
Those startups are bringing a different architecture to the data center, with each offering a cluster of low-power processors plus its own custom chip to manage communication among the multiple CPUs. It’s unclear whether Nvidia has a partner to build the server, or whether it plans to adapt the server architecture to run its Project Denver chip, but the news here is more confirmation that the silicon business is getting far more interesting as the world demands more compute at lower power levels.
Intel’s x86 crown is looking played out and upstarts are moving in. Using a widely licensed architecture from ARM opens the gate for more players to build chips, which could lead to different hardware architectures for different types of compute jobs. That has repercussions from the silicon all the way up to the way software is optimized to run on that silicon. Today we are living in a commodity compute world, but for how much longer?
Related content from GigaOM Pro (subscription req’d):
- Supercomputers and the Search for the Exascale Grail
- Pushing Processors Past Moore’s Law
- Think Converged Infrastructure Means Lock-In? Think Again.