We are moving from the Information Age to the Insight Age, as Parthasarathy Ranganathan, a distinguished technologist at HP Labs (s HPQ), told me. As part of that shift, we need a computing architecture that can handle both the storage of data and the heavy processing required to analyze that data, and we need to do it all without hooking a power plant up to every data center.
The shift is a move from creating scads of information in a format that can be stored cheaply to being able to process and analyze that information cheaply as well (all the while adding new layers of data thanks to a proliferation of devices and networks). The challenge is that under the current computing paradigm, adding more processing is problematic, both because it's becoming more difficult to cram more transistors onto a chip and because those chips and their surrounding servers are sucking up an increasing amount of power.
With Power Consumption, The Question is, “How Low Can You Go?”
“Data is expanding faster than Moore’s Law and that’s a compelling problem that we’re trying to solve,” Ranganathan said. It’s apparently a problem that Kirk Skaugen, vice president and general manager of Intel’s (s INTC) Data Center Group, is thinking about too. Speaking at Interop last week, Skaugen said there were 150 exabytes of traffic on the Internet in 2009 and 245 exabytes in 2010, and that the Internet could hit 1,000 exabytes of traffic by 2015 thanks to more than one billion people joining the web.
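As a back-of-envelope check on those figures (and nothing more; the 2015 number is Intel's projection, and the rates below are simply what the cited figures imply):

```python
# Skaugen's Interop figures, in exabytes of Internet traffic per year.
# 2009 and 2010 are the cited measurements; 1,000 EB in 2015 is a projection.
eb_2009, eb_2010, eb_2015 = 150, 245, 1000

# Year-over-year growth between the two measured points: roughly 63 percent.
yoy = eb_2010 / eb_2009 - 1

# Compound annual growth rate the 2015 projection implies from 2010:
# roughly 32 percent a year.
cagr = (eb_2015 / eb_2010) ** (1 / 5) - 1

print(f"2009-2010 growth: {yoy:.0%}")
print(f"implied 2010-2015 CAGR: {cagr:.0%}")
```

Interestingly, hitting 1,000 exabytes by 2015 actually implies growth slowing from the 2009–2010 pace; compounding 63 percent a year for five years would land closer to 2,800 exabytes.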
That’s a lot of bandwidth. But it’s also a lot of data and a lot of compute demand. Listening to Skaugen’s speech, it appears Intel’s primary job will be to convince the people who build the machines that process those exabytes of data that their machines should run newer, more energy-efficient Intel processors. But is Intel’s architecture (and an upsell to its tri-gate 3-D transistors) the right chip for computing and big data’s future?
As I noted before, Intel’s much-vaunted 3-D transistor advancement is cool, but it only gets us so far in cramming more transistors onto a chip and reducing the energy they need. For example, a 22-nanometer chip using the 3-D transistor structure consumes about 50 percent less energy than the current generation of Intel chips, and less than a 22-nanometer chip built on the older planar architecture would (squeezing in more transistors also helps reduce power consumption). But when we’re talking about adding a billion more people to the web, or transitioning to the next generation of supercomputing, a 50 percent reduction in energy consumption on the CPU is only going to get us so far.
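To see why a one-time 50 percent cut "only gets us so far," consider a purely illustrative calculation. Assume compute demand compounds at roughly 60 percent a year, in the ballpark of the traffic growth Skaugen cited; that rate is an assumption, not a figure from Intel or HP:

```python
import math

growth = 0.60           # assumed annual growth in compute demand (illustrative)
efficiency_gain = 0.50  # one-time 50 percent cut in energy per operation

# Years until total energy use is back where it started, solving
# (1 + growth)^t * (1 - efficiency_gain) = 1 for t.
years = math.log(1 / (1 - efficiency_gain)) / math.log(1 + growth)
print(f"a 50% per-operation cut is absorbed in about {years:.1f} years")
```

Under those assumptions, the entire gain is eaten in about a year and a half, which is the crux of the argument for rethinking the architecture rather than just shrinking the transistor.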
For example, scientists at the Department of Defense estimate (GigaOM Pro sub. req’d) that building the next generation of supercomputers on current architectures could require as many as two power plants to serve every exascale computer. Reducing that to one is great, but it’s not good enough. This is why the folks at ARM (s armh) think they have an opportunity, and why the use of GPUs in high-performance computing is on the rise.
A New Architecture for a New Era
But there is more to this trend than merely eking out more performance for less power. There is also a more subtle shift toward matching processors to workloads, in an acknowledgment that generic x86 CPUs might not be the best solution for all workloads, especially in a cloud world. For example, startups are already trying to build optimized gear for companies such as Facebook or Google (s goog), which can then run their own software on top of these optimized platforms.
Don’t believe this is coming? Take a look at Facebook’s Open Compute effort. It kept the same x86 processors made by Intel and AMD (s amd), but Facebook was willing to question everything about the architecture of the servers and data centers those chips were housed in. And that willingness to question everything is occurring at firms all over the world that are dealing with massive compute needs, a trend Intel can’t help but find worrying and folks such as Ranganathan at HP see as their big chance.
“Historically there is evidence that each killer app has an influence on the architectures that are preceded by the special purpose alternatives,” Ranganathan said. “So asking what instruction set for the processor, or if you want powerful or wimpy processors or special purpose processors are all legitimate architectural questions that we need to answer.”
HP’s answer is its concept of nanostores: chips that tie memory and processor together using a completely new kind of circuit called a memristor. The basic premise for HP is that 80 percent of the energy inside a data center is tied to moving data from memory to the processor and back again. We’re already seeing the trend of moving memory closer to the processor to speed up computing — that’s what the addition of flash inside the data center is about.
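HP's 80 percent figure makes the appeal of nanostores easy to quantify with Amdahl-style arithmetic. The 10x reduction factor below is a made-up number for illustration, not an HP claim:

```python
def total_energy(movement_fraction, movement_reduction):
    """Relative energy after cutting data-movement energy by a factor.

    movement_fraction: share of energy spent moving data (HP says ~0.8).
    movement_reduction: assumed factor by which co-locating memory and
    processor cuts that movement energy (illustrative only).
    """
    compute = 1 - movement_fraction
    movement = movement_fraction / movement_reduction
    return compute + movement

# If 80 percent of energy is data movement and co-location cut that
# portion by 10x, total energy falls to 0.2 + 0.08 = 28% of baseline.
print(total_energy(0.8, 10))
```

The same arithmetic shows the flip side: if data movement were only a small share of the total, no amount of co-location would move the needle much, which is why the 80 percent premise does so much work in HP's argument.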
But instead of next-door neighbors inside a box, HP essentially wants processing and memory married and in the same bed. HP won’t give a timeline for when this vision will become reality, but it announced a manufacturing partnership with Hynix in 2010 to build such chips.
So Where’s Intel in This Architecture?
So when Skaugen gets up at Interop to push Intel’s 3-D transistors and the incredible inflows of data coming online, he’s also making a pitch for Intel’s relevancy, because big data processing is one of the areas where a general-purpose CPU makes a lot of sense. Folks may adopt GPUs for better supercomputing or data visualization, and ARM may keep its upward momentum into more and more mobile computers or win some server designs at webscale businesses that see a use case. But just crunching the numbers associated with big data could become Intel’s game to lose.
There are plenty of folks hoping Intel will lose it (or at least that they stand to gain): not just Ranganathan at HP, but also the guys building 100-core chips at Tilera, and those hoping the mathematical affinities of digital signal processors might make them a good choice for data. It’s a topic I can’t wait to explore with Ranganathan, folks from Intel, Tilera and others at our Structure 2011 event on June 22 and 23. Because just as the steam engines and trains of the Industrial Age had to give way to the tools of the Information Age, the PCs and servers used today will become a footnote as we pass into the Insight Age.