
Summary:

Both mobile and high-performance computing are placing huge power-efficiency and performance demands on chips, but the real $64,000 question is how long until such extreme computing use cases hit the server mainstream. Asked another way, how long until Amazon adopts ARM-based servers?

Servers? We don't need no stinkin' servers!

Both mobile and high-performance computing are placing huge power-efficiency and performance demands on chips, but the $64,000 question is how long until such extreme computing use cases hit the server mainstream. Asked another way: how long until Amazon Web Services adopts ARM-based servers?

Or perhaps it isn’t ARM-based servers, but a variation on an Intel chip that takes its architecture from some of the more innovative and energy-efficient silicon out today. For example, Adapteva, a startup I profiled back in May, released a 64-core chip on Monday that can deliver 70 gigaflops of performance per watt. If you don’t speak gigaflops, that’s okay: the company has basically done what Intel and certain countries have deemed impossible with the current generation of silicon.

The European Union, in its quest for an exascale supercomputer, has set a target of 50 gigaflops per watt (a target Intel also considers workable). In conversations I’ve had with folks who design supercomputers, the thinking is that a conventional x86-based machine at that scale would require the equivalent of a power plant or two to run. That estimate includes all the networking and other trimmings, but the bottom line is that Adapteva’s chips deliver more flops per watt, and that’s a good thing.
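
For a rough sense of what those figures mean, here’s a back-of-the-envelope sketch (assuming, just for illustration, that the gigaflops-per-watt numbers apply to the compute silicon alone, before networking, memory and cooling are counted):

```python
# Back-of-the-envelope: compute-only power draw of an exaflop machine
# at the efficiency levels mentioned above. Deliberately ignores
# networking, memory, cooling and other overhead.

EXAFLOP = 1e18  # floating-point operations per second

def compute_power_megawatts(gflops_per_watt):
    """Megawatts needed to sustain one exaflop at a given efficiency."""
    flops_per_watt = gflops_per_watt * 1e9
    return EXAFLOP / flops_per_watt / 1e6  # watts -> megawatts

print(compute_power_megawatts(50))  # EU exascale target: 20.0 MW
print(compute_power_megawatts(70))  # Adapteva's claimed efficiency: ~14.3 MW
```

Hitting the 50 gigaflops-per-watt target would put just the compute portion of an exascale machine around 20 megawatts; today’s x86 server parts are nowhere near that efficient, which is where the power-plant-or-two estimate comes from.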

It’s not just supercomputers though. Adapteva’s CEO Andreas Olofsson told me the company is only targeting computing extremes such as supercomputing and mobile phones because that’s where the power efficiency pain point is today. Because mobile phones run on batteries, and no one wants a smartphone that dies after two hours, vendors using ARM’s power-efficient architecture have dominated the mobile sector. When Microsoft adapted Windows to run on ARM, it spoke volumes about the need for power efficiency. Windows is one of the most x86-oriented pieces of software out there.

These shifts in usage profiles and the high demand for compute are creating opportunities for companies like Adapteva, so it’s not too far-fetched to wonder how long until that pain point hits conventional servers.

I often cover companies that are hoping the combination of monolithic applications and a desire to reduce power consumption means webscale and cloud vendors will embrace a new architecture. Companies such as Tilera, SeaMicro, Adapteva, Calxeda and others are all betting the next gear Facebook or Amazon buys will be their hardware or contain their chips.

However, even Facebook, whose state-of-the-art data center is optimized down to the server level for energy efficiency, challenged the way servers and data centers are built without touching the silicon itself. So, clearly, the webscale world isn’t champing at the bit to replace the x86-based servers its applications run on. SeaMicro has even shown charts demonstrating that the CPU accounts for only about a third of the power associated with running a server, which means there’s still plenty of fat to trim elsewhere. Of course, SeaMicro is building a server that trims that non-CPU fat while running Intel’s Atom chips.
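
SeaMicro’s one-third figure is worth a quick sanity check. Here’s a minimal sketch, taking that share as a given, of why the non-CPU fat matters so much:

```python
# Rough sketch of SeaMicro's argument: if the CPU draws only about a
# third of a server's total power, even dramatic gains in CPU
# efficiency leave most of the power bill untouched.

CPU_SHARE = 1 / 3  # CPU's share of total server power (figure cited above)

def total_power_saving(cpu_power_reduction):
    """Fraction of total server power saved by a CPU-only reduction."""
    return CPU_SHARE * cpu_power_reduction

print(total_power_saving(0.5))  # halve CPU power -> ~17% total saving
print(total_power_saving(1.0))  # zero out CPU power -> ~33% ceiling
```

That roughly 33 percent ceiling is exactly why SeaMicro goes after the rest of the server while sticking with Intel’s Atom chips for the compute.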

Still, global demand for energy and the supply we currently have are reaching a point where it’s safe to conclude that power consumption will become a greater cost of, and constraint on, operating data centers. And at some point, building in cooler climates, hot- and cold-aisle containment, and even newly designed servers won’t be enough if the silicon itself runs too hot.

So the question isn’t if, but when, server companies abandon the PC-style architecture. Perhaps Intel, AMD or Via will continue to tweak x86 silicon until it can perform more calculations using less power, or perhaps it will be time for Amazon or Microsoft Azure to go with ARM, Tilera — or even Adapteva.

  1. You can get around 900 GPU cores at 42 cents per hour on Amazon EC2 today. The challenge is that standard NoSQL languages like XQuery will not run on them.

    1. How long do you think that will last? Clearly it doesn’t make sense to recompile or reprogram for new architectures without knowing what will stick around, but after that point, surely we’ll see more applications tuned to different cores if there’s a huge efficiency advantage.

  2. Get ready for quantum computing. Several will go into places like Amazon in 2012. Mind-boggling power.

    1. I’ve been getting ready for quantum computing for years. Still waiting :)

    2. Quantum computing is not the answer to all problems. In fact, it’s a fast answer to very few problems.

  3. As you point out, something has to give, and I believe we will see two types of solutions take shape. One will be based on ARM (or other) energy-efficient CPUs that handle general cloud computing requirements well, doing things like web searches, database management, and other Amazon/Facebook/Google functions. The other will be adapted to high-performance computing requirements for complex tasks like video analytics or other mathematically intensive operations, and there we’ll see architectures like Adapteva or TI DSPs where the critical metric is Gflops/W. Each major cloud vendor will then mix and match solutions to get the right balance of computational requirements at the lowest power levels possible for their needs. I just blogged on a related topic the other day (http://tinyurl.com/3car3gx) and discussed some of the misconceptions about the difference between high-performance computing and broader cloud computing.


Comments have been disabled for this post