The Dawn of the Super Server

We’re in the midst of a computing implosion: a re-centralization of resources driven by virtualization, many-core CPUs, GPU computing, flash memory, and high-speed networking. Some have predicted, only half-jokingly, that we will be able to buy a mainframe in a pizza box server that fits in a small fraction of a data center rack. That possibility — and in my opinion, inevitability — means we have a lot to watch over the next few years: what I like to call the coming of the Super Server.

The business drivers for the Super Server span power, management, new workloads, and big data needs. Let’s examine each briefly.

Power

Rising data center power bills, combined with a macro push toward environmental friendliness, have led to a slew of power-optimized servers. Today, the three-year power bill for data center equipment can often equal or exceed the original capital cost. And with cloud architectures spawning mega data centers costing hundreds of millions of dollars, there’s plenty of room to reshape servers for power savings. We’ve already begun to see the impact with the announcements of Windows running on ARM processors, and with emerging server vendors such as Calxeda and SeaMicro focusing on lower-power chips that still deliver data center performance.
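
To make that cost claim concrete, here is a back-of-the-envelope sketch in Python. Every figure in it (purchase price, average draw, PUE, electricity rate) is an illustrative assumption rather than a measured number; the point is simply that a three-year power bill can land in the same range as the hardware itself.

```python
# Back-of-the-envelope comparison of a server's three-year power bill
# against its purchase price. All figures are illustrative assumptions.

server_capex = 3000.0    # assumed purchase price per server, USD
avg_draw_watts = 500.0   # assumed average draw, including fans and PSU losses
pue = 2.0                # assumed facility PUE (cooling and power distribution overhead)
price_per_kwh = 0.12     # assumed electricity price, USD per kWh
hours = 3 * 365 * 24     # three years of continuous operation

facility_kwh = avg_draw_watts / 1000.0 * pue * hours
power_cost = facility_kwh * price_per_kwh

print(f"Three-year power cost: ${power_cost:,.0f}")
print(f"Power cost / capex:    {power_cost / server_capex:.0%}")
```

With these assumed numbers the power bill comes out slightly above the purchase price, which is exactly the dynamic pushing vendors toward power-optimized designs.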

Management

In addition to power, space, and cooling costs, operating expenses are the other major post-purchase data center equipment cost. Data center administrators usually look to minimize the number of servers, or server images, requiring oversight. Today, through virtualization, architects can minimize the number of physical machines they manage while keeping the same number of server instances available. Since virtualization tends to be memory- and storage-hungry, placing more CPU, memory, and storage resources within a single server allows that physical server to host more virtual machines. Administrators can tackle the same, or a greater, number of applications and workloads with less physical equipment: a management and administrative win.
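
As a rough illustration of that consolidation math, the sketch below estimates how many virtual machines a single host might carry. The host specification, per-VM footprint, and 4:1 CPU overcommit ratio are all assumptions chosen for the example, not vendor sizing guidance; it simply shows why memory or storage, rather than CPU, tends to be the binding constraint.

```python
# Rough estimate of how many VMs one physical host can hold.
# Host specs, per-VM footprints, and the overcommit ratio are
# illustrative assumptions.

host = {"vcpus": 32 * 4,   # 32 cores with an assumed 4:1 CPU overcommit
        "ram_gb": 256,     # memory is rarely overcommitted as aggressively
        "disk_gb": 4000}
per_vm = {"vcpus": 2, "ram_gb": 8, "disk_gb": 100}

# The binding constraint is whichever resource runs out first;
# in practice that is usually memory or storage, not CPU.
capacity = {r: host[r] // per_vm[r] for r in per_vm}
limit = min(capacity, key=capacity.get)

print(f"Approximate VM capacity: {capacity[limit]} (limited by {limit})")
```

Doubling the memory and drive bays in that hypothetical host roughly doubles its VM count, which is the Super Server argument in miniature.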

New Workloads and Applications

Our computing habits continue to evolve along with the Internet, and new web businesses spur the need for supporting infrastructure. For example, those running cloud data centers don’t care at all about having CD drives or extra USB ports on their servers, but they do need ways to handle fast and furious updates, millions of video downloads, or voluminous click-tracking. Areas like social networking, online video, advertising, and mobile applications require server architectures optimized for transactions, capacity, and web serving, all while minimizing power and management costs.

Big Data Needs

Our Internet-enabled information age has put us in a race to capture, process, and distill more data than ever. When Hadoop emerged as the dominant open-source implementation of MapReduce, it forced a rethinking of storage infrastructure. Previously, many applications requiring large amounts of data relied on centralized storage in large arrays connected over storage protocols. With the Hadoop Distributed File System, data is intended to sit close to the CPU, on disks within individual servers. So we’ve actually seen a move to pull storage out of the centralized array and back into the server. That has triggered a return of larger servers with more internal drive slots to accommodate the storage capacity for MapReduce operations, a key element of the Super Server.
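
To see why internal drive slots become the scarce resource, here is a rough sizing sketch. The dataset size, cluster size, headroom factor, and per-drive capacity are assumed for illustration; the replication factor of 3 is the HDFS default.

```python
import math

# Rough sizing of internal drive slots needed per server when storage
# moves from a central array into the Hadoop nodes themselves.
# All inputs except the replication factor are illustrative assumptions.

raw_dataset_tb = 200   # assumed dataset size before replication
replication = 3        # HDFS default block replication
headroom = 1.25        # assumed slack for temporary data and growth
servers = 40           # assumed cluster size
drive_tb = 2           # assumed capacity per internal drive

total_tb = raw_dataset_tb * replication * headroom
per_server_tb = total_tb / servers
drive_slots = math.ceil(per_server_tb / drive_tb)

print(f"Total capacity needed: {total_tb:.0f} TB")
print(f"Per server: {per_server_tb:.1f} TB -> {drive_slots} drive slots")
```

Under these assumptions, each node needs roughly ten internal drives just to hold its share of the data, which is far more than the one or two bays of a typical thin web server.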

Want to learn more about big data and its impact on infrastructure? Be sure to check out Structure Big Data, March 23, 2011, in New York City.

Gary Orenstein is host of The Cloud Computing Show.
