
Behind popular web services such as Facebook, Google and Amazon’s AWS are racks and racks of computers serving up millions of pages or providing raw computing power. The use of thousands of servers to deliver one application or act as a pool of computing resources has changed the way that chipmakers and computer vendors are building their products. It has also led to the rise of the mega data center.

Intel estimates that by 2012, up to a quarter of the server chips it sells will go into such mega data centers. Dell, which nearly two years ago created its Data Center Solutions Group to address the needs of customers buying more than 2,000 servers at a time, now says that division is the fourth- or fifth-largest server vendor in the world. In the meantime, suppliers are creating product lines and spending money on R&D to adjust to the needs of these mega data center operators, as those operators are fulfilling an increasing demand for applications and services delivered via the cloud.

The mega data centers running computing clouds are becoming more distinct from both their corporate cousins, which have to run multiple applications, and the high-performance computing systems that combine multiple CPUs with expensive networking equipment. In a webinar held Wednesday, Russ Daniels, CTO of Cloud Strategy Services at Hewlett-Packard, explained some of the differences to one of the company’s customers.

“In HPC and grid computing…we tend to focus on workloads that would be important enough to deserve specialized hardware,” Daniels said. “Cloud computing is the same technological approach of doing work in parallel but done in the context of a commoditized network architecture and hardware.”

In a nod to the shift in computing, HP last year reorganized its high-performance computing and commodity servers designed for mega data centers into its Scalable Computing Initiative. But so far, it’s Dell that’s created a business around building customized servers for each customer using off-the-shelf hardware. Indeed, Dell understands that tiny savings in hardware spread out over thousands of servers mean huge price cuts for customers.

For a data center customer that doesn't need a hot-swappable fan, the $10 saved by building a permanent fan into the server, multiplied across thousands of machines, adds up to real dollars. Instead of discounting its standard servers for large-volume buyers, Dell offers them exactly what they want and still makes money on the sales.
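The arithmetic behind that point is simple but worth seeing at fleet scale. A quick illustrative sketch, using round hypothetical numbers in the spirit of the article's fan example (the $10 figure comes from the article; the server count is assumed):

```python
# Illustrative fleet-savings arithmetic from the fan example.
# The server count below is a hypothetical round number.

def fleet_savings(per_server_saving: float, server_count: int) -> float:
    """Total saved by one small per-unit design change across a fleet."""
    return per_server_saving * server_count

# A $10 saving (permanent fan instead of a swappable one) across a
# hypothetical 10,000-server order:
total = fleet_savings(10.0, 10_000)
print(f"${total:,.0f}")  # $100,000
```

A change too small to matter on a single box becomes a six-figure line item on a large order, which is exactly the economics Dell's custom-server business is built on.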

Jason Waxman, GM of high-density computing in Intel's server systems group, says the company is learning the same lessons, especially when it comes to the cost of powering those data centers. In a conference call on Wednesday to discuss Intel's ties to cloud computing, he compared mega data center owners to a car rental firm, noting that when a consumer buys an automobile, he or she looks for the best individual features, but when Hertz buys a fleet of cars, it wants the set of features that costs the least to operate.

For Intel, that means power savings. Waxman said that since 25 percent of the cost of running one of these mega data centers can be traced to power consumption, Intel is designing motherboards that can be cooled more efficiently, offering software that keeps servers from running too hot and participating in a variety of projects to bring power costs down.
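That 25 percent figure explains why chipmakers chase efficiency so hard: a cut in power draw flows straight through to total operating cost. A back-of-the-envelope sketch, where the power share comes from Waxman's figure and the opex and efficiency numbers are hypothetical:

```python
# Back-of-the-envelope: if power is ~25% of a mega data center's
# operating cost (Waxman's figure), what does an efficiency gain
# save overall? The opex and gain below are hypothetical.

POWER_SHARE = 0.25         # fraction of opex that is power
annual_opex = 100_000_000  # hypothetical: $100M/year to run the site

power_cost = annual_opex * POWER_SHARE
efficiency_gain = 0.10     # hypothetical 10% cut in power draw

saved = power_cost * efficiency_gain
print(f"power bill: ${power_cost:,.0f}, saved: ${saved:,.0f}")
# A 10% power cut lowers total opex by 0.25 * 10% = 2.5%.
```

Under these assumptions, a 10 percent efficiency gain is worth $2.5 million a year to a single facility, which is why operators will pay for cooler-running motherboards and thermal-management software.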

On the chip side, many of these gains have trickled down to all server products and will continue to do so, but if the operators of these mega data centers become too successful at delivering computing and services through the cloud, the pool of customers for HP, Dell, Rackable and IBM may get a lot smaller.

This article also appeared on BusinessWeek.com.

  1. [...] Continues @ http://gigaom.com [...]

  2. Hi Stacey, I am researching to find an excellent speaker on this subject: the shift to more and more cloud-based data centers, their benefits, their risk management and especially the environmental consequences in a future faced by energy shortages and global warming as a result of carbon emissions and heat generated. It's for an Innovation & Thought Leadership Festival called AMPLIFY that I produce and curate in Australia. Can you recommend someone who is not too geeky or technical but an articulate thought leader?

  3. John Harrington Friday, February 20, 2009

    The flip side of the mega centers is the highly dense, compact distributed footprints that are now able to proliferate across enhanced optical bandwidth tracks globally, with more capability on the way.

    http://www.huawei.com/innovations/100g_wdm.do

    Think millions of cheap-to-build-and-own, completely self-contained and secure “data centers in a (relative shoe) box” hanging off redundant 100-gigabit-per-second PoPs. The exchange points, which are the highly energy-intensive, hyper-expensive problem right now, can be modified into very flexible, efficient packages connected by ultra-capacity fiber.

    While some outfits like Google, et al, will be posted up in their massive locations, even Microsoft will be operating out of many smaller boxes linked together, as in their Microsoft Live data center in Northlake, Ill.

    Stacey – I’m sure you are probably a member of the Uptime Institute – but, if you’re not, please dig into the global movement to stop building these energy-hogging refrigerated computer bunkers.

    http://www.uptimeinstitute.org

  4. [...] specialized jobs), but the focus on power is waning. Dessau spoke of a shift happening in the way companies buy servers, where the performance isn’t a function of clock speed, but of storage and I/O capabilities. [...]

  5. Thomas Whitney Friday, February 20, 2009

    Thanks for the article~

    What most appealed to me is the grand size of these mega centers.

    Makes you wonder how they keep digital security in check.

    I was wondering if you wouldn’t mind touching base with your readers about how exactly these large systems manage to secure their data. I was browsing through http://www.justaskgemalto.com and found some interesting information.

    I was just wondering how you might expand on that.

  6. @ Annalie Killian

    Get Chuck Thacker (Microsoft) if possible – excellent speaker on this topic. Yes, he works for a company that often doesn’t get it in a corporate sort of way, but Chuck has a knack for explaining data center issues like few can, and he did grow most of his gray cells before joining Microsoft. I heard him speak on this topic @ Stanford and he was clearly able to explain all the issues at scale in a very entertaining and engaging manner.

    see his bio @ the link below or on Wikipedia
    http://www.microsoft.com/presspass/exec/techfellow/Thacker/default.mspx

  7. [...] According to Intel, nearly 25 percent of the costs associated with running big data centers can be traced back to power consumption. As we reported earlier, much of the power is drawn by pizza box style [...]

  8. [...] a market that’s growing increasingly competitive with Cisco planning a new line of servers dubbed the Unified Computing system, Dell creating a [...]

  9. [...] this particular metric (when tied to the cost of the chips) drives large-volume server purchases, which are becoming a bigger chunk of sales as companies like Google or Facebook build out their [...]


Comments have been disabled for this post