

As everyone knows, you get what you pay for. That maxim certainly holds true for Internet infrastructure, especially when it comes to servers. Over the past few years there has been an explosion of low-cost appliance servers, also known as pizza box servers, and they now account for a formidable portion of the Internet's infrastructure. And though cheap to buy, they are turning out to be power hogs.

“These servers are cheap to buy but consume a lot of energy and their utilization is pretty low,” said Jonathan Koomey, project scientist at Lawrence Berkeley National Laboratory and consulting professor at Stanford University, who recently conducted a study on the power requirements for servers. “The utilization is below 20 percent and we really need to focus on virtualization to get more from these boxes.”

According to his estimates, volume servers, or the low-end devices that include the pizza box servers, consumed over 50.5 billion kilowatt-hours (kWh) in 2005, up from 19.7 billion kWh in 2000. That number surely must have increased by now. From 1996 through 2006, sales of volume servers jumped from 1.41 million units to 7.282 million units, according to data collected by International Data Corp. (IDC), a market research firm.
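Those consumption figures imply a steep growth rate. A quick back-of-envelope check, using only the numbers cited above, can be sketched as:

```python
# Back-of-envelope check on the volume-server energy figures cited above.
kwh_2000 = 19.7e9   # volume-server consumption in 2000, kWh (from the article)
kwh_2005 = 50.5e9   # volume-server consumption in 2005, kWh (from the article)

# Implied compound annual growth rate over the five-year span.
cagr = (kwh_2005 / kwh_2000) ** (1 / 5) - 1
print(f"Implied annual growth: {cagr:.1%}")   # roughly 21% per year
```

Sustained growth of that magnitude, compounding yearly, is why the power bill catches up with the cheap hardware.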

“Some of these (pizza box servers) throw up a lot of heat and are power hogs,” said Tim Sullivan, Chief Technology Officer of Internap, a data center and CDN provider, during a call with the company’s management earlier today. He said the company is asking clients to make better use of data center space by deploying fewer of these pizza boxes in a more efficient manner.

One reason these pizza box servers are inefficient is that they’re being made to do tasks for which they were not built. Back in the late 1990s, the form factor became popular with large corporations, and there was plenty of space (and power) in the data centers. However, as web infrastructure needs have grown, so has the number of servers. Even tiny startups are buying 1,000 of these boxes just to stay in business.

As their numbers increase, and the problems mount, it makes me wonder if we’ll soon see the pendulum swing to the “big iron.” Thoughts, anyone?


  1. Thanks for tackling this issue Om. 20% utilization is quite poor and we can do better. I hope you keep your eye on this. I know Google is looking into it. Being more energy efficient should save money, too.

  2. Nah, we’re not headed toward big iron — we’re headed to more efficient blades. HP’s current C-Class blade servers already draw much less power than the previous generation, and the company’s Lights Out data center concept, shown at the World Design Congress in October, points to a way to drive down the energy consumption and costs of modular servers without sacrificing the flexibility that has made them so appealing so far. Big iron isn’t coming back any time soon.

    C-Class BladeSystem: http://www.hp.com/hpinfo/newsroom/press/2007/071112a.html

    Lights Out Data Center:
    http://www.eweek.com/article2/0,1895,1995884,00.asp

  3. No way. BitTorrent, Amazon Web Services, and newer forms of virtualization. The future is the grid. We will all just use computers as a utility, paying for what we consume from the cloud. You know that, Om.
    But the post will work well to get comments like this.

  4. I don’t get it …

    “Even tiny startups are beginning to buy 1,000 of these boxes to just stay in business.”

    Isn’t that some heavyweight exaggeration there? C’mon, this is a startup that is “Building a new open global search engine” and building a cluster to document the entire web.

    Kind of a silly statement

  5. As recently as two years ago (the last time I undertook a detailed analysis), blade servers weren’t much more efficient than individual 1U boxes – in either power or space. And they were significantly more expensive.

    It’s similar to going with DC power supplies: it saves about 20% in power usage depending on the vendor, but at the three data centers I talked to that could actually provide it, DC power was 50-100% more expensive on a watt-for-watt basis versus AC.

    If power is a growth constraint, AMD’s high efficiency processors help, but the higher up-front costs almost outweigh the power savings over what we used as the typical life of a server.

    Also, it’s worth keeping in mind that an idle server uses significantly fewer watts than a server at full utilization, especially components like CPUs and hard drives (the big power hogs). Software power management is actually pretty good these days if correctly configured.
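    The idle-versus-loaded point can be made concrete with a simple linear power model; the wattages and 20% utilization below are assumed, illustrative figures, not measurements:

```python
def avg_power_w(idle_w, peak_w, utilization):
    """Linear power model: draw scales from idle to peak with utilization."""
    return idle_w + (peak_w - idle_w) * utilization

# Assumed figures for a typical 1U pizza box of the era.
avg = avg_power_w(idle_w=150, peak_w=250, utilization=0.20)
annual_kwh = avg * 8760 / 1000   # hours per year, converted to kWh
print(avg, round(annual_kwh, 1))   # 170.0 W, about 1489.2 kWh per year
```

    Note how little the 20% utilization buys you: the box still burns most of its idle floor of 150 W around the clock, which is the argument for consolidating work onto fewer machines via virtualization.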

  6. I guess it would make a huge difference, but who has the money to pay for the expensive equipment?

    Dale
    http://dzrbenson.com/blog/

  7. Phil Windley has a great write-up that he posted on ZDNet.com today. Here’s a link to his blog about DC power in the datacenter.

    http://www.windley.com/archives/2007/12/dc_power_in_datacenters.shtml

    I’ll say what I said there, though: I agree with Phil’s article that there is not enough demand for DC in the datacenter. Companies just aren’t demanding it right now, whereas they should be. It saves trees and dollars.

    Two ways to address the issue that I’ve seen here at work are higher-quality VRMs (voltage regulators) and higher-efficiency power supplies. Most vendors’ power supplies (PWS) range between 70-80% efficiency. There are motherboards and servers out there that use higher-quality VRMs AND highly efficient PWSs. Here is an AnandTech review of one that achieves over 90% efficiency and cuts the pizza-box footprint in half:

    http://www.anandtech.com/showdoc.aspx?i=2997

    Blade servers are also being addressed by many companies, as pointed out earlier, and reports that I have show that a 93%-efficient PWS with 10 blades can save 1,051 kWh per year and over $4,700 over three years, for each blade server with 10 blades. Now that’s hard $$ that a company will take to the bank if it switches to a row of blade servers in the datacenter with that kind of savings.
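    The power-supply arithmetic in the comment above can be sketched as follows; the 500 W DC-side load and the 75% baseline efficiency are assumed values for illustration, not figures from the cited reports:

```python
def annual_wall_energy_kwh(dc_load_w, psu_efficiency, hours=8760):
    """kWh drawn from the wall per year for a given DC-side load and PSU efficiency."""
    return dc_load_w / psu_efficiency * hours / 1000.0

# Assumed DC-side load of 500 W; stock vs. upgraded PSU efficiencies.
baseline = annual_wall_energy_kwh(500, 0.75)   # ~5,840 kWh/yr from the wall
upgraded = annual_wall_energy_kwh(500, 0.93)   # ~4,710 kWh/yr from the wall
saved = baseline - upgraded                    # ~1,130 kWh/yr saved
```

    Under these assumed numbers the savings land in the same ballpark as the figure cited in the comment; the key point is that every watt a PSU wastes is paid for twice, once at the plug and again in cooling.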

  8. @ Ward. Actually this is becoming commonplace for most web companies, especially ones that want to offer services to millions of people. What I wanted to point out with this post was that computing is running into issues like power consumption.

    I think this is going to become more of an issue going forward as we move stuff to the cloud.

  9. @ Matt Terenzio

    That is what I was trying to say: if the grid is the future, the power issue is something we might want to think about “harder.” Or did I misunderstand your comment?

  10. Widespread installation of blades and pizza boxes in most outsourced data centers is not doable. The reason is that you can’t cool them with air (assuming a facility with 1,000 cabinets of capacity, each one filled with blades or pizza boxes and consuming 30 kW per rack/cabinet).

    The limits of air cooling hit diminishing returns at about 250 watts per square foot. Anything past that is going to require an alternative method of cooling. It may be chilled water, similar to the mainframes of the past, or some other liquid-based design. Of course there are scenarios where these high-density cabinets run fine in certain data centers, but they are very isolated and not the standard configuration of every cabinet in the facility.

    When companies like Amazon and Google use outsourced facilities and install cabinets that consume 10 kW each, they typically buy about 5x more space than they need, because that is the only way to get that much power and the associated cooling capacity in a fixed-resource environment like a data center.

    The problem doesn’t go away with a grid infrastructure; it just shifts from being the responsibility of the customer (the one purchasing the grid service) to the operator of the grid platform or their data center vendor. So while the problem many companies face today may be solved by closing their own data centers and outsourcing their computing requirements to a grid provider, that doesn’t make the problem disappear; it just compounds it for someone else. Unless I’m mistaken, you can’t change the fundamentals of physics, or more specifically in this case, you can’t force air to cool more than it is physically capable of, so you are forced to use alternative techniques and strategies. Hence the talk of bringing chilled water back to the data center floor, which is unthinkable to most internet people but was common practice in the past.
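    The density mismatch described above is easy to quantify. The 30-square-foot gross footprint per cabinet below is an assumed figure (rack plus aisle and clearance space); the other numbers come from the comment itself:

```python
AIR_LIMIT_W_PER_SQFT = 250    # diminishing-returns threshold cited in the comment
rack_load_w = 30_000          # fully loaded blade/pizza-box rack from the comment
gross_sqft_per_rack = 30      # assumed footprint incl. aisles and clearance

density = rack_load_w / gross_sqft_per_rack               # 1,000 W/sq ft demanded
air_cooled_w = AIR_LIMIT_W_PER_SQFT * gross_sqft_per_rack # 7,500 W air can support
space_factor = rack_load_w / air_cooled_w                 # 4x more floor space needed
```

    Under these assumptions a 30 kW rack demands four times the power density that air cooling can deliver, which lines up with the roughly 5x over-buying of space the commenter describes.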
