
Summary:

Facebook has shared the details of its server and data center design, taking its commitment to openness to a new level for the industry by sharing its infrastructure secrets like it has shared its software code. And that data center? It has a PUE of 1.07.


Facebook has shared the nitty-gritty details of its server and data center design, taking its commitment to openness to a new level for the industry by sharing its infrastructure secrets much like it has shared its software code. The effort by the social network will bring web scale computing to the masses and is a boon for AMD and Intel and the x86 architecture. Sorry ARM.

At a news event today Facebook is expected to release a server design that minimizes power consumption and cost while delivering the right compute performance for the variety of workloads Facebook runs. Unlike Google, which is famous for building its own hardware and keeping its infrastructure advantage close to its vest, Facebook is sharing its server design with the world. Much of the approach mirrors the scaled-down ethos of massive hardware buyers, which calls for stripped-down boxes without redundant power supplies, with hot-swappable drives to make repairs and upgrades easier.

But Facebook has added some innovations, such as larger fans (the entire server is 50 percent taller than a traditional 1U box) and fewer of them (a design tweak introduced by Rackable, which is now SGI). Those fans account for 2 percent to 4 percent of energy consumption per server, compared to an industry average of 10 percent to 20 percent. Ready for more? Here are more key details on the server side:

  • The outside is 1.2mm zinc pre-plated, corrosion-resistant steel with no front panel and no ads.
  • The parts snap together: the motherboard snaps into place using a series of mounting holes on the chassis, and the hard drive uses snap-in rails and slides into the drive bay. The unit only has one screw for grounding. It’s like Container Store does cheap servers and someone at Facebook built an entire server in three minutes.
  • Hold onto your chassis, because the server is 1.5U tall, about 50 percent taller than other servers, to make room for larger and more efficient heat sinks.
  • Check out how this scales. It has a reboot on LAN feature, which lets a systems administrator instantly reboot a server by sending specific network instructions.
  • The motherboard speaker is replaced with LED indicators to save power and provide visual indicators of server health.
  • The power supply accepts both AC and DC power, allowing the server to switch to DC backup battery power in the event of a power outage.
  • There are two motherboard flavors, with the Intel board offering two Xeon 5500-series or 5600-series processors, up to 144GB of memory and an Intel 5500 I/O hub chip.
  • AMD fans can choose two 8- or 12-core AMD Magny-Cours CPUs, the AMD SR5650 chipset for I/O, and up to a maximum of 192GB of memory.
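
The reboot-on-LAN feature mentioned above works along the same lines as the familiar Wake-on-LAN mechanism, where a "magic packet" broadcast over the network triggers the machine's NIC. Facebook hasn't published its exact mechanism, so as a hedged illustration, here is a minimal Wake-on-LAN-style sketch in Python (the MAC address in any call would be a placeholder):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed by
    the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet as a UDP datagram on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

At web scale, the appeal is obvious: one systems administrator can bounce thousands of machines without ever walking the aisles.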

But wait! There’s more. Facebook couldn’t just unleash its server plans on the market. The social networking site has also shared its data center designs to help other startups working at web scale build out their infrastructure in a manner that consumes as little power as possible. Yahoo has also shared its data center plans, with special attention going to its environmentally friendly chicken-coop designs, and Microsoft has built out a modular data center concept that allows it to stand up a data center almost anywhere in very little time.

Facebook has combined those approaches at its Prineville, Ore. facility, where it has spent two years developing everything that goes inside its data centers, from the servers to the battery cabinets that back them up, to be as green and cheap as possible. For example, Facebook's designs let it use fewer batteries, and, to illustrate how integrated the whole compute operation is, the house fans and the fans on the servers are coupled together. Motion-sensitive LED lighting is also used inside.

The result is a data center with a power usage effectiveness (PUE) ratio of 1.07. That compares to the EPA-defined industry best practice of 1.5, and to the 1.5 of Facebook’s leased facilities.
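
PUE is simply the ratio of total facility power to the power that reaches the IT equipment, so a back-of-the-envelope comparison (with hypothetical loads, since Facebook hasn't published absolute figures) shows what 1.07 versus 1.5 actually means:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt goes to the servers themselves."""
    return total_facility_kw / it_equipment_kw

# Hypothetical 10 MW IT load: at PUE 1.07 the facility draws 10.7 MW total,
# while at the 1.5 best-practice figure it would draw 15 MW, a difference
# of 4.3 MW of pure overhead (cooling, power conversion, lighting).
```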

Some of the server design decisions allow the equipment to run in steamier environments (the Prineville facility runs at 85°F and 65 percent relative humidity), which in turn lets Facebook rely on evaporative cooling instead of air conditioning. Other innovations are at the building-engineering level, such as using a 277-volt electrical distribution system in place of the standard 208-volt system found in most data centers. This eliminates a major power transformer, reducing the amount of energy lost in conversion. In typical data centers, about 22 to 25 percent of the power coming into the facility is lost in conversions. In Prineville, the rate is 7 percent.
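
Those conversion losses compound, because each transformer or conversion stage passes along only a fraction of its input power. A small sketch, using hypothetical per-stage efficiencies (the article only gives the end-to-end numbers), shows how dropping a stage moves a facility from roughly 25 percent loss toward 7 percent:

```python
from functools import reduce

def delivered_fraction(stage_efficiencies):
    """Fraction of incoming power that survives a chain of conversion
    stages, each with its own efficiency in the range 0..1."""
    return reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)

# Hypothetical chain for a typical 208V data center: utility transformer,
# UPS double conversion, PDU step-down transformer, server power supply.
typical = delivered_fraction([0.97, 0.88, 0.96, 0.92])  # ~75% delivered, ~25% lost
# Distributing at 277V eliminates the PDU transformer stage; with a
# shorter, more efficient chain the loss drops toward Prineville's 7%.
prineville = delivered_fraction([0.97, 0.96])           # ~93% delivered, ~7% lost
```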

In the waste-not-want-not category, Facebook uses the warm exhaust air from the servers to heat incoming outside air when it’s too cold, as well as to heat the offices. In the summer, the data center will spray water on incoming warm air to cool it down. Facebook has also designed its chassis and servers to fit precisely into shipping containers to eliminate waste in transport. The plan is to run those servers as hard as it can, so it doesn’t have to build out more infrastructure.

The social network has shared the server power supply, server chassis, server motherboard, and the server cabinet specifications and CAD files as well as the battery backup cabinet specification and the data center electrical system and mechanical specification. While not every startup needs to operate at webscale, the designs released by Facebook today certainly will give data center operators as well as the vendors in the computing ecosystem something to talk about. Infrastructure nerds, enjoy.

For more on green data centers check out our Green:Net event on April 21 where we’ll have infrastructure gurus from Google and Yahoo talking about their data center strategies.

  1. This is a very interesting piece but it leaves out the other half of the equation: what type of energy Facebook is using. Right now, it’s coal and nuclear.

    1. Since they’re not a power company last time I checked, I’m not sure how your comment is relevant.

    2. I totally agree. If they had only painted the trim on the bike shed door green these ideas would actually be viable.

    3. And what’s wrong with nuclear?

      1. Andrew Macdonald Thursday, April 7, 2011

        @Alex B. Ask Japan.

    2. @Andrew Macdonald Yeah, Prineville is located in a tsunami-prone area, right?

      3. Andrew Macdonald Friday, April 8, 2011

        @Anurag – It’s not about the location, the person simply asked what is wrong with nuclear (nothing about WHERE the plant was).

        I merely pointed out that nuclear has its downsides, as Japan is currently all too aware.

    4. It’s likely hydroelectric, given its location within the country.

    5. … and these servers will use *less* coal and nuclear energy than they otherwise would if Facebook were running off-the-shelf boxes. I’m sure that if there was a local renewable source for a similar price it would have been considered.

  2. “Unlike Google, which is famous for building its own hardware and keeping its infrastructure advantage close to its vest”

    The year 2009 called and begged to differ …

    http://news.cnet.com/8301-1001_3-10209580-92.html

    1. Google did show off its server and periodically offers a glimpse into its data centers, but it’s not exactly transparent and certainly hasn’t delivered anything at this level.

  3. Since they’re not a power company last time I checked.

  4. Interesting move by Facebook, shows it cannot make a profit, though.

  5. Build a server rack in 3 minutes with no tools…AMAZING.

    1. No tools ok, but everything almost done.

  6. Google did release specs on its low-cost servers in the past.

  7. Wow, that’s a nice concept. But does anybody know where Google keeps its servers? From an article I read, they put servers in shipping containers somewhere in the middle of a desert in the USA, and the desert heat generates the power and electricity to run them. Not sure if that’s correct, but that’s what I read.

  8. Of course Novell was heating their offices in Provo back in 1995 without all the publicity seeking fanfare.

  9. Vladimir Rodionov Friday, April 8, 2011

    Making servers and data centers more power-efficient and environmentally friendly is great, of course, but using more optimized software technologies can save you much more. If they increase the performance of all their applications by 100 percent, they can cut the number of servers by 50 percent. And that is a huge saving (and probably costs less than building a new data center to increase capacity).

  10. So are my status updates now gonna be green in color?

  11. Impressive PUE

