


After launching an open server and data center design in April, Facebook is prepping for version 2.0 of its hardware, and huge server buyers are playing along. Within three months, a foundation supporting the program will be created, according to Frank Frankovsky, Facebook’s director of hardware and supply chain. From Rackspace to major financial services companies, big hardware buyers are getting into Open Compute: testing the hardware and suggesting changes to help fuel its adoption.

In April, the social network teamed with big names from the hardware ecosystem and showed off two years of work developing a greener, leaner data center and server design for its Prineville, Ore., facility. But a week and a half ago, it invited engineers and folks from large IT shops to its campus to figure out how to improve that design and suggest modifications, including doubling the compute in a server and introducing an open storage box.

The changes are outlined in a Facebook posting from Wednesday, but I also discussed the meeting with Frankovsky at our Structure 2011 event, as well as with Bret Piatt, director of corporate development at Rackspace, who presented some of the company's findings on the original Open Compute server. While the testing process is still incomplete for Rackspace, the company says it's encouraged that the next version of the server will have broader applicability, which was a complaint soon after the original launched.

“The Facebook team presented the 2.0 design for a broader set of use cases,” Piatt said. “They designed for a cloud use case, but we have public and private cloud, and server platforms to support.” For Rackspace, features such as connecting to storage and having an ability to run dedicated apps on the gear, as opposed to running servers with operating systems or even virtual machines on them, are important.

Part of my conversation with Piatt was about understanding the give and take between companies when discussing open hardware. Unlike with open-source code, Open Compute will result in thousands of boxes delivered to whoever uses it, which means the adoption cycle is likely longer and the testing more rigorous. “When you order 10,000 servers, you end up with 10,000 servers,” noted Piatt. “It’s not like something you can just uninstall.”

Lew Moorman of Rackspace (far left) and Frank Frankovsky (second from right) discussing open everything at Structure 2011.

The end goal also isn’t really a free — or cheaper — end product that someone can innovate on if they so desire and don’t mind forking the code. Instead, the end goal is about smoothing the acquisition and operation of a lot of hardware. Much like some restaurants rely on Sysco-prepared food to serve their customers and cut prep time, large IT buyers are hoping Open Compute lets them focus on the customer experience and perhaps a few signature dishes. Piatt explains that if Rackspace suddenly needs a few thousand servers today, it can blindside a vendor that may not have the production capacity set up.

However, with Open Compute, vendors can have lines dedicated to that specification and should theoretically be able to meet demand with less of a problem. It also helps large data center operators by keeping certain features and sizes standard across platforms, thus reducing the parts inventory that companies have to maintain. That also cuts down on employee training because systems administrators don’t have to be trained on a variety of new parts. It’s a lot easier to maintain one make and model of a car than to learn how to support and maintain 20 models.

Those advantages are attracting interest from beyond the cloud computing world. Frankovsky said several large banks are looking at contributing to the Open Compute specifications and said they may even come up with their own version of it to serve the financial services community. Many features would stay the same in order for the financial services firms to accrue the same benefits to their IT operations, but they may ask for a few modifications for their industry. Piatt compared such an effort to compiling software for different instruction sets — a software vendor keeps most of the program the same but has to make changes depending on the platform it is running on.

Of course, this brave new world of making servers a true commodity cuts the margins to the bone for large vendors, especially if some of their more lucrative clients in the financial services and cloud arenas adopt it. Now that Facebook is working on the second iteration of the Open Compute gear, keep an eye on Dell, HP and IBM for their input and reaction.


  1. George Magklaras Tuesday, June 28, 2011

    “When you order 10,000 servers, you end up with 10,000 servers. It’s not like something you can just uninstall.” :-)

  2. George Magklaras Tuesday, June 28, 2011

    I still do not understand why they insist on having AC->DC in every PC. They should have DC all the way to the compute rack.

    1. The answer is (mostly) the price of copper (Cu). In a large datacenter the distance from batteries to server is long, requiring large-gauge wire to carry the current — never mind the physical constraints under the floor or above the rack to get power where it needs to go. The cost of loss through conversions is still less than the cost of running copper. If you concede that the cost of copper mining is closely tied to the cost of energy, the curves will continue to rise and fall at about the same rate. IMO, the trick is minimizing or eliminating the number of conversions between AC and DC. This requires a different approach to how you handle failures from the utility.
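    The tradeoff in that reply can be made concrete with rough numbers. Delivering the same power at low DC voltage means far higher current, and resistive loss in a copper run grows with the square of the current. The sketch below illustrates the math; the power figure, voltages, and run resistance are illustrative assumptions, not values from the post.

    ```python
    # Rough I^2*R comparison: distributing the same power over a fixed copper
    # run at low-voltage DC versus higher-voltage AC. All numbers are
    # illustrative assumptions chosen only to show the scaling.

    def run_loss_watts(power_w, volts, resistance_ohms):
        """Resistive loss in a conductor run carrying power_w at volts."""
        current = power_w / volts               # I = P / V
        return current ** 2 * resistance_ohms   # P_loss = I^2 * R

    POWER = 10_000   # assumed 10 kW rack
    R_RUN = 0.02     # assumed round-trip resistance of the run, in ohms

    loss_12v = run_loss_watts(POWER, 12, R_RUN)    # low-voltage DC to the rack
    loss_277v = run_loss_watts(POWER, 277, R_RUN)  # higher-voltage AC feed

    print(f"12 V loss:  {loss_12v:,.0f} W")
    print(f"277 V loss: {loss_277v:,.0f} W")
    print(f"ratio:      {loss_12v / loss_277v:,.1f}x")
    ```

    The loss ratio is (277/12)², roughly 530x, which is why a low-voltage DC run needs dramatically thicker (lower-resistance, more expensive) copper to stay efficient — exactly the cost the commenter is weighing against conversion losses.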
