
P2P datacenter management
Is P2P the answer to increasingly complex combinations of devices in hosting facilities?

It was not too long ago that a 19-year-old frat guy named Shawn Fanning decided he wanted to share his music collection with his friends. Working for weeks at a stretch, he wrote the software now known as Napster. That 1-MB piece of software proved so powerful that more than 30 million users were soon swapping their entire MP3 collections over the Internet.
Napster made peer-to-peer (P2P) part of the popular lexicon and, at the same time, validated large-scale distributed computing. Napster was not really about sharing music; it was about sharing computing resources, tying them together through the Internet, and managing them through one or many servers.
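
To make that model concrete, here is a minimal Python sketch of a central-index architecture of the kind described above: the index server only tracks which peer holds which resource, while the resources themselves stay on, and move directly between, the peers. The class and method names are illustrative, not Napster's actual protocol.

```python
# A minimal sketch of a central-index P2P model: the index brokers lookups,
# the peers hold and exchange the actual content. Names are illustrative.

class CentralIndex:
    """Central server: knows who has what, never stores the content itself."""

    def __init__(self):
        self._holders = {}  # resource name -> set of peer addresses

    def register(self, peer_addr, resources):
        for name in resources:
            self._holders.setdefault(name, set()).add(peer_addr)

    def lookup(self, name):
        # Returns peer addresses; the caller then fetches directly from a peer.
        return sorted(self._holders.get(name, set()))


class Peer:
    """A peer advertises its local resources and fetches others peer-to-peer."""

    def __init__(self, addr, resources, index):
        self.addr = addr
        self.resources = set(resources)
        index.register(addr, resources)


if __name__ == "__main__":
    index = CentralIndex()
    Peer("10.0.0.1:6699", ["song_a.mp3", "song_b.mp3"], index)
    Peer("10.0.0.2:6699", ["song_b.mp3", "song_c.mp3"], index)

    # The index only brokers the connection; the transfer happens peer-to-peer.
    print(index.lookup("song_b.mp3"))  # ['10.0.0.1:6699', '10.0.0.2:6699']
```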

If one scales this model down and uses it within the four walls of a hosting facility, the parallels become obvious. To some, this means P2P networking (a form of distributed computing) holds the key to the future of datacenter management. Christine Axton, an analyst with Ovum (www.ovum.com), a London-based research group, believes P2P and other forms of distributed computing are about to come into their own, especially in the datacenter. “Distributed computing vendors are now looking for new applications for their products,” says Axton.

“With the new datacenter infrastructure, this is the next logical step,” says Andrew Schroepfer, president and cofounder of Tier 1 Research (www.tier1research.com). With blade servers becoming the norm in datacenters today, the complexity within a hosting facility is only going to increase. Add attached storage devices, routers, switches, and InfiniBand-related equipment, and the datacenter becomes a heavily populated environment, which means there is an urgent need for software to manage the facility.

“For large, established hardware suppliers, peer-to-peer has the potential to drive sales of associated software and services. Sun and IBM have recognized peer-to-peer could represent a new computing model for the Internet and, as such, is something they cannot afford to ignore,” according to Axton.
“The biggest stumbling block in the way of computing utility and virtualization is the management aspect,” says Sanjay Dhawan, cofounder of Inkra Networks (www.inkra.com).

Rather than wait for these giants to seize the initiative, a number of start-ups are plowing ahead with solutions that can help manage the datacenter. “With 300 blade servers per rack, you have to change the way these servers are managed,” says Rick McEachern, chief marketing officer of Jareva Technologies (www.jareva.com). Dell, one of Jareva’s first customers, uses the remote provisioning, deployment, and configuration technologies in Jareva’s Elemental IT Automation Platform (ITPAP) server within its OpenManage systems management portfolio. This lets Dell automatically provision and manage any supported OS and software stack on any compatible hardware, at any time, from any browser.
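
As a rough illustration of what such provisioning automation does, the Python sketch below claims a free blade from a rack inventory and binds an OS image and software stack to it. The data model and function names are hypothetical; they are not the actual interfaces of Jareva’s or Dell’s products.

```python
# A hypothetical sketch of automated server provisioning: pick an unused
# server, apply the requested OS image and software stack, record the result.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Server:
    hostname: str
    state: str = "free"               # "free" or "provisioned"
    os_image: Optional[str] = None
    stack: List[str] = field(default_factory=list)

def provision(inventory, os_image, stack):
    """Claim the first free server and bind the requested OS and stack to it."""
    for server in inventory:
        if server.state == "free":
            server.os_image = os_image
            server.stack = list(stack)
            server.state = "provisioned"
            return server
    raise RuntimeError("no free servers in the rack")

if __name__ == "__main__":
    rack = [Server(f"blade-{i:03d}") for i in range(300)]  # a densely packed rack
    web = provision(rack, "linux-2.4", ["apache", "php"])
    print(web.hostname, web.state, web.os_image)
```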

Many believe P2P still has a way to go before it is viable for most companies. Improvements are still being explored in academia and as part of large-scale resource-sharing software, such as the de facto standard Globus Toolkit and high-end supercomputing projects like Beowulf. Rajkumar Buyya, a Ph.D. student at the School of Computer Science and Software Engineering at Monash University in Australia, believes clustering solutions such as those from Globus and Beowulf have the key ingredients to be adopted in the datacenter.
In a seminal paper, “The Physiology of the Grid,” Ian Foster and Steven Tuecke of the Argonne National Laboratory and the University of Chicago, along with Carl Kesselman (University of Southern California) and Jeffrey Nick (IBM), laid out the future of grid computing and hosting environments.

They have developed the “Open Grid Services Architecture,” which attempts to simplify and standardize how grid computing can be translated into a service-oriented architecture. These are early days in the development of datacenter operating systems, but if the work of these academics is any indication, it will not be long before P2P is thoroughly developed for the management of large hosting facilities.
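
The idea behind a service-oriented grid can be sketched simply: wrap very different devices in one uniform service interface so a management layer can discover and drive them all the same way. The Python below is an illustrative sketch of that idea only, under assumed interface names; it is not the OGSA specification.

```python
# An illustrative sketch of service-oriented resource management: every device
# exposes the same small interface, so one management loop can handle them all.

class GridService:
    """Minimal uniform interface: describe yourself, accept work."""

    def describe(self):
        raise NotImplementedError

    def submit(self, job):
        raise NotImplementedError


class ComputeNodeService(GridService):
    def __init__(self, name, cpus):
        self.name, self.cpus, self.queue = name, cpus, []

    def describe(self):
        return {"name": self.name, "type": "compute", "cpus": self.cpus}

    def submit(self, job):
        self.queue.append(job)
        return f"{self.name}: accepted {job}"


class StorageArrayService(GridService):
    def __init__(self, name, capacity_gb):
        self.name, self.capacity_gb = name, capacity_gb

    def describe(self):
        return {"name": self.name, "type": "storage", "capacity_gb": self.capacity_gb}

    def submit(self, job):
        return f"{self.name}: allocated volume for {job}"


if __name__ == "__main__":
    facility = [ComputeNodeService("blade-001", cpus=2),
                StorageArrayService("array-A", capacity_gb=500)]
    for svc in facility:   # one management loop over very different devices
        print(svc.describe())
```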

By Om Malik
