Microsoft today is expected to announce a research and development program called Cloud Computing Futures, which aims to make the data centers underlying cloud computing operate as efficiently as possible. The year-old effort, emerging from stealth mode at Microsoft’s TechFest event in Redmond, Wash., today, is meant both to save energy and to rethink how data centers are designed for the applications they run.
On the energy side, Microsoft plans to announce that it has cut the power requirements for chips inside servers running some workloads to 10-20 watts per node, compared with 130-150 watts for an average node or 85 watts for a newer, power-saving server.
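A quick back-of-envelope calculation puts those figures in perspective. The sketch below uses only the wattages cited above; the variable names and the percentage framing are illustrative, not Microsoft's.

```python
# Per-node power figures cited in the article (watts).
AVERAGE_NODE_W = (130, 150)      # typical server node
EFFICIENT_NODE_W = 85            # newer, power-saving server
LOW_POWER_NODE_W = (10, 20)      # Microsoft's reported low-power nodes

def reduction(new_w, old_w):
    """Percent power reduction going from old_w to new_w."""
    return 100.0 * (old_w - new_w) / old_w

# Best and worst case versus an average node.
best = reduction(LOW_POWER_NODE_W[0], AVERAGE_NODE_W[1])   # 10 W vs. 150 W
worst = reduction(LOW_POWER_NODE_W[1], AVERAGE_NODE_W[0])  # 20 W vs. 130 W
print(f"Reduction vs. average node: {worst:.0f}-{best:.0f}%")

# Versus the newer power-saving server.
print(f"Reduction vs. efficient node: "
      f"{reduction(LOW_POWER_NODE_W[1], EFFICIENT_NODE_W):.0f}-"
      f"{reduction(LOW_POWER_NODE_W[0], EFFICIENT_NODE_W):.0f}%")
```

In other words, the cited numbers amount to roughly an 85-93 percent reduction against an average node, and still a 76-88 percent reduction against a newer efficient server.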
Daniel Reed, Microsoft’s scalable and multicore computing strategist, says some of the energy-use reduction comes from running workloads such as search on servers that use Intel’s low-power Atom processors. Other strategies involve eliminating redundant power supplies for nodes running workloads that can tolerate the occasional loss of a server, which removes the need for many power-wasting voltage conversions.
Another research effort Reed will showcase is software that measures server utilization in real time and adapts workloads to maximize energy efficiency. Hewlett-Packard and Intel have similar research efforts under way. Other areas of research include the use of solid-state drives in the data center, optical interconnects on chips, and chips optimized for memory and I/O rather than sheer speed.
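The article does not describe how Microsoft's utilization software works, but one common policy in this space is consolidation: repack workloads onto as few servers as will fit, so the rest can be powered down or idled. The sketch below is a minimal, hypothetical illustration of that idea using first-fit-decreasing packing; the function, threshold, and sample numbers are all assumptions, not Microsoft's actual system.

```python
# Hypothetical utilization-driven consolidation sketch (not Microsoft's
# actual software): given measured per-server workload utilizations,
# greedily repack them onto as few servers as possible so idle machines
# can be powered down.

def consolidate(workloads, capacity=0.9):
    """First-fit-decreasing packing of workload utilizations (0..1 each)
    onto servers with the given utilization cap. Returns one entry per
    server left powered on: [current_load, [assigned workloads]]."""
    servers = []
    for load in sorted(workloads, reverse=True):
        for server in servers:
            if server[0] + load <= capacity:
                server[0] += load
                server[1].append(load)
                break
        else:
            servers.append([load, [load]])  # no fit: power on a new server
    return servers

# Ten lightly loaded servers' measured utilizations (illustrative data).
measured = [0.25, 0.10, 0.30, 0.15, 0.20, 0.05, 0.35, 0.10, 0.25, 0.15]
packed = consolidate(measured)
print(f"{len(measured)} workloads fit on {len(packed)} servers")
# With the sample data above, the ten workloads pack onto three servers,
# letting seven machines idle or power down.
```

Real systems layer migration costs, performance isolation, and failure tolerance on top of any such policy, which is presumably part of what makes this a research problem rather than a solved one.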
Reed is a supercomputing expert who helped develop the National Science Foundation’s TeraGrid, a shared high-performance computing program. He is also involved in Microsoft’s partnership with Intel to develop software for multicore computers. The results of the Cloud Computing Futures research will be put into practice at Microsoft’s data centers as well as in Microsoft’s Azure cloud computing service.