
A friend of mine who runs a very large data center told me recently that his biggest day-to-day challenge is one that is becoming commonplace in his industry: how to consume less energy. Data centers consume vast amounts of energy, and rising energy costs are cutting into both the operating efficiencies and profits of the companies that run these facilities.

Energy in data centers is primarily consumed by two things: the servers and networking devices used to store and process data, and the cooling systems used to keep them running safely. In general, more devices means more power, more cooling and more energy consumed.
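This server-plus-cooling split is often summarized by the Power Usage Effectiveness (PUE) metric: total facility power divided by IT equipment power. The figures below are made-up illustrations, not measurements from any real facility:

```python
# Power Usage Effectiveness: total facility power over IT power.
# A ratio closer to 1.0 means less energy spent on cooling and overhead.
# All kilowatt figures here are invented for illustration.

def pue(it_kw, cooling_kw, other_overhead_kw=0.0):
    """Total facility power divided by IT equipment power."""
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

# A facility drawing 1 MW of IT load, 700 kW of cooling, 100 kW other:
print(f"PUE = {pue(it_kw=1000, cooling_kw=700, other_overhead_kw=100):.1f}")
```

Every watt shaved off a server or switch saves again on the cooling side, which is why device-level efficiency matters more than the raw wattage suggests.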

The server vendors have already started to respond to the energy consumption needs of data centers by manufacturing servers, equipment racks and blades that dissipate heat extremely well and consume less energy in their power supplies, processors, memory chips and so forth. Some of the designs even utilize liquid cooling, a technology reminiscent of the Freon cooling in the Cray-1.

Data center design itself has started to go eco-friendly as well, through the use of everything from solar technologies to cooling systems that recycle air and water efficiently and make use of green power sources such as hydroelectricity and biodiesel. Some even boast zero carbon emissions. While this is great progress, these technologies generally apply to new data center designs, as refurbishing an existing facility is impractical to the point of being impossible.

Yet with all of the work being done by the server manufacturers and the data center designers to be eco-friendly, data centers remain filled with networking devices that draw considerable power and require significant cooling. Large racks are filled with routers to interconnect networks, switches are racked with servers to provide LAN connections, firewalls and IDS devices are scattered throughout the data center for security, and so on. In fact, in many data centers the servers are denser than the networking devices, but the networking devices are the ones currently consuming the most energy.

Even more striking is that while most servers are Energy Star certified to use less energy and to run in a low power mode when feasible, all the networking devices that I know of require the same energy at all times when they’re in operation. When a switch is forwarding a million packets a second it draws the same power as when it’s passing one packet every few seconds. A firewall uses the same energy to stop one hacker at a time as it does to stop hundreds.
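The gap this creates over a day is easy to quantify. Here is a rough sketch comparing a server that scales its draw with load against a switch that draws flat power; the wattages and the daily load profile are illustrative assumptions, not measured figures:

```python
# Hypothetical 24-hour comparison: a server that scales power with load
# (in the Energy Star spirit) versus a switch that draws flat power
# whether it forwards a million packets a second or one.
# All wattages and the load profile are assumptions for illustration.

def energy_kwh(hourly_watts):
    """Sum an hourly wattage profile into kilowatt-hours."""
    return sum(hourly_watts) / 1000.0

# Assume load is high (1.0) for 8 business hours, low (0.1) otherwise.
load = [1.0 if 9 <= hour < 17 else 0.1 for hour in range(24)]

# Server: a 100 W idle floor plus 200 W that scales with load.
server_profile = [100 + 200 * l for l in load]

# Switch: a flat 250 W regardless of traffic.
switch_profile = [250 for _ in load]

print(f"server: {energy_kwh(server_profile):.1f} kWh/day")
print(f"switch: {energy_kwh(switch_profile):.1f} kWh/day")
```

Under these assumed numbers the flat-power switch uses noticeably more energy per day than the heavier but load-proportional server, which is the asymmetry the paragraph above describes.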

So why can’t networking devices have hardware or software technology that allows them to work in a low power mode when appropriate? One of the answers lies in the fact that many networking devices use proprietary processors and system architectures that are not Energy Star certified – they are designed, in other words, to process packets, not to be energy efficient. Another reason may be that networks are known to be “bursty,” meaning that although the switch is only seeing a few packets at this moment, the next moment it could be asked to forward an instantly hot YouTube video for 48 hours.

While those reasons are understandable, I remain unconvinced that network devices can’t be greener. For networks that have redundant paths and devices, maybe there can be a way for these redundant paths to be disabled (go to sleep) when they are not needed? I can envision changes to routing and spanning tree protocols that would allow redundant paths of a network to go to sleep when they’re not needed. If a burst of traffic does appear, I don’t see an overwhelming technical reason why the network device could not nearly instantly draw more power, activate any paths or ports in sleep mode and forward the burst with some standard buffering techniques.
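The sleeping-redundant-paths idea above can be sketched as a toy controller that puts spare links to sleep when the primary is quiet and wakes them when a burst arrives. The thresholds, link names and wake behavior here are hypothetical, not taken from any real routing or spanning tree protocol:

```python
# Toy sketch of sleeping redundant paths: spare links sleep when the
# primary path is lightly loaded and wake when utilization spikes.
# Thresholds, capacities and names are invented for illustration.

class Link:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity_mbps = capacity_mbps
        self.asleep = False

class RedundantPathController:
    WAKE_THRESHOLD = 0.8   # wake spares above 80% primary utilization
    SLEEP_THRESHOLD = 0.3  # sleep spares below 30%

    def __init__(self, primary, spares):
        self.primary = primary
        self.spares = spares

    def adjust(self, offered_mbps):
        """Sleep or wake spare links based on offered load; return
        the names of the spares currently awake."""
        utilization = offered_mbps / self.primary.capacity_mbps
        for spare in self.spares:
            if utilization > self.WAKE_THRESHOLD:
                spare.asleep = False   # burst: bring the spare path up
            elif utilization < self.SLEEP_THRESHOLD:
                spare.asleep = True    # quiet: let it sleep, save power
        return [s.name for s in self.spares if not s.asleep]

primary = Link("core-1", capacity_mbps=1000)
spare = Link("core-2", capacity_mbps=1000)
ctl = RedundantPathController(primary, [spare])
print(ctl.adjust(100))   # light traffic: spare sleeps
print(ctl.adjust(900))   # burst: spare wakes
```

A real implementation would also need the buffering mentioned above to absorb packets during the wake-up delay, and hysteresis so links don't flap between states.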

Further, a network device looking at individual ports could also save energy: many devices use the same amount of power per port regardless of the physical distance over which they need to drive a signal. Having ports sense the distance between them (between a switch and a server, for example, or between a DSLAM and a DSL modem) and use the appropriate level of power could save substantial amounts of energy.
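That per-port idea might look like the sketch below: pick a transmit power level from the sensed cable length instead of always driving at full strength. The length bands and milliwatt figures are invented for illustration (the general concept resembles the short-reach modes later standardized in Energy-Efficient Ethernet, but these numbers are not from that standard):

```python
# Sketch of distance-aware port power: a port that senses a short cable
# drives the line at a lower power level. Length bands and milliwatt
# values are hypothetical, chosen only to illustrate the idea.

def port_power_mw(cable_length_m):
    """Return an assumed transmit power (mW) for a sensed cable length."""
    if cable_length_m <= 10:      # switch to a server in the same rack
        return 100
    elif cable_length_m <= 50:    # across a row of racks
        return 250
    else:                         # full-length run needs full drive strength
        return 500

# A top-of-rack switch mostly drives short links:
lengths = [3, 5, 5, 8, 40, 90]
adaptive = sum(port_power_mw(l) for l in lengths)
always_full = 500 * len(lengths)
print(f"adaptive: {adaptive} mW vs fixed: {always_full} mW")
```

Multiplied across the hundreds of ports in a single rack row, the difference between adaptive and fixed drive strength adds up quickly.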

Saving energy on network devices means that data centers draw less power and require less energy to be cooled. And that, along with help from server manufacturers, may help my friend with his data center design issues and help save the planet just a bit.
