On Wednesday, our company GigaOM held a day-long event that dove into the topic of “big data”: broadly, the applications, tools and networks being built to manage the massive amounts of data that networked computing has produced, data that is fundamentally reshaping industries from health care to consumer web applications to genomics. From an energy perspective, my takeaway from the day’s discussion was that with so much data flowing over networks, and so much computing power needed to crunch it, the underlying infrastructure needs to be as low power as possible, both to make the era of “big data” economical and to make it more eco-friendly.
You can see slowly growing attention to lower-power chips, servers, networks and data centers from Internet companies like Google and Yahoo, startups like Calxeda and Power Assure, and investors, too. The mentality is no longer that computing should maximize performance regardless of everything else; instead, there is an emerging desire to design systems that use only as much power as the necessary computing or network performance requires. The less power used, the lower a company’s energy bill, and the fewer its carbon emissions. (Hear more about low-power computing and data centers from Google and Yahoo at Green:Net 2011 on April 21 in San Francisco.)
At our Structure Big Data event, Sun Microsystems and Arista Networks co-founder Andy Bechtolsheim pointed to recent developments in chip design aimed at minimizing power consumption while also getting more bandwidth out of chips. In particular, he mentioned Calxeda, which was presenting at Structure Big Data and which builds servers out of clusters of cell phone chips to optimize power efficiency.
As Stacey wrote earlier this month on GigaOM, the Calxeda system-on-a-chip uses a quad-core ARM Cortex-A9, and server manufacturers can pack 120 quad-core ARM nodes (480 cores) into a 2U enclosure, with average consumption of about 5 watts per node, DRAM included. By comparison, Intel and AMD boxes using the x86 architecture can consume about 80 to 130 watts for a quad-core machine, while low-power versions of x86 chips consume around 30 watts.
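A rough back-of-envelope sketch of those figures makes the gap concrete. This is illustrative only: the 100-watt x86 value is an assumed midpoint of the 80-to-130-watt range cited above, not a measured number.

```python
# Back-of-envelope power comparison using the figures cited above.
# Assumptions: 5 W per Calxeda node (DRAM included, per the cited figure);
# 100 W per quad-core x86 box (assumed midpoint of the 80-130 W range).

arm_nodes_per_2u = 120
cores_per_node = 4
watts_per_arm_node = 5

arm_total_cores = arm_nodes_per_2u * cores_per_node       # cores in one 2U enclosure
arm_total_watts = arm_nodes_per_2u * watts_per_arm_node   # power draw of that enclosure
arm_watts_per_core = watts_per_arm_node / cores_per_node

x86_watts_per_quad = 100  # assumed midpoint, not a measured figure
x86_watts_per_core = x86_watts_per_quad / cores_per_node

print(f"ARM 2U enclosure: {arm_total_cores} cores at {arm_total_watts} W total")
print(f"ARM: {arm_watts_per_core} W/core vs x86: {x86_watts_per_core} W/core")
```

On these assumptions, the ARM enclosure delivers 480 cores in roughly 600 watts, about 1.25 watts per core versus on the order of 25 watts per core for a midrange quad-core x86 box.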
The idea is that data center operators can save considerable money on their energy bills by installing ARM-based servers. Chip designer Marvell is following suit with ARM chips for servers, and Intel is also developing lower power server chips.
But using ARM for servers is just one innovation. Data center operators are working to lower the cost of cooling systems and turning to virtualization, while telcos are developing lower-power networks.
Both the push for performance and the subsequent trend toward low-power systems can thank Moore’s Law. Bechtolsheim said during Structure Big Data that Moore’s Law has so far produced progress that is “without equal in the history of mankind” and will continue for another 10 or 20 years. That means the trend toward big data and cloud computing will only continue and accelerate, and low-power computing infrastructure will become even more crucial.
Image courtesy of Randomskk.