In an abrupt change of IT strategy, General Motors (GM) is following the examples of Google and Facebook and building its own mega data centers. It opened a data center in Warren, Mich., in May, and the automaker will complete a similar second mega data center in Milford, Mich., by 2015.
A mega data center differs from a traditional one by operating a huge number of low-cost servers, typically 10,000 or more. The explosion of on-board vehicle electronics, coupled with GM's unique need for reliability, is driving its requirement for mega data center scale. Because it previously outsourced 90 percent of its IT needs, the auto giant has an almost clean-slate opportunity to do something radically new. The much lower capital and operating costs of the mega data center are certainly major attractions: freeing up resources there would let GM redeploy them where they are really needed, developing more innovative products and services more quickly.
Will other Fortune 100 companies follow GM’s lead? Should they?
Enterprise chief information officers (CIOs), vice presidents of operations, and data center managers, along with vendors such as data center infrastructure management providers, public cloud vendors, and outsourcers, should watch GM and learn from its progress to answer those questions for themselves.
The key feature behind the mega concept is not just the thousands of low-cost servers but the new management layer and application designs built on top of them. This software monitors the servers, partitions work across them, and completes workloads regardless of where and when underlying failures occur. This is far beyond just another private cloud.
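To make the idea concrete, here is a minimal sketch of that management-layer behavior: work is partitioned across a pool of cheap servers, and any task that lands on a failed host is transparently retried on a healthy one. All server names, failure sets, and functions here are illustrative assumptions, not GM's actual software.

```python
SERVERS = [f"server-{i}" for i in range(8)]
FAILED = {"server-3", "server-6"}  # pretend these commodity hosts have died


def run_on_server(server, task):
    """Simulate executing a task; cheap servers fail, and that is expected."""
    if server in FAILED:
        raise RuntimeError(f"{server} is down")
    return f"{task} done on {server}"


def schedule(tasks):
    """Partition tasks across the server pool; on failure, retry elsewhere."""
    results = {}
    for i, task in enumerate(tasks):
        # Start at this task's "home" server, then walk the ring of
        # remaining servers until one completes the work.
        start = i % len(SERVERS)
        candidates = SERVERS[start:] + SERVERS[:start]
        for server in candidates:
            try:
                results[task] = run_on_server(server, task)
                break
            except RuntimeError:
                continue  # failure is routine; just move to the next host
    return results


if __name__ == "__main__":
    done = schedule([f"job-{n}" for n in range(12)])
    # Every job completes even though two of eight servers are down.
    print(f"completed {len(done)} of 12 jobs")
```

The point of the sketch is the design stance: failure handling lives in the scheduling layer, not in each application, which is what lets a mega data center treat 10,000 cheap servers as one reliable pool.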
Note that GM is not implementing a “pure” mega-style data center. Significant legacy systems, mainframes, and traditional vendors also will run in its two new data centers.