There has been a lot of talk about consolidation lately because federal agencies have until Oct. 7 to present their plans for slashing data center footprints. The Office of Management and Budget has mandated that by 2015, the government must reduce its current stable of roughly 2,094 data centers by 800, or 38 percent of its total data center count. But how exactly the government will pull this off, and how successfully it can do so, are still up for debate.
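
For perspective, the arithmetic behind the mandate is simple; here is a quick back-of-the-envelope check (a sketch using only the OMB figures above):

```python
# Back-of-the-envelope check on the OMB consolidation target.
# Both figures come from the OMB mandate cited above.
current_centers = 2094   # roughly, the current federal data center count
target_closures = 800    # centers to be eliminated by 2015

print(f"Cut: {target_closures / current_centers:.0%}")       # ~38%
print(f"Remaining: {current_centers - target_closures}")     # 1,294 centers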

A survey released this morning by Juniper Networks (conducted by federal IT think tank MeriTalk) suggests the OMB has set a rather lofty goal. Of the 200 federal IT executives surveyed, only 10 percent think the government will meet its goal; 23 percent think the government actually will end up with more data centers. The discrepancy between the OMB mandate and stakeholder predictions appears to rest largely on two factors: complexity and demand.

Legacy apps need legacy homes

Complexity is an issue because the government runs so many legacy applications. It would be far easier to consolidate operations and virtualize infrastructure or move workloads to the cloud if existing applications didn’t require custom-built stacks, some of which have been in place for decades and aren’t particularly well suited to new environments.

Sixty percent of survey respondents said more than 20 operating systems are running in their data centers, while 16 percent said they’re managing more than 100. It’s the same story for management software: 48 percent claim more than 20 applications in use, and 6 percent say more than 100.

Unless agencies are willing to rewrite their applications to take advantage of new application environments and tools, the result might be fewer, but highly complex, data centers that are a nightmare to manage. This would go against the goal of standardization that drives so many consolidation, cloud computing and virtualization efforts. Standard application stacks and hardware resources make it much less expensive to buy, operate and provision IT resources.

Already, at least two major federal IT hotbeds — NASA and the Department of Defense — have deployed their own cloud infrastructures to standardize the development and management of new applications among their users.

However, as a Juniper representative explained to me via email, “The difficulty is that the applications drive the infrastructure, and in Federal agencies many of the applications are custom-coded legacy applications … The expense to rewrite these applications inhibits the ability of many agencies to consolidate down to a fully standardized, commodity Intel-based server infrastructure.”

More demand, fewer data centers?

There’s also the problem of demand — that is, handling ever-growing compute and data capacity with less space. Respondents to the Juniper survey estimated that they’re currently operating at 61 percent utilization and will need to increase data center infrastructure by 34 percent over the next five years to meet demand.
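
Those two numbers hint at why so many respondents are skeptical. A rough model makes the tension concrete (a sketch only; it assumes every data center has equal capacity, which is certainly not true in practice, and it reuses the survey and OMB figures above):

```python
# Rough sketch of the consolidation math, assuming (unrealistically)
# that all federal data centers have equal capacity.
utilization_now = 0.61             # current utilization, per the survey
demand_growth = 0.34               # capacity increase needed over five years
centers_now = 2094                 # OMB's current count
centers_after = centers_now - 800  # post-consolidation count (1,294)

# Utilization the surviving centers would need to hit to carry
# today's load plus the projected growth:
required = utilization_now * (1 + demand_growth) * (centers_now / centers_after)
print(f"Implied utilization: {required:.0%}")  # ~132% -- not physically possible
```

Under those assumptions, the surviving facilities would have to run well past 100 percent utilization, which is exactly why offloading demand to the cloud and to co-location providers figures so heavily in what follows.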

Some of this increased demand will no doubt be offloaded to the cloud, as the government already has a “cloud-first” policy in place for deploying new infrastructure, and Apps.gov is up and running as a hub for procuring cloud-based applications and infrastructure.

But for applications that can’t run in the public cloud for any number of security or technological reasons, the answer might be an even greater reliance on co-location providers. During a call last week, Greg Adgate, Equinix’s GM of the global enterprise segment, told me that although it makes a lot of sense for certain agencies to deploy applications in cutting-edge data centers that can meet performance and scalability needs, getting funding to build such a facility won’t be easy. That spells a ripe opportunity for companies like Equinix, which can offer co-location space directly and/or host service providers certified to meet federal compliance standards.

Even if agencies don’t meet their mandates, though — and even if there actually end up being more data centers, as some predict — all isn’t necessarily lost. To the extent that cutting energy costs is among the reasons for data center consolidation in the first place, research released this morning by Stanford professor Jonathan Koomey gives reason for optimism. According to Koomey’s findings, there’s plenty of room for improvement in data center efficiency that could yield drastically reduced energy usage.

Because of its complex applications and strict compliance requirements, the government isn’t likely to follow Google’s lead in building custom servers and implementing innovative data center cooling methods, but it certainly should take some lessons. Koomey’s research estimates that although Google accounts for roughly 0.8 percent of the world’s data center infrastructure, it accounts for only about 0.011 percent of the world’s overall electricity usage.
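
Those two percentages reconcile once you recall that data centers as a whole draw only a small slice of the world’s electricity. A quick sanity check (a sketch; Koomey’s report puts data centers at roughly 1.1 to 1.5 percent of worldwide electricity use in 2010, and the assumption of average efficiency per unit of infrastructure is mine, purely for illustration):

```python
# Sanity check on the Google figures, assuming (for illustration only)
# that Google's infrastructure is of average energy efficiency.
dc_share_of_world = 0.013     # data centers' slice of world electricity (~1.1-1.5%)
google_share_of_dc = 0.008    # Google's ~0.8% of data center infrastructure

implied = dc_share_of_world * google_share_of_dc
print(f"Implied share of world electricity: {implied:.3%}")  # ~0.010%
```

That lands right around the 0.011 percent figure, so the numbers hang together.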

However, it might be too early to make accurate predictions as to whether agencies will meet the OMB mandate, or what steps they’ll take to save money on operations if they can’t hit the consolidation mark. Equinix’s Adgate said the first pass will be relatively pain-free because the government operates many data centers that “aren’t data centers as we know them” — telco closets, retrofitted offices, and other infrastructure caches — that will be easy to lop off. It’s the infrastructure and applications running in actual data centers that will be more difficult to move.

Then there’s the resignation of Federal CIO Vivek Kundra, who has been a champion of cloud computing and has pushed consolidation since taking office in 2009. It’s conceivable the appetite for using cloud resources — as well as for Kundra’s other progressive IT strategies — could diminish after his departure, or if a new administration takes over in 2013.

Feature image courtesy of Flickr user vaxomatic.

7 Comments

  1. Government always grows, never shrinks. Until it is dead, just like a cancerous tumor.

  2. Consolidation only increases vulnerability to sensitive data getting into the wrong hands. In other words, this consolidation will enhance unwanted hacking or virus introduction.

  3. Consolidation seems to have worked for private enterprise. With technology advancing, costs dropping along with it, and power costs rising, we could have a win-win. It will require an excellent analysis and project plan, though.

  4. curiousgeorge Tuesday, August 2, 2011

    Reducing power usage can only go so far if there are still servers running one app. What’s needed is a decision on whether there is a future in keeping a certain OS or app. Bite the bullet? Kill it and see if anyone cares?

  5. I wonder what the 61% utilisation represents?
    I doubt very much that it’s possible to tie back useful compute efforts to infrastructure that supports them.

    Does it mean that inefficient software has a ‘better’ utilisation?
    I measured power consumption in some large private-sector DCs and found that 25% of the power was being used by machines that were >4 years old (i.e., they had been written off). I’m sure that it’s worse in govt, where apps can be decades old.

  6. Government efficiency?

  7. Well, finally the government is doing what private businesses have done for the last 10 years very successfully. The entire theme is around becoming lean and agile. I’m surprised that so many IT managers still don’t know about the lean movement and can’t realize its benefit. I recommend that all government IT managers read this book: “Lean, Agile and Six Sigma IT Management”, and catch up to the latest movement in IT at: https://www.createspace.com/3361323

