The states with the most data centers are also the most disaster-prone [maps]


As we saw with Hurricane Sandy, natural disasters can wreak havoc on data centers and the companies that rely on them. And unless we start distributing more of our data load to safer areas, we may need to brace for more of the same.

Internet companies build data centers across the country so people and companies in those areas can send and receive information at millisecond speeds. The closer the data center is to the user, the faster the data is transmitted. Data centers tend to be more heavily concentrated in more populous areas because that’s where the user base is heaviest.

But many of the states with the most data centers are also the states that get hit with the most natural disasters, according to data from the Federal Emergency Management Agency and Data Center Map. FEMA disaster declarations occur when the magnitude and cost of a disaster, including floods, hurricanes, tornadoes, fires and winter storms, outstrip the capabilities of state and local governments, requiring the governor of that state to ask for federal assistance.



Data sources: Data Center Map and FEMA

So why not just move data centers to safer areas?

Data centers are energy-intensive and expensive to build, even more so in metropolitan areas. But for sub-millisecond transactions like those required by the New York Stock Exchange, data centers need to be as close to the action as possible, regardless of cost. To achieve those lightning-fast, one- to two-millisecond speeds, data centers have to be located within a 50-mile radius of their source, and often much closer. During Hurricane Sandy, that put low-lying New York City data centers in the path of the storm surge.
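Those figures line up with a back-of-the-envelope physics check: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, or about 200 km per millisecond. The sketch below uses that assumed speed and ignores routing and switching overhead, so real-world latencies will be somewhat higher.

```python
# Idealized fiber round-trip time. Assumes signals travel at ~2/3 the
# speed of light in a vacuum (~200,000 km/s) and ignores routing overhead.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case fiber round-trip time for a given one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(80))    # ~50 miles: 0.8 ms
print(round_trip_ms(4000))  # roughly coast to coast: 40.0 ms
```

Even in this best case, a 50-mile run eats most of a one- to two-millisecond budget once equipment delays are added, while a cross-country path lands in the tens of milliseconds.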

But, according to Mark Thiele, executive VP of data center tech at Switch (which, it should be noted, is located in disaster-light Las Vegas), most other data centers could stand to be much greater distances from their source than those that serve Wall Street. “For primary apps that don’t require such low latency, the vast majority can be 50 or 60 milliseconds away from the customer without them noticing,” he says. That means data centers nearly on the other side of the country could transmit data without a noticeable delay. Additionally, he estimates that fewer than 5 percent of IT organizations have a critical need for speeds like those at the Stock Exchange.

Indeed, companies like Google, Yahoo and Facebook, which run a single application across a number of data centers, have been able to cut costs and avoid outages all without a noticeable lag. To serve up 1 trillion monthly pageviews, Facebook fields data centers in less populous, cheaper and safer locations like North Carolina, Oregon and Sweden, as well as leased spaces in data centers around the country.

As GigaOM noted in its year-end cloud coverage, the solution to balancing cost and latency issues could come from prioritizing parts of the data load. Latency-sensitive workloads would be directed to nearby centers, while less important data would go to cheaper and/or safer data centers farther away. In the case of Sandy, with its 1,100-mile diameter, data centers in the middle of the country could keep companies online, even if their headquarters aren’t.


Lauren Tucker

Hi Rani,
Thank you for your article. Unfortunately, the statistics provided assume the same threat exists across an entire state, which is not the case. For instance, here in Orlando we have far fewer risks than our coastal neighbors.

Also, it is important to acknowledge that all of these FEMA-highlighted states contain well-constructed data centers that are engineered and operated in accordance with the potential threats of their geography.

As we all know, disasters can happen anywhere, and being prepared for what ‘might’ occur puts a data center in far better standing than trying to combat a surprise event.

In-Ho Lee

Rani, thanks for raising this perspective.

As we work with clients who are actively sourcing new data center locations, we agree that consideration for local/regional hazards should be a factor. However, it’s just that: a single factor in what is often a much more complex decision.

Even when solely focusing on things that impact availability, rather than considerations for cost and performance, you’re still left with a pretty large assortment of factors. How well suited are customer processes for things like maintenance (both planned and unplanned) at remote facilities? If any third-party services are used for on-site service, will selection of one city over another impact their ability to respond? Every customer is different.

Bottom line, even though tools/services, models and practices are widely used to allow remote locations to be every bit as available as a deployment down the street, not every customer is ready to incorporate the changes needed to get there. I see a lot of customers who insist they’ll be running “lights out”. Some of them actually do, but others end up spending more time inside the data center than I do. Choice of a (nearby) location for many of them can be the one factor that determines how well they’ll be able to manage and mitigate certain events that would otherwise impact availability.

When we bring performance considerations into the picture, I’d disagree that 50-60 ms is not noticeable. That’s equivalent to adding nearly 2x the serialization delay on a 64KB window over a T1. Any TCP traffic will suffer with a noticeable throughput reduction. This is made even worse by the lack of peering infrastructure in many secondary markets. Rather than downplaying the performance disadvantages, I’d give a nod to CDN services. Once expensive and the domain of large scale properties requiring static offload, CDN services now include capabilities like presentation layer acceleration and cost no more than IP bandwidth. The performance uplift from CDNs can be an effective way to offset disadvantaged locations.
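The commenter’s throughput concern can be made concrete with the standard bandwidth-delay relationship: with a fixed receive window, TCP throughput is capped at window size divided by round-trip time. The sketch below assumes a classic 64 KB window with no window scaling, which is the scenario the comment describes.

```python
# TCP throughput ceiling with a fixed receive window: at most one full
# window of data can be in flight per round trip, so the cap is
# window_size / RTT. Assumes a classic 64 KB window, no window scaling.
WINDOW_BYTES = 64 * 1024

def max_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on single-connection TCP throughput, in Mbps."""
    return (WINDOW_BYTES * 8) / (rtt_ms / 1000.0) / 1e6

# Nearby data center (~2 ms RTT) vs. a distant one (~60 ms RTT):
print(f"{max_throughput_mbps(2):.1f} Mbps")   # 262.1 Mbps
print(f"{max_throughput_mbps(60):.1f} Mbps")  # 8.7 Mbps
```

That is roughly a 30x drop in the per-connection ceiling, which is why unscaled TCP traffic to a 50-60 ms remote site can feel noticeably slower even when raw bandwidth is plentiful; modern stacks mitigate this with window scaling, and CDNs mitigate it by terminating connections closer to the user.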

Vincent Pelly

Great article, and some very good comments. For financial services organizations that run business-critical apps needing very low latency (trading, price feeds, etc.), having facilities close to market/trading venues is an absolute requirement. But these firms also need to consider their back-up strategy for back-office operations, clearing and settlement transactions, all key to sustaining financial services operations and liquidity markets. Our opinion has always been to architect local recovery with highly available solutions coupled with remote (greater than 1k from primary host) async data copy to protect data and enable operations. It all boils down to the business requirements and risk to the business, loss of revenue, etc.
Vincent Pelly
Associate Partner


I think it would be far more useful to display a heatmap of how many datacentres were taken offline or otherwise affected by these “disasters”, instead of just the simple number of them. After all, FEMA considers such things as ‘Drought’, ‘Extreme Temperatures’, ‘Virus Threat’, and ‘Wildfires’ as disasters, all of which have a basically zero potential effect on data centre operations. Of course that’d require more than a cursory look at the available data.

It appears from the linked data that you’ve included ‘fire suppression assistance’ in determining which states had the most ‘disasters’, which is disingenuous to include as it relates to data centres, as these are typically wildfires that have virtually no chance of affecting data centre operations in major cities. If you removed that statistic and focused solely on ‘Major Disaster Decs’ and ‘Emergency Decs’, all states would even out much more closely, and the dichotomy between risk vs. reward of keeping data centres in major population areas wouldn’t be nearly as extreme as you’re attempting to portray.

Not to mention that this whole game of federal assistance is heavily influenced by politics and politicians trying to show they are doing what they can – often this ‘federal assistance’ simply comes down to the federal government just paying for things like overtime and additional equipment, and very rarely involves FEMA putting boots on the ground. In other words, a vast majority of these ‘disasters’ are nothing more than minor financial disasters for the state, not something serious.


New York has a lot of data centers, but almost all of them are in upstate NY, which is quite stable. The same arguments can be made for California and Texas.

Sean Mulvihill

FEMA declares disasters primarily where high concentrations of people live, since there is rarely a need to issue a disaster declaration in places with small concentrations of people. There is also a funding factor. When a disaster is declared, the amount of help is usually tied to the concentration of people affected. Data centers are located in high concentrations of people because they demand skilled labor but mostly because, like the NY Stock Exchange example, the closer to the people the better the performance. Here’s a good way to test the uniqueness of the charts above. Swap out ‘where data centers are’ with ‘where people are’. You’ll see the same thing. Disasters, if measured not by effect on populations but on some other factor, would likely show up in every state. Nature, left alone from these kinds of gratuitous data comparisons, is disastrous everywhere!

Rani Molla

Hello Sean,
I see what you mean: FEMA disaster declarations tend to be in areas with high numbers of people because obviously more people are affected. What I was hoping to show with this article and charts is that data centers don’t necessarily have to be in one location or in especially dangerous locations.

Comments are closed.