Web applications deployed in one or a few data centers can watch their bandwidth costs exceed their server and hosting costs as they scale up, according to a paper written for an Alcatel-Lucent publication. The paper looked at what telecommunications companies, such as Verizon or CenturyLink, can offer as cloud providers. Its conclusion was that they can offer better control of the network, but it also assumed that these providers have more widely distributed data centers, which brings benefits of its own.
Those distributed data centers might become an advantage for webscale applications concerned about latency or bandwidth costs. One of the paper’s co-authors, Joe Weinman, is a great source of information on the economics of delivering cloud services, so I always look forward to reading his work. Although this paper is pitching a service provider cloud (which would be Alcatel-Lucent’s target market), I thought the graphs below were worth sharing. From the paper:
While a distributed data center architecture isn’t right for every service provider, Alcatel-Lucent Bell Labs’ modeling results confirm it’s cost-effective. In one scenario, Bell Labs examined the relationship between bandwidth consumed per subscriber for a data center-hosted application and the cost to deliver it. The model shows the cost of bandwidth per subscriber eventually exceeds the cost of operating the distributed data center.
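The crossover argument in that quote can be sketched with a toy model: centralized delivery pays a per-subscriber bandwidth cost that grows with consumption, while a distributed data center trades a higher fixed operating cost for a cheaper local-delivery bandwidth rate. Every number below is invented purely for illustration; the paper's actual model and figures are not public in this post.

```python
# Toy illustration of the Bell Labs crossover claim. All rates and
# opex figures here are hypothetical, chosen only to show the shape
# of the trade-off, not taken from the Alcatel-Lucent paper.

def monthly_cost_per_subscriber(gb_per_sub, bw_rate, fixed_opex):
    """Total monthly cost per subscriber: bandwidth charge plus amortized opex."""
    return gb_per_sub * bw_rate + fixed_opex

# Centralized: cheap infrastructure, expensive long-haul bandwidth.
def centralized(gb):
    return monthly_cost_per_subscriber(gb, bw_rate=0.08, fixed_opex=0.50)

# Distributed: higher fixed opex per subscriber, cheaper local delivery.
def distributed(gb):
    return monthly_cost_per_subscriber(gb, bw_rate=0.03, fixed_opex=1.50)

# Find the consumption level where the distributed design becomes cheaper.
crossover_gb = next(gb for gb in range(1, 1000) if distributed(gb) < centralized(gb))
print(crossover_gb)  # with these made-up numbers, 21 GB/month per subscriber
```

With these particular made-up numbers the lines cross at 21 GB per subscriber per month; the paper's point is simply that such a crossover exists once per-subscriber bandwidth keeps growing.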
The paper also discusses virtualized data center fabrics, which might bring joy to Juniper and its QFabric efforts, as well as to all the other vendors pushing fabrics. It also drives home the idea that server virtualization is not enough: the networking layer must be virtualized as well.