
Microsoft is spending hundreds of millions of dollars to build out its next generation of data centers to host its cloud computing offering, the Windows Azure Platform. While the company is clearly innovative in its data center designs and plans, the true reason behind its push toward the cloud may be its ability to turn a commodity product, bandwidth, into high gross profits. A quick analysis we did here at Panorama Capital shows that the commodity business of selling the transfer of bytes may be one of the most profitable parts of running a cloud service.

Azure charges 10 cents per gigabyte for the bandwidth to upload data and 15 cents per gigabyte to download it. The disparity in pricing, I believe, is meant to encourage developers to move lots of applications and associated data into Azure and then have lots of users access those applications from the same platform.

Let’s assume that a customer of Azure develops an application that downloads 10 gigabytes of traffic per day, 20 business days a month. That means the application downloads a total of 200 gigabytes of traffic in a month (and to make the point, let’s assume that the upload traffic is minimal). Azure charges the customer $30 per month for this bandwidth use (200 gigabytes times 15 cents per gigabyte), which seems like a small amount to pay.

If the customer’s application is only sending data and consuming bandwidth 12 hours a day (all of its users are in North America) over those 20 business days, the customer is effectively using 1.85 megabits per second of bandwidth during the month: 200 gigabytes is 1.6 million megabits, and 20 days times 12 hours times 3,600 seconds is 864,000 seconds of activity. Put another way, the customer’s $30 per month equates to a bandwidth charge of $16.20 per megabit per second ($30 divided by 1.85).
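The conversion above is easy to get wrong (Indrajeet’s comment below trips on exactly this), so here is a quick Python sanity check. The constants are the article’s assumptions: 20 business days, 12 active hours a day, and decimal units (1 gigabyte = 8,000 megabits).

```python
# Convert a monthly transfer volume into an effective sustained bandwidth
# figure, then price that bandwidth from the monthly bill.
GB_PER_MONTH = 200
ACTIVE_SECONDS = 20 * 12 * 60 * 60         # 864,000 seconds of activity
megabits = GB_PER_MONTH * 8_000            # 1,600,000 megabits transferred
effective_mbps = megabits / ACTIVE_SECONDS

monthly_bill = GB_PER_MONTH * 0.15         # Azure's $0.15/GB download rate
price_per_mbps = monthly_bill / effective_mbps

print(f"{effective_mbps:.2f} Mbps")        # ~1.85 Mbps
print(f"${price_per_mbps:.2f} per Mbps")   # ~$16.20 per Mbps
```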

Cloud service providers buy a lot of bandwidth to provide access to the Internet. My market research (albeit not exhaustive) puts the current price per megabit of bandwidth for a large cloud provider at around $8. That means that for the 1.85 megabits per second the customer uses, Azure effectively pays $14.81 to its bandwidth provider and keeps $15.19, a gross margin of about 51 percent. That’s not a bad margin for a commodity business like bandwidth. To be fair, other cloud service providers, like Amazon and Rackspace, charge similar or higher bandwidth fees and likely make similar gross profits.

While this example involves a fairly small charge to the Azure customer, if a customer builds an application on Azure that generates a lot of bandwidth, the costs and profits to Microsoft get substantial. For an application that downloads 200 gigabytes per day (4,000 gigabytes over the 20-day month), the bandwidth bill is $600 for about 37 megabits per second of usage (again assuming the application consumes bandwidth only half of each day for 20 business days a month). Of that $600, Microsoft pays its service provider $296.30 and keeps $303.70 in gross profit.
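The scaled-up example (200 GB of downloads per business day, i.e. 4,000 GB over the month, which is what the $600 figure implies) follows from the same model. A minimal sketch, with the article’s figures as defaults; the function name and structure are mine:

```python
def bandwidth_margin(gb_per_month, price_per_gb=0.15, provider_mbps_cost=8.0,
                     active_seconds=20 * 12 * 3600):
    """Revenue, upstream cost, and gross profit under the article's model."""
    revenue = gb_per_month * price_per_gb
    effective_mbps = gb_per_month * 8_000 / active_seconds  # ~37 Mbps at 4,000 GB
    cost = effective_mbps * provider_mbps_cost
    return revenue, cost, revenue - cost

rev, cost, profit = bandwidth_margin(4_000)
print(rev, round(cost, 2), round(profit, 2))  # 600.0 296.3 303.7
```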

And the model scales linearly: if Microsoft can build Azure up to the point where its customers’ applications collectively send 100,000 gigabytes of data a day, it will reap approximately $151,000 in daily gross profit off bandwidth. That equates to a lot of copies of Microsoft Windows sold daily, without requiring any packaged-software developers on staff. With these gross margins, one can easily understand why Microsoft is provisioning thousands of megabits per second of bandwidth to serve data from Azure applications.

So if you’re a cloud customer and your application sends a lot of data from the cloud, our analysis indicates that once you’re sending more than 50 gigabytes of data daily (a terabyte a month, costing you $150 on Azure, for example), it may make sense to leave the cloud and buy your own bandwidth to the Internet; you’ll probably save 50 percent of your monthly bandwidth charges. The trick will be moving your application from the cloud to your own infrastructure and dedicated bandwidth, and then finding the expertise to manage that environment. Cloud service providers are counting on that being a difficult trick to perform.
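The break-even claim in the closing paragraph can be sketched the same way. Both helper functions and the comparison point are mine; the prices ($0.15/GB on Azure, $8/Mbps for transit) and the half-day, 20-business-day usage pattern are the article’s assumptions:

```python
def cloud_bill(gb_per_month, price_per_gb=0.15):
    """Monthly download-bandwidth bill on the cloud provider."""
    return gb_per_month * price_per_gb

def diy_transit_cost(gb_per_month, mbps_price=8.0, active_seconds=20 * 12 * 3600):
    """Cost of buying equivalent transit yourself at $8/Mbps."""
    return (gb_per_month * 8_000 / active_seconds) * mbps_price

gb = 1_000  # ~50 GB per business day, the article's stated threshold
print(cloud_bill(gb), round(diy_transit_cost(gb), 2))  # 150.0 74.07
```

At that volume the do-it-yourself transit cost is roughly half the cloud bill, which is where the "save 50 percent" figure comes from (ignoring, as several commenters note below, the hardware and staffing you would also need).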

  1. I’m not convinced of the arguments in this for three reasons:

    * First, your estimate rests heavily on an assumed traffic pattern for the data, which seems to be pure conjecture. If you are off by a factor of two in utilization or bandwidth cost (quite plausible), then there may be no margin on bandwidth at all, or a margin of 75 percent.

    * Second, bandwidth is a frequently compared metric that providers have to compete on (it’s even easier to compare than compute rates, since a bit is largely a bit). Since switching costs are somewhat low, one would guess that margins on bandwidth are relatively low.

    * Third, even if the margin on bandwidth is 50%, the market is immature and one would expect the numbers to change over time. By argument #2, they are likely to go lower.

  2. Exactly.

    Because the only cost to serve bandwidth is the bandwidth itself.

    Not the routers. Or the switches. Or the servers. Or the sysadmins. Or the “Generation 4 Modular Data Centers”…

    “it may make sense to leave the cloud and buy your own bandwidth to the Internet –- you’ll probably save 50 percent of your monthly bandwidth charges. The trick will be moving your application from the cloud to your own infrastructure and dedicated bandwidth and then finding the expertise to manage this environment.”

    Right. And finding all that for free. You know, to save 50% on the bandwidth.

  3. Allan, 151,000 in bw profit a day is 1.5mm profit in 10 days, 15mm in 100 days, and 55mm in a year. I am skeptical this is a driver for anyone at their scale. Understanding that in some industries 55mm free and clear (before taxes) is rockstar, but I don’t think it is going to fly at the scale of msft. However, the margin is high, and if usage goes up by 10x from your estimate, we are now cooking with gas.

    /vijay

  4. The math on the last line of the fourth paragraph seems incorrect. We start with 15 cents per GB and end up with $16.20 per megabit?

    If you do the math, $30 per month divided by 1.85 megabits per second is (30/20/12/60/60/1.85*1000*8) = 15c per GB.

  5. @Skeumorph – Agreed that there are definitely costs to providing a bandwidth service. Still, this is a commodity service that you should be able to buy at $8/Mbps at volume.

    @vijay – Thanks – as cloud usage grows we may indeed be cooking with gas.

    @Indrajeet – it’s a cost to the customer of $30, and their effective bandwidth usage of the network is 1.85 Mbps for the month; $16.20 is $30 divided by 1.85. In your calculation, you need to start from the same place I did in my example: 200 GBytes transferred for the month.

  6. There are 3 major flaws in this article:
    1) The cost to a cloud company of providing bandwidth to a customer is much higher than what they pay their upstream providers. Routers, switches, redundancy, accounting, capacity planning, physical circuits, power, etc. make the upstream per-Mbps cost just one component.

    2) Apparently there are only three cloud service providers (Azure, Amazon and Rackspace), but really there are many, some of which, like NewServers.com, include 2,100 gbytes per month of bandwidth free with each server.

    3) The last paragraph about the “trick” of moving off the cloud makes little sense given the above. Cloud service providers are, on average, just about as cost-competitive as non-cloud servers. But of course you also get the affordable on-demand scaling that you can’t get with non-cloud services, so the economics are swayed far in the cloud’s direction.

    1. I agree with Scott, esp. his first point. Having a redundant upstream provider (i.e. being ‘multi-homed’) at the same commit rates effectively doubles your cost for the transit alone, assuming you buy at flat rates, which is becoming more common nowadays than the traditional 95th-percentile calculation of usage. Then add the redundant routers and the redundant data center space (you’re serving critical applications to your customers, which imo warrants the ‘extra’ redundancy), and the bandwidth itself can become a loss maker.

      Also, the article doesn’t take into consideration that transit capacity is bought for peak levels: if a handful of customers run their bandwidth-intensive applications at the same time, you’re likely to have to buy more capacity (on your redundant upstream link as well) than you would calculate by simply dividing a month’s actual traffic in gigabytes over time.
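For readers unfamiliar with the 95th-percentile billing mentioned in this thread: the carrier samples usage (typically every 5 minutes), discards the top 5 percent of samples, and bills the highest remaining sample. A minimal sketch; the function name and sample values are made up for illustration:

```python
def ninety_fifth_percentile(mbps_samples):
    """Bill at the highest sample after discarding the top 5% of samples."""
    ordered = sorted(mbps_samples)
    cutoff = int(len(ordered) * 0.95) - 1  # index of the highest survivor
    return ordered[cutoff]

# An app that bursts for 10% of samples is billed at its burst rate,
# which is why peaky traffic costs more than the monthly average suggests.
samples = [1.2] * 90 + [40.0] * 10
print(ninety_fifth_percentile(samples))  # 40.0
```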

      1. Your point about needing to buy redundancy is well made, but again, I am fairly certain that you can buy fully redundant tier-1 transit at around $8/Mbps. I don’t think the service providers offering this have priced it as a loss leader, as in some cases bandwidth is their main source of revenue.

  7. I think the part about moving to dedicated hardware leaves out the associated capex and opex costs. I don’t see that addressed in the analysis above.

    1. I agree that I did not take into account the hardware associated with providing the bandwidth service. That being said, service providers do offer bandwidth service at $8/Mbps, fully loaded cost. I doubt that they are doing this at a loss. I also doubt that they are getting the profit margins of cloud service providers.

  8. Bandwidth is only a portion of the COGS that make up cloud infrastructure and its potential margins. There are quite a few other COGS to include: hardware, software, sysadmins, netadmins, and not least the price per kW for the given infrastructure’s location(s); the kW cost is the last and least understood today, given the current emphasis on PUE/DCiE. (Most clouds today reside in one or maybe two physical datacenters. Is that truly a cloud?)

    Next is the actual migration to the cloud, which is, quite frankly, the largest hurdle to overcome. The “cloud” is still a largely misunderstood term that means about as much to folks as “managed services”; granted, the definitions are becoming clearer, but there is still education going on, both from the marketplace itself and from the cloud providers. Clouds today are in an adoption cycle much like datacenters were in the late nineties: right idea, a little early insofar as the timing, with the marketplace still in the early-adopters phase.

    1. I think the market players are very much to blame for the confusion about what the “cloud” is. The “cloud” is the catchy term right now, so everyone is using it to mean different things: sometimes it’s just managed services, sometimes a web service of some sort, sometimes a development platform. I agree with your points regarding COGS.

  9. This article is complete trash and 100% conjecture. Don’t you have anything factual and interesting to write about?

    1. I may think the article is weak, but calling it trash under an “Anonymous” post is pretty slimy. Don’t YOU have anything factual or interesting to write that you could put your name against? Oh wait, you’d rather trash other people’s hard work without even having the balls to put your name against your trashy post. Weak.

  10. Regardless of whether or not you believe each of the assumptions in Allan’s analysis, the comments illustrate how immature the market’s understanding of cloud and datacenter costs is (at least in terms of public discussion). Typically with market transformations, initial adoption is associated with a crude understanding of the ROI as the technology takes a front seat in serving new markets. As the enterprise begins to consider the cloud more thoroughly, we’ll see the discussion around costs and transformation ROIs improve dramatically. Building a holistic datacenter profit/cost model will drive further adoption, especially in this environment. Thanks to Allan & Panorama for pushing this conversation forward.
    It’s also clear that Google is driving toward an ideal end-model: a distributed mesh of data centers that sits at the elbow of the efficient frontier, balancing compute consolidation against latency minimization. As the technology (and cost structure) for each piece of the pie improves (compute consolidation, network bandwidth), that ideal point should fluctuate. It will be interesting to watch cloud market players gain and lose leadership positions based on their strategic infrastructure decisions and their ability to react to technology shifts.

    1. It is true that we could have changed the assumptions to skew the gross profit higher or lower; for example, if we assumed that the customer’s application used bandwidth at a steady state for each of the 20 days over the month instead of being idle for half of each day.

      I also agree that we need to see a holistic datacenter profit/cost model that people can use to make decisions about outsourcing all or any of their infrastructure.

