

Microsoft is spending hundreds of millions of dollars to build out its next generation of data centers to host its cloud computing offering, the Windows Azure Platform. While the company is clearly innovative in its data center designs and plans, the true reason behind its push toward the cloud may be its ability to turn a commodity product, bandwidth, into high gross profits. A quick analysis we did here at Panorama Capital shows that the commodity business of selling the transfer of bytes may be one of the most profitable parts of running a cloud service.

Azure charges 10 cents for the bandwidth to upload and 15 cents to download a gigabyte of data. The disparity in pricing, I believe, is meant to encourage Microsoft developers to move lots of applications and associated data into Azure and then have lots of users access the applications from the same platform.

Let’s assume that a customer of Azure develops an application that downloads 10 gigabytes of traffic per day, 20 business days a month. That means the application downloads a total of 200 gigabytes of traffic in a month (and to make the point, let’s assume that the upload traffic is minimal). Azure charges the customer $30 per month for this bandwidth use (200 gigabytes times 15 cents per gigabyte), which seems like a small amount to pay.

If the customer’s application is only sending data and consuming bandwidth 12 hours a day (all of its users are in North America) during 20 business days, the customer is effectively using 1.85 megabits per second of bandwidth during the month (200 gigabytes per month, converted to megabits, divided by the product of 20 days times 12 hours a day times 60 minutes an hour times 60 seconds a minute). Put another way, the customer’s $30 per month equates to a bandwidth charge of $16.20 per Mbps ($30 divided by 1.85 Mbps).
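The conversion above can be reproduced in a few lines of Python. This is just a sketch of the arithmetic; it assumes decimal units (1 GB = 8,000 megabits), which is what makes the figures in the text come out:

```python
# Effective bandwidth and per-Mbps price for the 200 GB/month example:
# 20 business days, 12 busy hours per day, billed at Azure's $0.15/GB.
GB_PER_MONTH = 200
BUSY_SECONDS = 20 * 12 * 60 * 60            # 864,000 seconds of active transfer
PRICE_PER_GB = 0.15                          # Azure download price, $/GB

megabits = GB_PER_MONTH * 8000               # 1,600,000 Mb transferred
mbps = megabits / BUSY_SECONDS               # ~1.85 Mbps effective rate
monthly_bill = GB_PER_MONTH * PRICE_PER_GB   # $30 per month
price_per_mbps = monthly_bill / mbps         # $16.20 per Mbps per month

print(f"{mbps:.2f} Mbps, ${monthly_bill:.2f}/month, ${price_per_mbps:.2f}/Mbps")
```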

Cloud service providers buy a lot of bandwidth to provide access to the Internet. My market research (albeit not exhaustive) puts the current price per Mbps of bandwidth for a large cloud provider at around $8 per month. That means that for the 1.85 Mbps that the customer uses, Azure effectively pays $14.81 to its bandwidth provider and keeps $15.19, a 51 percent gross margin. That’s not bad for a commodity business like bandwidth. To be fair, other cloud service providers, like Amazon and Rackspace, charge similar or higher bandwidth fees and likely make similar gross profits.

While this example involves a fairly small charge to the Azure customer, if a customer were to build an application on Azure that generates a lot of traffic, then the costs and profits to Microsoft get substantial. For an application that downloads 200 gigabytes per day (4,000 gigabytes per month), the monthly bandwidth bill is $600 for about 37 Mbps of usage (again assuming the application only consumes bandwidth half of the day for 20 business days a month). Of that $600, Microsoft pays its service provider $296.30 and keeps $303.70 in gross profit.
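The same arithmetic, scaled up to the larger example (the $8/Mbps wholesale price is the market estimate from above, not a published figure):

```python
# Gross profit on bandwidth for the larger example: 200 GB downloaded per day,
# 20 business days per month, 12 busy hours per day.
gb_per_month = 200 * 20                      # 4,000 GB transferred per month
busy_seconds = 20 * 12 * 60 * 60             # 864,000 seconds of active transfer
mbps = gb_per_month * 8000 / busy_seconds    # ~37.0 Mbps effective rate

revenue = gb_per_month * 0.15                # $600 billed at $0.15/GB
wholesale = mbps * 8                         # ~$296.30 paid to the upstream provider
profit = revenue - wholesale                 # ~$303.70 gross profit

print(f"{mbps:.1f} Mbps, revenue ${revenue:.0f}, "
      f"cost ${wholesale:.2f}, profit ${profit:.2f}")
```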

And the model scales linearly: if Microsoft can build up Azure to the point where its customers’ applications collectively send 100,000 gigabytes of data a day, it will reap approximately $151,000 in monthly gross profit off bandwidth (roughly $7,600 a day). That would equate to a lot of copies of Microsoft Windows sold without any packaged-software developers on staff. With these gross margins, one can easily understand why Microsoft is provisioning thousands of megabits per second of bandwidth to serve data from Azure applications.

So if you’re a cloud customer and your application will send a lot of data from the cloud, our analysis indicates that once you’re sending over 50 gigabytes of data daily (a terabyte a month, costing you $150 on Azure, for example), it may make sense to leave the cloud and buy your own bandwidth to the Internet; you’ll probably save 50 percent of your monthly bandwidth charges. The trick will be moving your application from the cloud to your own infrastructure and dedicated bandwidth, and then finding the expertise to manage this environment. Cloud service providers are counting on that being a difficult trick to perform.
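A quick check of that threshold, under the same assumptions as the earlier examples (the $8/Mbps transit price is still our market estimate, and note that real transit is typically billed on peak rather than average usage):

```python
# Compare Azure's per-GB billing with buying your own transit for the
# break-even case: 1 TB/month (50 GB/day over 20 business days),
# 12 busy hours per day, transit assumed at $8/Mbps per month.
gb_per_month = 1000
busy_seconds = 20 * 12 * 60 * 60             # 864,000 seconds of active transfer
azure_bill = gb_per_month * 0.15             # $150/month on Azure
mbps = gb_per_month * 8000 / busy_seconds    # ~9.26 Mbps effective rate
own_transit = mbps * 8                       # ~$74/month buying transit directly
savings = 1 - own_transit / azure_bill       # ~51% saved on bandwidth alone

print(f"Azure ${azure_bill:.0f} vs own transit ${own_transit:.2f} "
      f"({savings:.0%} saved)")
```

Of course, as the discussion below points out, this compares bandwidth charges only; the hardware, redundancy, and staffing needed to run your own infrastructure are not included.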


  1. I’m not convinced of the arguments in this for three reasons:

    * First, your estimate is heavily based on estimating the traffic pattern for the data, which seems to be done entirely through conjecture. If you are off by a factor of two in utilization or bandwidth cost (quite plausible), then there may be no margin on bandwidth, or a 75% one.

    * Second, bandwidth is a metric that providers are frequently compared and have to compete on (it’s even easier to compare than compute rates, since a bit is largely a bit). Since switching costs are somewhat low, one would guess that the margins on bandwidth are relatively low.

    * Third, even if the margin on bandwidth is 50%, the market is immature and one would expect the numbers to change over time. By argument #2, they are likely to go lower.

  2. Exactly.

    Because the only cost to serve bandwidth is the bandwidth itself.

    Not the routers. Or the switches. Or the servers. Or the sysadmins. Or the “Generation 4 Modular Data Centers”…

    “it may make sense to leave the cloud and buy your own bandwidth to the Internet; you’ll probably save 50 percent of your monthly bandwidth charges. The trick will be moving your application from the cloud to your own infrastructure and dedicated bandwidth and then finding the expertise to manage this environment.”

    Right. And finding all that for free. You know, to save 50% on the bandwidth.

  3. Allan, 151,000 in bw profit a day is 1.5mm profit in 10 days, is 15mm profit in 100 days is 55mm profit in a year. I am skeptical this is a driver for anyone at their scale. Understanding that in some industries, 55m free and clear (before taxes) is rockstar, but I don’t think it is going to fly at the scale of msft. However, the margin is high and if the usage goes up by 10x from your estimate, we are now cooking with gas.

    /vijay

  4. The math in the last line of the fourth paragraph seems incorrect. We start with 15 cents per GB and end up with $16.20 per megabit?

    If you do the math, $30 per month divided by 1.85 megabits per second is (30/20/12/60/60/1.85*1000*8) = 15c per GB.

  5. Allan Leinwand Friday, July 17, 2009

    @Skeumorph – Agreed that there are definitely costs to providing a bandwidth service. Still, this is a commodity service that you should be able to buy at $8/Mbps at volume.

    @vijay – Thanks – as cloud usage grows we may indeed be cooking with gas.

    @Indrajeet – it’s a cost to the customer of $30 and their effective bandwidth usage of the network is 1.85 Mbps for the month. $16.20 is $30/1.85. In your example, you need to start from the same place I did in my example – 200 GBytes transferred for the month.

  6. Scott Mueller Friday, July 17, 2009

    There are 3 major flaws in this article:
    1) The cost to a cloud company of providing bandwidth to a customer is much higher than its cost from its upstream providers. Routers, switches, redundancy, accounting, capacity planning, physical circuits, power, etc. make the upstream per-Mbps cost just one component.

    2) Apparently there are only 3 cloud service providers, Azure, Amazon and Rackspace… but really there are many, some of which, like NewServers.com, include 2,100 gbytes per month of bandwidth free with each server.

    3) The last paragraph about a “trick” being to move off the cloud makes little sense given the above. Cloud service providers, on average, are just about as competitive (on AVERAGE) with non-cloud servers. But of course you get all the affordable on-demand scaling that you can’t get with non-cloud services, so the economics are swayed far in the cloud direction…

    1. I agree with Scott, esp. his first point. Having a redundant upstream provider (i.e., being ‘multi-homed’) at the same commit rates for the transit effectively doubles your cost of the transit alone (assuming that you buy at flat rates, which is becoming more common nowadays than the traditional 95th-percentile calculation of usage). Then add, of course, the redundant routers and the redundant data center space (you’re serving critical applications to your customers, which imo warrants the ‘extra’ redundancy), and the bandwidth itself can become a loss maker.

      Also, the article doesn’t take into consideration that transit capacity is bought for ‘peak levels’: if you have a handful of customers who use their bandwidth-intensive applications at the same time, you’re likely to have to buy more capacity (also on your redundant upstream link) than you would if you simply divided the month’s actual traffic in gigabytes over time.

      1. Your point about needing to buy redundancy is well made, but again, I am fairly certain that you can buy fully redundant tier-1 transit bandwidth at around $8/Mbps. I don’t think that the service providers offering this have priced it as a loss leader, as in some cases bandwidth is their main source of revenue.

  7. I think the part about moving to dedicated hardware leaves out the associated capex and opex costs. I don’t see those addressed in the analysis above.

    1. I agree that I did not take into account the hardware associated with providing the bandwidth service. That being said, service providers do offer bandwidth service at $8/Mbps, fully loaded cost. I doubt that they are doing this at a loss. I also doubt that they are getting the profit margins of cloud services providers.

  8. Richard Donaldson Saturday, July 18, 2009

    Bandwidth is only a portion of the COGS that make up cloud infrastructure and its potential margins. There are quite a few other COGS to include: hardware, software, sysadmins, netadmins, and, not least, the price per kW at the infrastructure’s location(s); the latter is the least understood today, given the emphasis on PUE/DCiE. (Most clouds today reside in one or maybe two physical datacenters; is that truly a cloud?) Next is the actual migration to the cloud, which is, quite frankly, the largest hurdle to overcome, as the “cloud” is still a largely misunderstood term that means as much to folks as “managed services.” Granted, the definitions are becoming clearer, but there is still education going on, both from the marketplace itself and from the cloud providers. Clouds today are in that adoption cycle much like datacenters were in the late nineties: right idea, a little early insofar as the timing; the marketplace is still in the early-adopters phase…

    1. I think the market players are very much to blame for the confusion about what the “cloud” is. “Cloud” is the catchy term right now, so everyone is using it to mean different things: sometimes it’s just managed services, sometimes a web service of some sort, sometimes a development platform. I agree with your points regarding COGS.

  9. This article is complete trash and 100% conjecture. Don’t you have anything factual and interesting to write about?

    1. I may think the article is weak, but calling it trash under an “Anonymous” post is pretty slimy. Don’t YOU have anything factual or interesting to write that you could put your name against? Oh wait, you’d rather trash other people’s hard work without even having the balls to put your name against your trashy post. Weak.

  10. Jake Kaldenbaugh Saturday, July 18, 2009

    Regardless of whether or not you believe each of the assumptions in Allan’s analysis, the comments illustrate how immature the market’s understanding of cloud and datacenter costs is (at least in terms of public discussion). Typically with market transformations, initial adoption is associated with a crude understanding of the ROI as the technology takes a front seat in serving new markets. As the enterprise begins to consider the cloud more thoroughly, we’ll see the discussion around costs and transformation ROIs improve dramatically. Building a holistic datacenter profit/cost model will drive further adoption, especially in this environment. Thanks to Allan & Panorama for pushing this conversation forward.
    It’s also clear that Google is driving toward an ideal end-model: a distributed mesh of data centers that sits at the elbow of the efficient frontier, balancing compute consolidation and latency minimization. As the technology (and cost structure) for each piece of the pie improves (compute consolidation, network bandwidth), that ideal point should fluctuate. It will be interesting to watch as cloud market players gain and lose leadership positions based on their strategic infrastructure decisions and ability to react to technology shifts.

    1. It is true that we could have changed the assumptions to skew the gross profit higher or lower – for example, if we assumed that the customer’s application used bandwidth at a steady state for each of the 20 days over the month instead of being idle half of each day.

      I also agree that we need to see a holistic datacenter profit/cost model that people can use to make decisions about outsourcing all or any of their infrastructure.

  11. I wouldn’t describe the bandwidth costs as hidden, just as something that often gets overlooked. And in many cases I suspect that the bandwidth charges are being used to subsidize other services: Amazon can keep the hardware-related costs on EC2 lower by subsidizing them from the bandwidth fees. And not everything bottom-lines directly to the bandwidth cost; it depends on your specific needs and abilities. That’s the conclusion I came to after my look at the cost of one terabyte per month:

    http://josephscott.org/archives/2009/01/how-much-does-one-terabyte-of-bandwidth-cost/

  12. I’m not entirely sure it’s fair to call this a “Hidden Cost.” It’s quite unhidden, just like the cost of CPU cycles and data storage. This is all about providing a service to enable the sale of commodity products; bandwidth is one of those commodities, but certainly not the only one in this equation.

    @Jake – not sure if the comments are immature or if the article is a bit weak. There are some very constructive observations in these comments.

  13. One more point. The pricing for cloud services generally has to carry a relatively high cost at all three of the main cost points: bandwidth, cpu and storage. Every application has a different footprint. For example, a Box.net web site running on a cloud platform would be heavy in bandwidth and storage, but prob light on cpu. As a result, you can’t simply generalize about the point at which it becomes sensible to “buy your own bandwidth.” Furthermore, the complexities of operating your own environment may quickly outpace any cost benefits achieved. The point is, you can’t really paint this with a single brush. All the costs need to be carefully examined before making a decision to jump onto a cloud environment.

    1. You’re spot on that every application has a different footprint and cloud computing customers need to really understand what their costs will be in the environment that they choose. I would contend that bandwidth intensive applications should consider using a hosted service where bandwidth could be procured for a much cheaper cost.

  14. Unfortunately, the math is wrong. You cannot take $30 per month and divide by 1.85 Mb per second. You need to convert the months into seconds, or vice versa, because right now your number is off by a factor of about 2.6 million, which is how many seconds are in a month. The real cost should be $0.000006 per Mb.

  15. Jake Kaldenbaugh Saturday, July 18, 2009

    @Roman – I didn’t say the comments are immature; I said the market’s understanding of cloud & datacenter costs is immature, where immature = newly forming, not an evaluation of their ability to interact with society. I agree that most of the comments are of very high value, but they still demonstrate a wide variation in expectations and understanding. As one commenter pointed out, a couple of degrees of swing in some of the assumptions drastically changes the conclusion. That’s an issue that will affect market adoption. The more uncertainty that can be removed from the market’s perception of these issues, the faster real market adoption will be.

  16. Chris Albinson Saturday, July 18, 2009

    The discussion in this thread is fascinating and just goes to show that the economics are still very much in play for both the suppliers and the buyers. My guess is that ultimately all the suppliers start to look like Las Vegas casinos. Put simply: when you spend billions of dollars building something, you need to find a way to turn it into a big cash machine.

    Where you see the word “casino,” replace it with “cloud service”:
    (i) there will be lots of buzz marketing to get you to pay attention to their casino, and then a “free” automatic walkway to take you into the casino
    (ii) they will find lots of ways for you to spend your money once you are in the casino: gambling, food, entertainment, etc. All designed to look enticing, but with the end effect of draining you of all your cash and then some.
    (iii) it will be very, very inconvenient to leave: long lines for expensive taxis and no automatic walkway…

    Caveat emptor

  17. Here are two articles on the same topic, from the service provider perspective. Apparently there is a powerful profit model in cloud storage:

    Citrix and Amazon: Not the Best Deal for Service Providers
    http://cloudstoragestrategy.com/2009/05/citrix-and-amazon-not-the-best-deal-for-service-providers.html

    and

    Cloud Storage: The Profit Model for Cloud Computing
    http://cloudstoragestrategy.com/2009/05/post.html

    The argument being made is that IT hosting providers should not give up their share of the profits to Google, Amazon, or Microsoft Azure!

  18. David Robins Sunday, July 19, 2009

    My company (binfire.com) provides file hosting and collaboration tools. We have decided not to use S3 or Azure, due to the bandwidth cost, until prices come down. The cost of storage, as everybody knows, is coming down; it is the price of bandwidth that is the primary factor. We have our own dedicated line, and so far it has been cost-effective for us.

  19. I had mentioned, as one of the major flaws of this article, that capacity planning was not taken into account for bandwidth. In light of the other comments, I just want to explain what this means. Basically, you have to provision for more bandwidth than your customers use on average: bigger circuits and bigger commitments to upstream providers. And you don’t pay for average usage at $8/Mbps. In the article’s example, it was assumed someone was using 200 gbytes/month for half of each of 20 days. That was estimated to be 1.85 Mbps, but that’s just an average, NOT what the upstream will bill at! Upstream providers bill on peak usage (or close to it, using top-percentile billing). You could very easily be costing your provider double your average usage…

    Bottom line: I challenge anyone to show that cloud services charge more for bandwidth than non-cloud services. Sure, Azure and Amazon maybe charge a little above average for bandwidth compared to a traditional dedicated-server provider, but there are other cloud server providers, like NewServers.com, which charge a lot less (basically nothing)…

    As to the comments about the immaturity of the market, I also disagree. The market seems to be very well understood now and I can’t imagine why anyone would still choose traditional dedicated server and colocation models over cloud computing unless very special hardware was needed or other special circumstances.

  20. Virtual Web Symphony Sunday, July 19, 2009

    Clouds are still not popular, and it’s going to take a pretty long time before we really embark on our journey to cloud computing.

  21. Richard Donaldson Sunday, July 19, 2009

    @ Chris – great commentary: I really like the Vegas analogy, as I can see it in a variety of other areas, most specifically the entire datacenter/infrastructure management platform space (a proxy for cloud adoption and management, imho). Many people can be identified as having dropped into this rabbit hole and never come out of it until they had sunk millions into something that doesn’t quite work as advertised…

    @ Scott – I am very curious to see why you think that this market is mature; the operational quandaries and confusion blogged about nearly daily seem to indicate otherwise. No one argues that “cloud” is “hip,” but is it really mature?

    Lastly, the entire concept of the cloud, imho, revolves around a highly interconnected set of datacenters that are, from the get-go, managed entirely as if the datacenter were part of the computing platform. That allows our IT best practices of centralized management to take root, providing the ability to script and automate heretofore manual run-book processes (like a technician going and writing down CRAC/UPS stats versus seeing them on a dashboard) and allowing algorithms to come into play that shift data in and out of datacenters as, say: a) seasonal electrical prices fluctuate, b) impending weather looms (think Miami hurricane season), or c) you need to get your computing edge closer to the customer for latency requirements. All of this is possible, yet not in evidence in today’s offerings (hence the lack of maturity). Clouds are the utility computing model come to reality (as Nicholas Carr talks about), but are still in the infancy stage of deployment and development. We are also completely avoiding the sales/marketing aspect of this, which is something I talk about A LOT: the highly consultative sale that must be enabled is part of the vendor “earning the trust” of the client. Outsourcing your company’s sacred data is not to be taken lightly; that trust must be earned through consistently good services that are always evolving for the better (think Myspace having lost the social wars here) to remain in the leadership position.

  22. Interesting calculations; however, you completely forgot about peering. If you had considered it, you could have argued that Microsoft doesn’t even need to pay for all of its bandwidth, as it can peer away a significant percentage, saving it and the receiving networks significant amounts in transit fees.

    1. You’re right – if a cloud provider can peer and save on transit fees, then its bandwidth gross profit would increase.

  23. @Richard – the reason the cloud computing market (the kind of cloud computing mentioned in the article, which Amazon, Azure and Rackspace are offering) is mature is that dedicated/virtual servers and storage have been around for a long time. Market leaders have it down to a science, and the product has become a commodity. What is new between that old model and the new cloud computing model discussed here is on-demand scalability and hourly billing. While I believe those two features are extremely valuable and important, they don’t really turn everything upside down from an operational perspective. And if you’re choosing a hosting provider that doesn’t offer those cloud features… why?

    @Raindeer – you’re correct about peering. A company can save a significant amount of money by peering. But as I and others have said above, that’s only one aspect of the cost of delivering bandwidth to servers or data. Bandwidth is an ultra-competitive market, and the big providers pay all the way down to $0 for bandwidth, but they still need to charge money because it does in fact cost them to manage it.

  24. William B. Norton Sunday, July 19, 2009

    Hmmm. The Business Case for Peering for Cloud Providers…. /Me envisioning a research white paper

  25. Dave Asprey Sunday, July 19, 2009

    Allan is right. Higher costs for bandwidth in the cloud vs dedicated hosters/colos/ ISPs will drive some apps away from the cloud. So will the higher latency introduced by the cloud networks (higher latency in-data center) and the lack of bandwidth shaping or advanced bandwidth controls that are necessary for high end at-scale apps. If you don’t believe that, try sending traffic between two Amazon EC2 zones and see if you can predict what throughput you’ll get. Or read my earlier GigaOm post titled “What Intel Can Teach Google About the Cloud.”

    A bizarre form of cloudbursting would make sense here, related to what content providers have been doing with CDNs for years. Content providers host their own pages but refer the high capacity ones to CDNs. If I was launching a new bandwidth intensive app today, I’d rely on the cloud for my user subscription and account admin features as much as I could in order to have quick scale and usage-based pricing, but I’d put the commodity storage intensive stuff in my own well-peered data center or on a cheaper service. Many apps can run partly in the cloud if the economics justify that kind of architecture.

  26. The Hidden Cost of the Cloud: Bandwidth Charges | Digital Asset Management Sunday, July 19, 2009

    [...] Continues @ http://gigaom.com [...]

  27. @Dave – it is a myth that there are “higher costs for bandwidth in the cloud vs dedicated hosters/colos/ISPs.” And I don’t understand the statement that higher latency is introduced by cloud networks… why would latency be higher in the cloud vs traditional hosting companies?

    I think you’re making the mistake of Amazon == all cloud companies.

  28. satish sharma Monday, July 20, 2009

    What’s up with GigaOm – is it going through amateur hour?

    The $8/Mbps cost is for “raw bandwidth” that someone drops at your front door; you have to carry that into the servers, in and out, and provide redundancy in the network, in the building, and in your cloud on top of it.

    You have to pay people to manage these switches and routers. I don’t see the margins being much different from your pizza store’s or McDonald’s.

  29. Tech News » Toshiba dances on HD DVD’s grave, gets in bed with Blu-ray Monday, July 20, 2009

    [...] The Hidden Cost of the Cloud: Bandwidth Charges [...]

  30. GNC-2009-07-20 #495 Do you Like Math? | Everything about everything Monday, July 20, 2009

    [...] about Twitter. Real Password Issues. Data Says were going Mobile! T-Mobile Going Socials? What For? The Real cost of the Cloud! Time Warp and S3 Backups Kazaa goes Legit New Apollo 11 Images! AT&T Loosing Voice mail Can [...]


  32. Internet Marketing, Strategy & Technology Links – July 21, 2009 « Sazbean Tuesday, July 21, 2009

    [...] The Hidden Cost of the Cloud: Bandwidth Charges (GigaOM) [...]

  33. If this is a general discussion about the costs of cloud computing, it seems that no one is taking into consideration the folks who are actually using these services: not located in a data center, but in offices around the planet, where Internet access is not NEARLY as inexpensive as $8/Mbps. A company considering moving to cloud services must consider all the hosting and datacenter charges PLUS the biggie: the increased bandwidth costs it will need to access those services from the office. You cannot buy office business-class connectivity anywhere near the $15/Mbps pricing that the cloud boys are quoted herein as charging, so I say a company would need to multiply its bandwidth costs by more like 10-50 times that much.

    So the fact is that there are even more hidden costs than discussed here for companies wanting to go this way, and this will shape the market for cloud computing. There will be a sweet spot for these services, but it will be weighted toward smaller businesses; large businesses will have the infrastructure to handle this in house AND have offsite locations (other business locations) for their own data centers. Hardware is cheap, and in-house bandwidth is free. I’m not so sure this whole “cloud computing” thing will take off like everyone thinks. It is a good market, but in-house servers are not going away by any stretch.

  34. Are Clouds Green ? | Paul Miller – The Cloud of Data Wednesday, September 23, 2009

    [...] The Hidden Cost of the Cloud: Bandwidth Charges (gigaom.com) [...]

  35. 1999-2009: How Broadband Changed Everything – GigaOM Wednesday, December 23, 2009

    [...] All have made for a broadband-enabled life. In the meantime, a new era of grid computing, known as cloud computing, has begun, courtesy of Jeff Bezos’s amazing house on the hill, [...]

  36. Daniel Berninger Wednesday, November 3, 2010

    The key obstacle in assessing the cost of a cloud offer remains the lack of an industry-wide measure of cloud processing capacity. We have GB for memory, TB for storage, and GB for bandwidth transferred, but no comparable unit for compute.

    The Cloud Price Calculator (http://cloudpricecalculator.com) addresses this by adopting Amazon’s ECU as the compute metric at 1ECU = a 400 Passmark score.

    Combining all the resources and dividing by price yields the Cloud Price Normalization index and a ranking of cloud offers. Interestingly, the ranking shows Amazon’s newer instances provide more value than the older ones as Amazon has rarely reduced prices after introducing an instance.
