
Summary:

RBC Capital Markets wants to help. Its new research compares the positioning and price/performance of many cloud options. Some of the findings confirm what we know; others are a tad surprising.


Comparing the price and performance of various clouds is like trying to nail Jell-O to the wall — this is a fluid, messy market with lots of players, lots of options and lots of price changes. Not that that keeps folks from trying: Cloud Spectator came out with its take last week, and RBC Capital Markets followed with its own comprehensive analysis on Sunday.

Much of RBC’s report confirms a lot of what we think we know: clouds that offer little customization or support tend to be cheaper than those with more enterprise-y hand holding and other niceties that CIOs have come to expect. It doesn’t help that the lines blur all the time — Amazon Web Services, for example, is adding more enterprise-type support options while HP, Microsoft, VMware and others push their respective enterprise clouds as massively scalable alternatives to AWS.

(Chart: RBC Capital Markets)

The research also confirms that in cloud, as in life, price isn’t everything, even though people tend to focus on perceived cost savings from cloud over traditional IT deployment. According to the report — by RBC analyst Jonathan Atkin and colleagues — price/volume elasticity often leads to “minimal scale benefit — except for storage.”

“For memory and CPU, we found mostly flat pricing in relation to volume. At some vendors, we found negative scale effects, suggesting that some customers are forced to overbundle services as requirements increase.”

What this means is that all those price cuts we keep hearing about don’t necessarily amount to a hill of beans since cloud spending encompasses so many components — bandwidth, storage, I/O. A price slice in one area may not do much overall.

And, RBC found that scaling up a workload did not necessarily decrease the unit pricing of a single resource, as might be expected. That does tend to happen with object storage, where the cost per GB typically falls as the stored volume grows. But the price curve for other resources can be flat or even rise as the workload grows.
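To make that concrete, here is a toy illustration of how a flat or upward-sloping unit-price curve shows up when you divide each instance’s hourly price by the resources it includes. The instance specs and prices are invented for illustration; they are not RBC’s figures.

```python
# Toy illustration of unit-price curves across instance sizes.
# These specs and prices are invented; they are not RBC's data.
instances = [
    # (name, vCPUs, GB RAM, $/hr)
    ("small",  1, 2, 0.060),
    ("medium", 2, 4, 0.120),   # exactly 2x small: a flat unit-price curve
    ("large",  4, 8, 0.245),   # a bit more than 4x small: an upward-sloping curve
]

for name, vcpus, ram_gb, price in instances:
    print(f"{name:>6}: ${price / vcpus:.4f} per vCPU-hr, "
          f"${price / ram_gb:.4f} per GB-RAM-hr")
```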

 “… to our surprise, many of the non-storage pricing curves are flat, meaning that unit costs (e.g., for memory or CPU) do not decline as configurations increase in size. Many vendors fall into this category, including HP, Rackspace, Joyent, Google, CloudSigma, and Amazon. Even more surprising is the upward sloping price curve we encounter in some cases, e.g., with IBM/Softlayer, where unit costs increase for local storage and CPU for larger workloads.”

RBC said this happens when vendors “overbundle” or offer the customer a bigger overall configuration to handle the increased load. That would be fine and cost-efficient if all those resources get used but, in reality, the customer often ends up using just a part of all that extra capacity.
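Here’s a back-of-the-envelope sketch of why that hurts: once a workload outgrows one fixed configuration, the customer pays for the next size up whether or not it is fully used, so the effective cost per unit actually consumed climbs. All numbers below are made up for illustration.

```python
# Toy sketch of "overbundling": a growing workload is forced onto a bigger
# fixed-size configuration than it needs, so the effective unit cost of the
# resources actually used rises even though the list price stays flat.
# All numbers are invented for illustration.
LIST_PRICE_PER_VCPU_HR = 0.06      # hypothetical flat list price
BUNDLE_SIZES = [2, 4, 8, 16, 32]   # hypothetical fixed instance sizes

def effective_unit_cost(vcpus_needed):
    """Hourly price per vCPU actually used when a whole bundle must be bought."""
    bundle = min(b for b in BUNDLE_SIZES if b >= vcpus_needed)
    return bundle * LIST_PRICE_PER_VCPU_HR / vcpus_needed

for need in (3, 6, 10, 20):
    print(f"need {need:>2} vCPUs -> effective ${effective_unit_cost(need):.4f} per vCPU-hr")
```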

(Chart 3: RBC Capital Markets)

3 Comments

  1. Hi Barb,

    I think “overbundle” may be my new favorite term. Kudos to the RBC folks for coming up with a word that so precisely describes the problem of cookie-cutter-sized instances. I’m trying to get my hands on the actual report to see if there’s more, but that last diagram doesn’t go quite far enough, as it employs this t-shirt-size thinking.

    While I love the idea of showing hourly equivalent costs per vCPU, what do “small”, “medium” and “large” mean here? Same for “Standard” and “High CPU”.

    Someone coming from on prem who is interested in the cloud understands a need like “2 CPUs, 10 GB RAM, and 100 GB HDD” but not “small standard”. Perhaps the full report goes into this detail, but I’d much rather see a direct comparison in units of measure that the masses understand instead of just the cloud cool kids.
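    Here’s a rough sketch of the kind of normalization I mean (the specs and prices are placeholders, not any vendor’s real catalog):

    ```python
    # Translate t-shirt sizes into concrete units and compare $/vCPU-hr directly.
    # These specs and prices are placeholders, not any vendor's actual catalog.
    catalog = {
        # name: (vCPUs, GB RAM, GB HDD, $/hr)
        "small standard":  (1, 2, 20, 0.06),
        "medium standard": (2, 4, 40, 0.12),
        "large high-cpu":  (8, 7, 80, 0.58),
    }

    for name, (vcpus, ram, hdd, price) in catalog.items():
        print(f"{name}: {vcpus} vCPU / {ram} GB RAM / {hdd} GB HDD"
              f" -> ${price / vcpus:.3f} per vCPU-hr")
    ```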

    Pete Johnson
    Cloud Platform Evangelist
    ProfitBricks

  2. There’s a lot more detail in the report, Pete. I just picked one salient point…

  3. Robert Jenkins Tuesday, October 29, 2013

    Thanks for the clear outline, and for highlighting some aspects of IaaS pricing that are becoming better and better understood, along with their implications for customers!

    You mention that “… to our surprise, many of the non-storage pricing curves are flat” and list CloudSigma as one of the vendors with flat pricing; however, I wanted to point out that this isn’t the case! We don’t have bundled resources, so there are no server definitions as such. We couldn’t offer a per-VM discount level anyway; that would be nonsensical in a utility computing environment where resources are sold irrespective of VM size. Instead, we offer volume discounts by resource, based on aggregate account consumption. This allows customers to buy resources unbundled and still be priced competitively.

    This makes sense: ultimately we are talking about VIRTUAL machines here, so who cares if you have 100 small ones or 10 big ones? The total scale in terms of resource consumption is what matters. That’s why we offer volume discounts of up to 42.5% on resources.

    Doing this ensures that public cloud can compete with in-house solutions as customers scale. It is combined with time discounting of up to 45%. So if a customer buys, for example, the equivalent of five racks of computing, they’d qualify for the full volume discounts; and when comparing with dedicated hardware, you’d take a two- or three-year purchase period (which would be the hardware lifetime). The result in CloudSigma would be a total discount of just over 68% off regular one-month subscription prices. At those levels we genuinely compete with dedicated in-house solutions.
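    For anyone who wants to check the arithmetic, here’s a quick sketch, assuming the two discounts compound multiplicatively on the one-month price:

    ```python
    # Volume and time discounts compounding on the one-month subscription price.
    volume_discount = 0.425
    time_discount = 0.45

    combined = 1 - (1 - volume_discount) * (1 - time_discount)
    print(f"combined discount: {combined:.1%}")   # 68.4%, i.e. "just over 68%"
    ```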
