Summary:

Amazon dedicated instances take some of the “shared” out of shared cloud infrastructure. Depending on your point of view, that’s a good or not so good thing.

Photo: AWS Summit (Barb Darrow)

Dedicated cloud instances are something of an oxymoron. The traditional understanding of cloud computing is that it runs workloads from many users on shared infrastructure. Dedicated instances, by definition, aren’t shared, and so they could appeal to a class of users who worry about the drag that neighboring workloads can have on their jobs — the so-called “noisy neighbor” problem. They may also suit those who see shared infrastructure as non-compliant with various industry regulations.

There are enough companies worried about such things that Amazon Web Services launched dedicated instances in March 2011 and significantly cut their prices in July 2013. Since that price cut — which amounted to nearly 80 percent in some cases — the use of AWS dedicated instances has risen significantly, according to Cloudyn, which monitors AWS and Google cloud usage for customers.

In a blog post, Cloudyn VP of marketing Eron Abramson wrote that before July 2013, dedicated instances were hardly used at all. But now, nine months after the price cuts, 0.5 percent of the instances it monitors are dedicated. (Cloudyn said it has eyes on 8 percent of total AWS workloads.)

Chart: Percentage of workloads running on AWS dedicated instances by region.

Chart: Regional breakdown of Cloudyn’s AWS customers.

The price differential between dedicated and regular on-demand instances is now about 10 percent, although dedicated instances also incur an additional run-time charge of $2 per hour per region — which for big companies is “outweighed by the advantages and peace of mind of having your own dedicated hardware,” Abramson wrote.
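
To make that trade-off concrete, here is a back-of-the-envelope sketch of how the flat regional fee amortizes as a fleet grows. The per-hour instance rate is an assumed placeholder, not real AWS pricing; only the roughly 10 percent premium and the $2-per-hour regional charge come from the article.

```python
# Rough cost comparison: shared-tenancy vs. dedicated instances in one region.
# ON_DEMAND_RATE is an assumed placeholder, not actual AWS pricing.
ON_DEMAND_RATE = 0.10      # assumed $/hour for one shared-tenancy instance
DEDICATED_PREMIUM = 1.10   # dedicated instances cost roughly 10% more per hour
REGION_FEE = 2.00          # flat $2/hour charged per region running dedicated instances

def hourly_cost(num_instances: int, dedicated: bool) -> float:
    """Total hourly cost for a fleet of identical instances in a single region."""
    rate = ON_DEMAND_RATE * (DEDICATED_PREMIUM if dedicated else 1.0)
    fee = REGION_FEE if dedicated else 0.0
    return num_instances * rate + fee

for n in (10, 100, 1000):
    shared_cost = hourly_cost(n, dedicated=False)
    dedicated_cost = hourly_cost(n, dedicated=True)
    print(f"{n:>4} instances: shared ${shared_cost:,.2f}/hr vs. "
          f"dedicated ${dedicated_cost:,.2f}/hr "
          f"(+{(dedicated_cost / shared_cost - 1) * 100:.0f}%)")
```

At small fleet sizes the flat fee dominates, but at hundreds of instances the overall premium converges toward the 10 percent differential, which is why Abramson argues large companies can shrug it off.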

Server Density CEO David Mytton said AWS needs to offer dedicated resources since rivals offer very fast bare-metal capabilities, which he argues can also be cheaper than shared cloud infrastructure in some use cases. Mytton and others will address that topic at Structure in June.

“Everyone knows the noisy neighbor problem and AWS doesn’t have a great historical reputation for performance on that front. It’s one reason why SoftLayer has an advantage with their fast deployment of bare metal that works alongside their cloud, so you can easily move workloads around,” he said via email. SoftLayer, bought by IBM last year, is now the core of IBM’s cloud computing story.

Dedicated instances? Meh.

Others don’t see dedicated instances as a real advantage.

For one thing, performance drag comes more from virtualization itself than from noisy neighbors, said Joe Emison, CTO of BuildFax, an avid cloud user. “And, noisy neighbors do more damage with respect to network traffic than anything else.” If that is the case, Amazon’s IOPS-optimized Elastic Block Store and the scaling options in S3 storage and the CloudFront content delivery network address those issues.
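
For readers who want to see what the two approaches look like in practice, here is a minimal boto3 sketch contrasting provisioned-IOPS storage for I/O isolation with dedicated tenancy for single-tenant hardware. It assumes AWS credentials and a default region are already configured; the AMI ID, volume size, and IOPS figure are placeholder values, not recommendations.

```python
# Minimal boto3 sketch: provisioned-IOPS EBS vs. a dedicated-tenancy instance.
# Assumes AWS credentials/region are configured; AMI ID, size and IOPS are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Option 1: address I/O contention with a provisioned-IOPS (io1) EBS volume.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB
    VolumeType="io1",
    Iops=3000,         # guaranteed IOPS, independent of noisy neighbors
)

# Option 2: sidestep shared hardware entirely with a dedicated-tenancy instance.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},  # run on single-tenant hardware
)

print(volume["VolumeId"], reservation["Instances"][0]["InstanceId"])
```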

“Buying dedicated for performance is a bit like buying premium gasoline to make your Honda Accord perform better — it probably does, but shouldn’t you be doing something different if you want to see significant results?” Emison said.

He also discounted the compliance argument. It’s true that some auditors don’t sign off on the use of multi-tenant environments, but those auditors will also “not be OK with other things AWS does even with dedicated instances,” he said. For example, many companies want to be able to tour data center facilities, something that Amazon does not allow, although it does provide a list of third-party certifications and how they are attained.

This story was updated at 8:27 a.m. April 23 to reflect that Amazon does document security certifications for its data center sites.

 

  1. Cloud Insider Tuesday, April 22, 2014

    Dedicated instances usage is a tiny % of overall instances in EC2. Maybe it is a significant portion of Cloudyn’s user base (and I’m not sure how big their user base is), but overall it is a very small %. Don’t get carried away by the numbers above; they are not representative of EC2 overall. That’s part of the problem with having small vendors like Cloudyn or RightScale make generalizations about AWS based on the extremely small % of AWS that they have visibility into.

  2. @Cloud Insider Thank you for your comment.

    Cloudyn currently monitors over 8% of AWS workloads and is thus able to present our findings accurately from a large source of data consisting of EC2, storage and more. You are welcome to contact us directly, and we would be happy to provide further insight into how these numbers and insights are derived.

    1. “Cloudyn currently monitors over 8% of AWS workloads” – no, you don’t. First off, to know the percentage you would need to have access to the AWS backplane, which is not open to external parties. And if you were really monitoring that large a %, you would need massive infrastructure, far more than what a company of your size has.

      That also probably explains why your numbers are so far off from reality. I wish I could give out the real number, but I think that is confidential information, sorry.

      I don’t mean to be harsh, sorry if I am coming across that way. I think you guys have a good product. Just pointing out though that your 8% number and the dedicated instance numbers are way off.

  3. NephoScale is a small IaaS provider doing some interesting things in the area of data center automation. In addition to providing traditional on-demand multi-tenant virtual servers, it provides its customers with on-demand dedicated bare metal servers with no hypervisor loaded on them. Meaning, the customer gets a “true” bare metal server “truly on-demand,” all on the same low-latency private broadcast domain as their virtual servers. There are obvious performance and security advantages that come along with this capability, such as having no hypervisor overhead, no network port contention, no bus contention, and the security of a single-tenant environment. When it comes to scaling databases it is hard to get better value than with bare metal servers, as they usually save money, simplify server mgmt, and increase performance.

    The things that not many people are talking about yet, though, are the added possibilities that on-demand, API-controllable bare metal server provisioning can bring to the table, such as cloud-in-a-cloud, on-demand hosted private cloud, and hybrid cloud capabilities. With a bare metal server, users can load anything they choose, and that includes creating on-demand and auto-scaled VMware, OpenStack, Eucalyptus, Cloud Foundry, etc., clusters — and where it is needed, users can run any hypervisor of their choice.

    The keys to unlocking the true potential of bare metal servers are that they must be truly on-demand, API programmable, able to use the same server images as the virtual servers, and able to be merged into any one of a user’s private networks or broadcast domains (multi-tenant virtual servers must always by default reside on a separate network from bare metal servers). If these capabilities are available then, especially with advanced single-click template orchestration technology, the list goes on as to what different things can be done by adding true on-demand bare metal provisioning into a data center environment. But so far it has not been easy for providers to offer this bare metal capability, for several reasons surrounding programmatic control, provisioning, inventory mgmt, image usage, and networking. Bare metal servers are, in the end, just another element within the data center that can be effectively managed through scripts, an API, or a UI.

    Only having access to 100% virtualized environments is not enough for many organizations, for many reasons. The use cases where this applies are numerous, but not talked about a lot yet within the current cloud user base. But then again, the current cloud user base is still a tiny fraction of the total IT spend worldwide, so the conversation will likely change as new use cases come “to the cloud.”

  4. Whether in the cloud or on-premises, all too often the fix for poorly performing applications is to throw more hardware/horsepower at them. I see this as just another excuse to write poor code.

  5. There are always compromises, and writing “perfect” code does not always provide the best cost-benefit ratio either. The reality is that there are often benefits from working on both: getting the most bang for your buck on hardware performance, and writing the cleanest, most efficient code you can. It would not make sense for someone to pay more for less performance, no matter what their code looks like. Business managers need to make sure the company is getting the best return on its investment and is speeding time to market to capture time-sensitive opportunities. NephoScale is doing its part to provide more performance at less cost, and our customers work on writing better code. It’s a team effort.


Comments have been disabled for this post