Want to beat Amazon in the cloud? Here are 5 tips (hint: none are cost)


It’s been a little over seven years since AWS launched S3 and EC2. Back then, it was exciting to spin up servers in just a few minutes without signing long-term contracts. But fast-forward to 2014: there are dozens of IaaS providers offering similar capabilities. The selling points — like self-service, zero CAPEX and elasticity — that once made the cloud look exciting are not as appealing anymore, and they are no longer differentiating factors. In the current context, selling cloud for its self-service capabilities is similar to Microsoft trying to sell the latest version of Windows only for its graphical interface. Both have become commodities, with customers taking those features for granted.

So, how do new IaaS providers entering the market compete with AWS, the undisputed market leader? Here are 5 tips to beat AWS on its own turf.

1. Keep it simple, stupid. AWS has become too complex to deal with — not just for newbies but even for its existing customers. Customers have to choose from 36 services, 20 instance types, 6 instance families, 2 generations of instances, 3 types of billing models and 2 types of block storage options! Accurately predicting the first month’s bill is no less challenging. You have to guess the number of I/O operations, intra-region data transfer, inter-region data transfer, and the data that will pass through the Elastic Load Balancer (ELB) before you can arrive at even a rough estimate of the monthly outflow. The AWS Simple Monthly Calculator is not exactly simple. If you want to compete with AWS, make your offerings truly simple. Don’t overwhelm customers with too many choices. Make it easy to launch their first server and make it much easier to predict the cost.
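To see why bill prediction is so hard, here is a minimal sketch of the guesswork involved. All the rates below are hypothetical placeholders for illustration, not actual AWS prices, and the line items are a simplified subset of a real bill:

```python
# Illustrative sketch of estimating a monthly IaaS bill up front.
# Every rate here is a hypothetical placeholder, NOT an actual AWS price.

HOURS_PER_MONTH = 730

RATES = {
    "instance_hour": 0.10,      # $/hour for one mid-size instance
    "block_gb_month": 0.10,     # $/GB-month of block storage
    "million_io": 0.10,         # $/million I/O requests
    "intra_region_gb": 0.01,    # $/GB transferred between zones
    "internet_out_gb": 0.12,    # $/GB transferred out to the internet
    "elb_hour": 0.025,          # $/hour per load balancer
    "elb_gb": 0.008,            # $/GB passing through the ELB
}

def monthly_estimate(instances, storage_gb, million_ios,
                     intra_region_gb, internet_out_gb, elb_gb):
    """Sum the line items a customer has to guess before month one."""
    return (
        instances * RATES["instance_hour"] * HOURS_PER_MONTH
        + storage_gb * RATES["block_gb_month"]
        + million_ios * RATES["million_io"]
        + intra_region_gb * RATES["intra_region_gb"]
        + internet_out_gb * RATES["internet_out_gb"]
        + RATES["elb_hour"] * HOURS_PER_MONTH
        + elb_gb * RATES["elb_gb"]
    )

# Two guesses for the same deployment, differing only in the
# hard-to-predict inputs: I/O operations and traffic volumes.
low = monthly_estimate(2, 200, 50, 100, 300, 300)
high = monthly_estimate(2, 200, 500, 1000, 1500, 1500)
print(f"low guess:  ${low:,.2f}/month")
print(f"high guess: ${high:,.2f}/month")
```

The point of the sketch: the fixed items (instances, storage) are easy, but the usage-driven items (I/O, data transfer, ELB traffic) can swing the total by hundreds of dollars between two equally plausible guesses.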

2. Focus on depth, not breadth. While Amazon takes pride in adding new features and services at a rapid pace, most AWS customers don’t go beyond EC2, EBS, S3, ELB and RDS. Many existing AWS customers do not like the artificial walls between its regions and cloud services. As Amazon keeps investing in new data centers to expand its global footprint, the disparity between regions keeps widening. Customers notice that not every service is available in every AWS region. Migrating workloads and data sets from one AWS region to another is not straightforward, which makes customers wonder if they are still dealing with one cloud service provider. For customers with existing deployments in classic EC2, it’s extremely complex to migrate to VPC. Though EC2 and RDS both use S3 to store backups and snapshots, those snapshots cannot be shared across the two services. Migrating a database running in EC2 to RDS is no different from migrating from any other external environment. As a cloud service provider, if you get your compute, storage, networking and database offerings right, you stand a good chance of attracting customers. Delivering a reliable and integrated stack will increase the chances of customer adoption.

3. Performance, performance and performance. Make performance the key differentiating factor of your cloud offering. Deliver more cores per server, better throughput from block storage and lower network latency. Build your stack on SSDs, InfiniBand networks and high-performance CPUs. Match the performance of provisioned IOPS without the complex guesswork. Serious customers will pay for performance as long as you deliver more bang for the buck.

4. Support scale-up and shared storage. AWS’s philosophy of throwing more VMs at an application is not ideal in many scenarios. It might work wonders for marketing websites and gaming applications but not for enterprise workloads. Not every customer use case is designed to run on a fleet of servers in a scale-out mode. Provide a mechanism to add CPU cores, RAM and storage to a VM with minimal downtime. The other feature that’s been on the wish list of AWS customers for a long time is shared storage. It’s painful to set up a DB cluster with automatic failover without shared storage. Even basic workloads like a CMS demand shared storage across VMs. This is a problem that many customers are waiting to have solved. Figure out a way to support shared block storage with concurrent read/write, and you have an instant winner.

5. Make it easy for managed service providers. Managed Service Providers (MSPs) play a key role in the cloud ecosystem as they manage complex customer deployments round the clock. Winning their confidence is important to positively influence customer decisions. MSPs handling customer workloads on AWS take a spaghetti approach to managing the cloud, interweaving custom scripts, CloudWatch alerts and third-party tools to monitor even basic metrics. They are forced to invest in expensive platforms just to meet basic SLAs. Integrating a comprehensive monitoring environment that is extensible and customizable will add a lot of value for MSPs. Offering a better dashboard and integrated metrics covering common workloads like web servers, app servers and databases will appeal to both customers and MSPs.
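The “spaghetti” in question is typically something like the toy script below: a hand-rolled threshold checker an MSP ends up writing because the provider ships no integrated monitoring. The metric names and thresholds are hypothetical, chosen only to illustrate the pattern:

```python
# Toy example of the kind of ad-hoc monitoring script MSPs end up
# maintaining per customer. Metric names and limits are hypothetical.

THRESHOLDS = {
    "cpu_percent": 85.0,          # alert above 85% CPU utilization
    "disk_used_percent": 90.0,    # alert above 90% disk usage
    "db_replication_lag_s": 30.0, # alert above 30s of replication lag
}

def check_host(metrics):
    """Return the list of threshold breaches for one host's samples."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds {limit}")
    return alerts

# One polling cycle over a sample of collected metrics:
sample = {"cpu_percent": 92.5, "disk_used_percent": 40.0}
for alert in check_host(sample):
    print("ALERT:", alert)
```

A provider that exposed this kind of threshold-and-alert logic as a built-in, extensible service would let MSPs retire dozens of such scripts.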

Cloud computing hasn’t entered prime time yet, and for many customers it’s still day one. Whether it’s deep-pocketed companies like Microsoft and Google or smaller, nimble players like ProfitBricks and Digital Ocean, there is a clear opportunity to take the public cloud to the next level. By seizing these opportunities, one or more of those players can leap ahead in the competition for mindshare.

Janakiram MSV is the Principal Analyst at Janakiram & Associates. He can be reached at [email protected] or followed on Twitter: @janakiramm



I believe a consumer should have all these features without being tied to a single provider. Relying on a cloud provider to make things simpler may not make it easier for consumers to move to a different provider quickly. Consumers should have the flexibility to switch to the provider that best addresses their needs at any time. There is no reason to be stuck with a provider forever when someone comes along with a better offering. Since there is no open standard for implementing these features, consumers should look at platforms like cloudmunch, which make it possible to connect to any provider and expose its various features without consumers having to learn the APIs or intricacies of the implementations. Coupled with flexible and simple orchestration and workflow layers, having any of these features (probably except the costing simplification point :) ) on any cloud or on premise appears to be possible.

William Toll

Hi David, Jim, Daniel, Anthony and Scott. I am employed by ProfitBricks – mentioned in this article. I think Janakiram hit on many of the points that our founders focused on when they architected and developed our IaaS Cloud. Without question our selection of InfiniBand over Ethernet as our networking interconnect has enabled us to innovate and deliver a cloud that performs better at a better price. Second generation technologies always disrupt the first generation and this is a dynamic industry. We’re still in the beginning of the Cloud transformation – it’s going to be interesting to see how the technology evolves here at ProfitBricks and beyond.

scott herson

+ #6 – Hybrid Compatibility. Applications need to be portable between on-prem private clouds in colos/data centers and off-prem public clouds. The management platform should be easy and consistent. Networking should be secure and robust.

Daniel Marzini

I think that #1 will be filled shortly by PaaS players and Simple Cloud Players (aka Digital Ocean).

Regarding #4, shared storage for VMs in the cloud will be one of the next big things for the cloud. IMHO a Dropbox for servers (easy on, easy go) would be a huge improvement.

Jim Haughwout

I would add consistency. Get rid of the “noisy neighbor” problem that plagues consistency of performance.

David Mytton

I think #1 and #3 are where we’ll likely see the most potential for engineering innovation.

For #3 we’ve been able to take advantage of Softlayer’s free networking both within each data center and across their worldwide network – private networking transfer is completely free and it’s only when you start looking at alternative providers who charge that you realise this can be a significant cost!

For #3, as an example, Google Compute Engine’s persistent storage makes scaling incredibly easy because the number of IOPS you can get increases linearly with volume size, and it’s a consistent price that doesn’t charge for I/O – it’s all a single rate. All you need to do is figure out the number of IOPS you need, and the storage even lets you burst if necessary.

It’s going to be difficult to compete with Amazon on these kinds of engineering innovations, but it is possible, and I’m looking forward to seeing what kind of things Google announce on 25th March. For me, they’re the most likely to show their engineering talent.
