
The Cloud Performance Dashboard: A Quick Market Overview

Updated: As enterprise cloud adoption becomes more commonplace, the focus is shifting from justifying the cloud to identifying best practices and the highest-performing cloud providers. One area with significant activity is performance monitoring, as Geva Perry points out on his blog. A report published at the end of last year by IDC found that respondents ranked performance behind only security and availability among the biggest challenges and issues for cloud adoption.

To give buyers visibility into the relative performance of different cloud providers, a number of groups have developed tools to measure and compare performance under different scenarios. While individual vendors have begun to provide their own monitoring dashboards (Salesforce (s crm) has its Trust Dashboard and Amazon (s amzn) its CloudWatch dashboard, for example), buyers increasingly look for third-party tools that offer a neutral view of vendor performance.

Some interesting players in the space include:


CloudKick

CloudKick enables users to monitor and manage cloud infrastructure from multiple vendors through a single dashboard. It supports Rackspace (s RAX), Amazon EC2, Linode, GoGrid, Slicehost, RimuHosting, and VPS.NET.

Built on the open-source Libcloud API, which CloudKick helped develop under the guidance of the Apache Software Foundation Incubator, CloudKick’s commercial services range from $99 to $599 per month depending on the number of servers. CloudKick also offers customized packages for customers with larger or more specific needs.


CloudSleuth

Still in beta, CloudSleuth is a cloud performance visualization tool that runs a benchmark Java e-commerce application to measure response time across various cloud providers. It deploys an identical application to each provider, then measures performance from locations across the U.S. and internationally, returning metrics on both response time and availability.
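The basic approach — deploy the same application everywhere and time requests against it — can be sketched in a few lines of Python. This is a hypothetical illustration, not CloudSleuth’s actual code; the provider names and endpoint URLs are placeholders.

```python
# Hypothetical sketch of multi-provider response-time measurement.
# Not CloudSleuth's implementation; endpoints below are placeholders.
import time
import urllib.request


def measure_response_time(url, timeout=10):
    """Return (elapsed_seconds, ok) for a single GET request to url."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()  # time the full response body, not just headers
            ok = resp.status == 200
    except OSError:  # URLError (DNS failure, timeout, refused) is an OSError
        ok = False
    return time.monotonic() - start, ok


# Placeholder endpoints standing in for the same benchmark app
# deployed to different cloud providers.
endpoints = {
    "provider-a": "http://example.com/",
    "provider-b": "http://example.org/",
}

for name, url in endpoints.items():
    elapsed, ok = measure_response_time(url)
    print(f"{name}: {elapsed:.3f}s available={ok}")
```

In practice a service like this would run the probe from many geographic vantage points and aggregate the results, since response time measured from a single location says little about a provider’s global performance.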


CloudHarmony

CloudHarmony is another beta product, whose Cloud Speedtest produces benchmark performance figures for various cloud providers.


Cloudstone

Cloudstone spun out of a research project at UC Berkeley. Rather than a benchmarking service itself, it is an open-source framework for testing cloud performance: the research team built a selection of tools for generating various load levels and measuring performance under those loads across various cloud providers.


ServerDensity

ServerDensity is a monitoring product rather than a benchmarking tool; it measures CPU load, memory, processes, disk usage, and network traffic. Using a generic application or workload bundle, however, one could use ServerDensity’s reporting engine to run comparative tests across multiple cloud providers.
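For a rough idea of the kind of host metrics involved, here is a hypothetical standard-library Python sketch (not ServerDensity’s implementation). Note that `os.getloadavg` is Unix-only.

```python
# Hypothetical sketch of basic host-metric collection, in the spirit of
# the CPU/disk monitoring described above. Not ServerDensity's code.
import os
import shutil


def host_metrics(path="/"):
    """Return a dict of simple host metrics: load averages and disk usage."""
    load1, load5, load15 = os.getloadavg()  # Unix-only system load averages
    disk = shutil.disk_usage(path)         # total/used/free bytes for path
    return {
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
        "disk_total_bytes": disk.total,
        "disk_used_bytes": disk.used,
        "disk_free_bytes": disk.free,
    }


print(host_metrics())
```

A real monitoring agent would sample metrics like these on an interval and ship them to a central reporting engine, which is where cross-provider comparisons become possible.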

CloudCMP

CloudCMP was developed jointly by Duke University and Microsoft (s msft) Research. The project defined a number of measurement areas and assessed different cloud vendors against those tests, producing both empirical bottom-line performance results and an interesting cost/performance measure for different applications deployed across different providers. So far, the research team has produced a single report covering several cloud providers.


CloudStatus

CloudStatus is a VMware (s vmw) company that aims to “provide an independent view of the health and performance of the most popular cloud services on the web.” Currently in beta, CloudStatus so far measures only Amazon Web Services and Google App Engine (s goog), for both availability and performance.


BitCurrent

BitCurrent performed a benchmarking study in late June. The study measured Amazon, Rackspace and Terremark’s (s tmrk) IaaS offerings, and Google App Engine at the platform level. As a one-time study, it is primarily of academic interest: cloud performance changes constantly, and near-real-time status is the only practical way to monitor it on an ongoing basis.

Doubt remains as to how cloud performance offerings will meet the dual needs of remaining independent enough to ensure neutrality while still building a viable, profitable business. The number of university-created tools on this list suggests that most independent performance metrics may continue to come from the research community.

Ben Kepes is an independent consultant and contributing writer for GigaOM. Please see his disclosure statement in his bio.

Related GigaOM Pro content (sub req’d): Infrastructure Overview, Q2 2010

4 Responses to “The Cloud Performance Dashboard: A Quick Market Overview”

  1. jeffreyabbott

    I also commented on Geva Perry’s blog about this. I’m glad that IDC acknowledged that performance is only one of many possible metrics. Their survey indicated that performance ranks behind security and availability. This is helpful for the average person, if there is an average person, but as we know, each person’s needs will vary, and people will naturally place different weights on different categories. The Service Measurement Index (SMI), which is hosted on, provides compiled user-submitted ratings of cloud services across six key metrics: quality, agility, risk, cost, capability, and security. Users can rate, review, and compare cloud services of the same type. Some people will place more, or less, importance on security than on other factors, so SMI lets users personalize the weighting of each of the six metrics, and the cloud service analyses are then presented in accordance with the user’s requirements.

    Carnegie Mellon University (CMU) and CA Technologies teamed up to start the initiative, and CMU is leading the charge going forward with a growing consortium of members, including commercial and governmental end users, technology vendors, and educational institutions. I have a blog about it here:

  2. I assisted with the BitCurrent study, and one thing that was clear was the difficulty of objectively comparing IaaS with PaaS, or even one PaaS with another. Measurements and benchmarks are useful, but you have to include a number of caveats and assumptions when comparing different clouds. This isn’t a bad thing, since different clouds are good at different things, but an apples-to-apples comparison is nearly impossible. For example, limits the number of operations you can run per day (limiting complex benchmarking), while EC2 has numerous configurations/tweaks you can make to optimize certain types of performance.

    Also, the health dashboards mentioned at the start of the post report only on the basic status of the service and provide little insight into real-time performance. Google App Engine is one of the only services that publishes real-time performance data about itself:

    A few weeks ago I put some thought into why measurement and transparency are so important to us as human beings: Beyond the simple fact that we need to make business decisions, a lot of this comes down to basic human needs.

  3. Although we are a layer up, Makara offers granular performance monitoring for applications running in the cloud. Oftentimes there is very little about the underlying IaaS that you can “tune” or “optimize,” so an option (besides switching IaaS vendors) is to identify tuning opportunities among the software components and application code running on the cloud. This is where Makara can help.