[Chart: S3 objects stored, by quarter, through Q2 2011]

Amazon Web Services announced on Tuesday afternoon that its Simple Storage Service (S3) now houses more than 449 billion objects. The rapid pace of S3's growth is a microcosm of both AWS' overall business and cloud computing in general.

At Structure 2011 last month, Amazon CTO Werner Vogels told the crowd that S3 was storing 339 billion objects. At this same time last year, the service was storing only 262 billion objects. One might also draw parallels to the ever-growing cloud revenues at Rackspace, the incredible amount of computing capacity AWS adds every day, or the mass proliferation of new Software-as-a-Service offerings.

Long story short: cloud-computing usage is growing overall — at about 100 percent a year in the case of S3 — and AWS looks to be steering the ship for the time being.
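As a rough sanity check (a sketch, not from the article itself), the two data points above can be plugged into a one-line year-over-year calculation; the "about 100 percent a year" figure presumably averages over S3's longer-run doubling trend rather than this single pair:

```python
def yoy_growth_pct(previous, current):
    """Percentage growth from one period's count to the next."""
    return (current - previous) / previous * 100.0

# S3 object counts from the article, in billions (last year vs. now)
print(round(yoy_growth_pct(262, 449), 1))  # → 71.4
```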

  1. Agreed that this is phenomenal, and is a good indicator of the approaching “tipping point” for cloud-based services. What I find intriguing is the question of how you would manage 449B objects… that has to be the mother of all database challenges.

    Dan

  2. AffirmedSystems Wednesday, July 20, 2011

We saw the tipping point a few months ago with a three-day outage… Many customers won't wait until they “see” the next one – they will already have left the service.

I laugh pretty hard at those who complain about Amazon's outage. Corporate IT has outages all the time, including planned downtime. You just don't see press coverage about it. Knowing what I do about the competency and technology involved, one would seem much better served by Amazon's cloud than by even some of the very best corporate IT departments and data centers.

Get all the information before making up your mind. Reddit and others were hit hard by that outage. I met with them and others at Interop and they were not happy. Sure, an internal IT outage won't get any news coverage, as it is your own business. But when you pay for a service, it is another ball game. Do you know that there is not one cloud provider that meets its SLA? The best is Amazon with 93.2%, which is 24 days of outage per year. I don't think that is very good for an external service….

      2. Hi Mark,

Get all the information before making up your mind about the outage. Do you know that there is not one cloud provider that meets its SLA? The best one is Amazon with 93.2%, which is 24 days of downtime per year. I don't think that is very good for that kind of service. I spoke with some of their customers at Interop this year and they weren't too happy about the outage. Worse than that is the security of your data at these providers. Do you know that many of them use the same key to encrypt all the disks of all the servers? So with your key you can decrypt your neighbours' data. And by the way, yes, you can see your neighbours, because many attacks have been carried out through ring 0 of the server's virtualization layer and are well documented on the web….

There is no question that AWS's numbers are a great indicator both of the cloud's ability to handle vast quantities of dev/test work and of the adoption of cloud computing as a whole.

    Now enterprise is taking notice: can the cloud be used for production apps and services, as well as lab/dev? Is a true business-grade cloud ready with the reliability, performance, and security needed to run mission-critical applications?

We’ve put together two checklists you may find useful: one for ensuring your cloud provider is enterprise-ready, and one for migrating your apps and data once you’ve found the right partner.

    Check them out here: http://bit.ly/CloudChecklists

    Haidn Foster, Tier 3

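As a quick check on the downtime arithmetic cited in the thread above (a minimal sketch: the 93.2% availability figure is the commenter's claim, and a 365-day year is assumed):

```python
def downtime_days_per_year(availability_pct, days_in_year=365):
    """Convert an availability percentage into days of downtime per year."""
    return (1.0 - availability_pct / 100.0) * days_in_year

print(round(downtime_days_per_year(93.2), 1))   # → 24.8, close to the "24 days" cited above
print(round(downtime_days_per_year(99.95), 2))  # → 0.18, a common enterprise SLA target, for comparison
```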
