
In an ongoing effort to improve its suite of web services, Amazon said today that it’s adding persistent storage features to its EC2 compute service. Why is this important?

As the AWS blog explains, until now you could attach 160 GB to 1.7 TB of storage to an EC2 “instance.” (An “instance” is essentially a server.) As long as the server was running, the storage remained available; once you shut it down, the storage disappeared. “Applications with a need for persistent storage could store data in Amazon S3 or in Amazon SimpleDB, but they couldn’t readily access either one as if it was an actual file system,” the blog says.

Amazon CTO Werner Vogels, a keynote speaker at our Structure 08 conference, describes persistent storage this way on his blog: “It basically looks like an unformatted hard disk. Once you have the volume mounted for the first time you can format it with any file system you want or if you have advanced applications such as high-end database engines, you could use it directly.”
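To make that concrete, here is a minimal sketch of the lifecycle Vogels describes: create a volume, attach it to a running instance, then format and mount it from inside that instance. It uses today’s boto3 Python SDK purely for illustration (it is not the tooling Amazon shipped at launch), and the instance ID, availability zone, size and device names are hypothetical placeholders.

```python
# Sketch only: create a persistent volume, attach it, then format and mount it.
# boto3 is used for illustration; IDs, zone, size and device names are made up.
import subprocess

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical running instance
AZ = "us-east-1a"                    # the volume must live in the instance's zone

# 1. Create the raw volume -- "it basically looks like an unformatted hard disk."
volume = ec2.create_volume(AvailabilityZone=AZ, Size=100)  # 100 GB
volume_id = volume["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# 2. Attach it to the instance as a block device.
ec2.attach_volume(VolumeId=volume_id, InstanceId=INSTANCE_ID, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[volume_id])

# 3. On the instance itself (as root), put any file system on it and mount it.
#    The kernel may expose the device under a different name, e.g. /dev/xvdf.
subprocess.run(["mkfs.ext4", "/dev/xvdf"], check=True)
subprocess.run(["mkdir", "-p", "/data"], check=True)
subprocess.run(["mount", "/dev/xvdf", "/data"], check=True)
```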

In other words, this new persistent storage essentially acts like an external hard drive “attached” to your “instance.” It can also be detached and plugged into a different “instance,” making it a portable drive, though it can only be attached to one instance at a time. (I originally misreported this as a shared drive; the error is regretted.) We are a little intrigued by how Amazon is making this happen. Some experts believe it might be done using iSCSI, but persistent iSCSI at such a large scale is expensive. (If anyone has a better explanation, please let me know.)
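A sketch of that “portable drive” behavior, under the same assumptions as above (boto3 for illustration, hypothetical IDs): the volume is unmounted and detached from one instance, then reattached to another in the same availability zone, with its data intact.

```python
# Sketch only: move a volume from one instance to another.
# A volume can be attached to only one instance at any given time.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0abc123def4567890"   # hypothetical volume
OLD_INSTANCE = "i-0aaaaaaaaaaaaaaaa"  # hypothetical source instance
NEW_INSTANCE = "i-0bbbbbbbbbbbbbbbb"  # hypothetical target, same availability zone

# Detach from the first instance (after unmounting it inside that instance).
ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=OLD_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Reattach to the second instance; whatever was written earlier is still there.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=NEW_INSTANCE, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])
```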

What it all means is that AWS/EC2 has gone up a few notches in terms of reliability. That reliability will go a long way toward the company offering service-level agreements to customers, especially large enterprises that want to use Amazon’s on-demand infrastructure. Earlier this month, Alistair Croll wrote a post arguing that Amazon was going after larger corporations, and today’s announcement bolsters his theory.

10 Comments

  1. I believe that the volumes cannot be shared concurrently. While they can be disconnected and then reconnected to a separate instance, they cannot be connected to two instances at the same time. Therefore it’s not a “shared drive” but a “portable drive.”

  2. While cloud storage options such as this can only try to ‘simulate’ true storage, they can never achieve the real-storage feel that you get with your own dedicated cloud storage. The limitation discussed in this post is just one of the limitations of S3 and EC2. The author needs to discuss this with real users of S3 and EC2 to get a real feel.

    For real storage and backup options, try IBackup.

  3. The storage volumes can only be attached to a single instance at a time for now. However, using snapshots, one can replicate the data extremely rapidly and mount the copies on many, many instances. No, that’s not read-write sharing across many servers, but doing that is tricky even with all the hardware in the world…

    MichaelMD: we’ve actually been using EC2 and S3 in production for a year and a half now, and there is no way I’d go back to a traditional colo or hosting service. Having tested the new storage volumes, I have to say that they’ve implemented the important features of SANs. Not all of them, but the ones that really make a difference and allow us and our customers to build more reliable and more scalable deployments in AWS. If you have your own SAN you might have more features, but you won’t have anywhere close to the flexibility of what Amazon offers, and you’d be out a lot more change!

    Thorsten – CTO RightScale – http://blog.rightscale.com

  4. @Thorsten and others,

    Sorry for the multiple-instance error. I might have let my imagination get the better of me. Fixing the problem right now.

  5. [...] providers.  Or in Valleyspeak: “this is a validation of the Web OS.”  Amazon.com is right there too, as may be others, but Amazon.com also has a much more diversified revenue base as of [...]

  6. [...] Elastic Compute Cloud and Simple Queue Services products. Amazon, with its launch last week of persistent storage, was clearly wooing enterprise users, and the offer to provide support signals a formal [...]

  7. [...] GigaOm notes that Amazon has increased its web services and storage features, which could positively impact its reliability and decrease downtime. [...]

  8. [...] on Amazon’s cloud offerings, but CEO Michael Crandell expects the recent additions of persistent storage and some basic support will only help RightScale’s business, as it makes it more [...]

  9. [...] that can be used in tandem with applications using the Amazon Elastic Compute Cloud (Amazon EC2). Amazon first started talking about this back in April, sharing some details about the service. (More about their road [...]

  10. [...] “Virtualization and the amount of data has been the biggest thing in the last 18-24 months for storage, but the next thing is a move toward cloud storage,” says Gretsch. “Cloud computing will require cloud storage, and [Amazon's] S3 has some limits to it.” [...]

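For readers curious about the snapshot trick Thorsten mentions in comment 3, here is a rough sketch under the same assumptions as the earlier examples (boto3 for illustration, hypothetical IDs and zone): snapshot a source volume, create one new volume per instance from that snapshot, and attach each copy to its own instance. Each instance gets an independent writable copy of the data, not shared read-write access.

```python
# Sketch only: replicate a volume's data to many instances via a snapshot.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SOURCE_VOLUME = "vol-0abc123def4567890"                            # hypothetical
TARGET_INSTANCES = ["i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb"]  # hypothetical
AZ = "us-east-1a"

# 1. Snapshot the source volume.
snapshot = ec2.create_snapshot(VolumeId=SOURCE_VOLUME, Description="replication point")
snapshot_id = snapshot["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# 2. Stamp out one volume per instance from the snapshot and attach it.
for instance_id in TARGET_INSTANCES:
    copy = ec2.create_volume(SnapshotId=snapshot_id, AvailabilityZone=AZ)
    copy_id = copy["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[copy_id])
    ec2.attach_volume(VolumeId=copy_id, InstanceId=instance_id, Device="/dev/sdf")
```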

Comments have been disabled for this post