6 Comments

Summary:

Amazon Web Services has introduced its latest instance: a behemoth with 88 EC2 Compute Units of capacity, 240 GB of SSD storage, 244 GB of RAM and 10 GbE networking, designed for real-time analytics with software like SAP HANA, as well as demanding scientific workloads.

Latching onto the trend toward in-memory storage for real-time computing, Amazon Web Services has added a new type of virtual server. The new option — the 10th instance type available on EC2 — is called the High-Memory Cluster Instance and includes 88 EC2 Compute Units of compute capacity (running on two Intel Xeon E5-2670 processors), two 120 GB solid-state drives of instance storage and 244 GB of RAM.

It’s designed with speed in mind for uses such as in-memory analytics (including on SAP’s popular HANA platform) and certain scientific workloads that require data delivery to keep up with processing speed. The faster applications can read and write data — and doing so from an in-memory cache or solid-state drives is much faster than doing so from hard drives — the sooner that processors can compute it.

And because the new instance is part of AWS’s Cluster Compute family, multiple instances are connected via a 10 GbE network for speedy server-to-server data transfer. In benchmark tests from a site called CloudHarmony in 2010, Cluster Compute instances far outperformed anything else on the market (GigaOM Pro subscription req’d) at the time. They’ve also been used to spin up clusters that can compete with traditional supercomputers in terms of sheer performance — reaching No. 102 on the latest Top500 list with a peak speed of 354.1 teraflops.
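For readers who want to kick the tires, Cluster Compute instances get their 10 GbE server-to-server connectivity by being launched into a cluster placement group. A rough sketch with the AWS command-line tools might look like the following; the instance-type API name cr1.8xlarge is our assumption for the High-Memory Cluster Instance, and the AMI ID and key-pair name are placeholders, not values from the article.

```shell
# Create a cluster placement group so the instances land together
# on the full-bisection 10 GbE network.
aws ec2 create-placement-group \
    --group-name analytics-cluster \
    --strategy cluster

# Launch two High-Memory Cluster instances into that group.
# ami-xxxxxxxx and my-keypair are placeholders; substitute your own.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type cr1.8xlarge \
    --count 2 \
    --key-name my-keypair \
    --placement GroupName=analytics-cluster
```

Instances outside the placement group can still reach these servers, but only instances inside it get the low-latency, non-blocking interconnect the Cluster Compute family advertises.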

It should be noted, though, that AWS isn’t the only game in town for users who want this kind of beefy instance to handle their real-time data processing needs. Liquid Web’s Storm cloud service, for example, offers some high-memory, SSD-powered servers of its own at nearly $1.50 per hour less than what AWS charges (albeit with fewer cores, and without the 10 GbE backbone and the list of features that comes along with the AWS platform).

Whatever the cloud, though, ever-higher-performing instances mean new classes of workloads and more business for cloud providers that offer them. Especially as big data and analytics applications pick up steam and move from batch to real-time, clouds that can handle demanding users are in a good position.

Feature image courtesy of Shutterstock user ssguy.

  1. Jeff Schneider Tuesday, January 22, 2013

“Amazon Web Services has introduced its latest instance — an 88-core”; not to nit-pick but it’s not 88 cores. If I remember correctly, it’s 8 cores per CPU; with 2 CPUs and hyperthreading you can call it 16 cores or 32 virtual.

  3. The speed at which AWS can deploy a new instance for their users has to be admired. If operations efficiency can be defined in four areas (Quality, Cost, Delivery and Flexibility), cloud competitors of AWS can only hope to compete against Amazon on Cost and Flexibility.

    @theoh
    http://providingcloudyservice.com/

  4. The cost of these instances is significant, so I imagine the use cases are for really transient workloads. You can legitimately run smaller infrastructures on AWS, ignoring the benefits of the cloud (i.e. flexibility) vs. buying your own hardware, but once you hit these prices the only reason to use AWS is for the cloud-specific features; otherwise you’d be much better off buying your own kit.

  5. Interesting article – I personally love this cloud technology. Gartner Research predicts that the total outlay for cloud computing services could nearly double, to $207 billion, by 2016. Install-and-upgrade software and infrastructure is becoming less popular by the day, it appears.

    Here’s another interesting article that looks at what were the key developments and challenges for enterprise cloud computing in 2012

    http://www.dincloud.com/blog/cloud-computing-industry-perspectives-2012

  6. Seeing a division of labor forming in the Cloud is excellent proof that the concept is maturing and segmenting to solve more than just elastic storage problems.

    The more conversations I have with customers, on airplanes and in online forums, the clearer it becomes that the cloud is taking hold as the platform going forward.

    I wrote about it here: http://wp.me/p1pL4e-37o

    I’ll also be hosting the Big Data track at Cloud Connect in Santa Clara from April 2nd through 5th and maybe I’ll see you there?


Comments have been disabled for this post