Updated. Amazon Web Services (s amzn) today expanded its high-performance computing portfolio with instances that include graphics processors. The move comes on the heels of AWS releasing its Cluster Compute Instances in July, and validates the idea that specialized hardware may be better suited for certain types of computing in the cloud.
Sharing the same low-latency 10 GbE infrastructure as the original Cluster Compute Instances, GPU Instances include two Nvidia (s nvda) Tesla M2050 graphics processing units apiece to give users an ideal platform for graphically intensive or massively parallel workloads. AWS isn’t the first cloud provider to incorporate GPUs, but it’s certainly the most important one to do so. Furthermore, the advent of GPU Instances is one more sign that HPC need not be solely the domain of expensive on-premises clusters.
As I pointed out back in May, even Argonne National Laboratory’s Ian Foster, widely credited as a father of grid computing, has noted the relatively comparable performance between Amazon EC2 resources and supercomputers for certain workloads. Indeed, even with supercomputing resources having been available “on demand” for quite a while, industries like pharmaceuticals – and even space exploration – have latched onto Amazon EC2 for access to cheap resources at scale, largely because of its truly on-demand nature. Cluster Compute Instances no doubt made AWS an even more appealing option thanks to their high-throughput, low-latency network, and GPU Instances could be the icing on the cake. Update: AWS’s Cluster Compute Instances rank No. 231 in the latest Top500 supercomputer list.
GPUs are great for tasks like video rendering and image modeling, as well as for churning through calculations in certain financial simulations. Skilled programmers might even write applications that offload only certain application tasks to GPU Instances, while standard Cluster Compute Instances handle the brunt of the work. This is an increasingly common practice in heterogeneous HPC systems, especially with specialty processors like IBM’s (s ibm) Cell Broadband Engine Architecture.
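The hybrid pattern described above can be sketched in a few lines: route the massively parallel, data-heavy tasks to a GPU pool and leave serial or coordination-heavy work on standard instances. This is an illustrative toy, not an AWS API; the pool names, threshold, and dispatch rule are all assumptions for the sake of the example.

```python
# Sketch of heterogeneous dispatch: send highly parallel tasks to GPU
# instances, everything else to standard Cluster Compute instances.
# Pool names and the parallelism threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    parallelism: int  # rough count of independent work items

# Assumption: below this degree of parallelism, a GPU isn't worth the
# transfer and kernel-launch overhead.
GPU_THRESHOLD = 10_000

def route(task: Task) -> str:
    """Pick an instance pool for a task based on its parallelism."""
    return "gpu-pool" if task.parallelism >= GPU_THRESHOLD else "cpu-pool"

jobs = [
    Task("parse-input", 1),
    Task("monte-carlo-pricing", 1_000_000),
    Task("render-frames", 50_000),
    Task("write-report", 1),
]

assignments = {t.name: route(t) for t in jobs}
```

In a real system the routing decision is usually made per kernel rather than per job, but the shape of the logic is the same: measure (or estimate) how parallel a piece of work is, then place it on the hardware that suits it.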
AWS will be competing with other cloud providers for HPC business, though. In July, Peer1 Hosting rolled out its own Nvidia-powered cloud that aims to help developers add 3-D capabilities to web applications (although, as Om noted then, movie studios might comprise a big, if not primary, user base for such an offering). Also in July, New Mexico-based cloud provider Cerelink announced that it won a five-year deal with DreamWorks Animation to perform rendering on Cerelink’s Xeon-based HPC cluster. IBM (s ibm) has since ceased production of the aforementioned Cell processor, but its HPC prowess and new vertical-focused cloud strategy could make a CPU-GPU cloud for the oil and gas industry, for example, a realistic offering.
Still, it’s tough to see any individual provider stealing too much HPC business from AWS. By-the-hour pricing and the relative ease of programming to EC2 have had advanced users drooling for years, wishing only for it to provide the necessary performance. By and large, the combination of Cluster Compute and GPU Instances solves that problem, especially for ad hoc jobs or those that don’t require sharing data among institutions.
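The economics behind that appeal are easy to sketch. The figures below are illustrative assumptions, not quoted AWS prices: a back-of-the-envelope comparison of renting an ad hoc cluster by the hour versus buying equivalent hardware outright.

```python
# Back-of-the-envelope economics of by-the-hour HPC. All numbers are
# illustrative assumptions, not actual AWS pricing or real hardware costs.

HOURLY_RATE = 2.10        # assumed $/hour for one GPU instance
CLUSTER_SIZE = 32         # instances rented for the ad hoc cluster
JOB_HOURS = 8             # length of one simulation or rendering run

# Cost of one on-demand run: pay only while the job executes.
on_demand_cost = HOURLY_RATE * CLUSTER_SIZE * JOB_HOURS

# Assumed purchase-plus-operations cost of comparable owned gear.
OWNED_CLUSTER_COST = 250_000.0

# How many runs before owning beats renting.
runs_to_break_even = OWNED_CLUSTER_COST / on_demand_cost
```

For occasional jobs, the rented cluster wins by a wide margin; only a shop running such workloads continuously approaches the break-even point, which is exactly why ad hoc HPC users have gravitated to EC2.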
Amazon’s GPU Instances come as the annual Supercomputing conference kicks off in New Orleans, an event at which cloud computing has taken on a greater presence over the past few years. Already today, Platform Computing announced a collection of capabilities across its HPC product line to let customers manage and burst to cloud-based resources. Platform got into the cloud business last year with an internal-cloud management platform targeting all types of workloads.
Related content from GigaOM Pro (sub req’d):
- Cloud Computing Reaches the Final Frontier
- Supercomputers and the Search for the Exascale Grail
- We Can Call It a Cloud, But It’s Still Hardware