Amazon Web Services (s amzn) has made running high-performance computing workloads in the cloud even less expensive by pairing Spot Instances with Cluster Compute Instances. Spot Instances are spare Amazon EC2 capacity sold at fluctuating prices below the on-demand rate, depending on how much idle capacity AWS has on hand, and Cluster Compute Instances pair two quad-core Intel (s intl) Xeon processors per instance with a 10 Gigabit Ethernet network to ensure low-latency transfers between nodes. This is a really cool development because Spot Instances have always been ideal for ad hoc batch-processing jobs, the likes of which often run atop on-premises grids or clusters.
Amazon’s Jeff Barr explains as much in his blog post on the new pairing, citing the example of Scribd, which used thousands of Spot Instances to build a grid for a single job. Not only did Scribd avoid the capital expense of buying all that gear, but it also saved 63 percent, or $10,500, off the on-demand price of those EC2 instances. Running the same jobs on Cluster Compute Instances would save time because of the high-end processors and low-latency network, and would also let AWS users run larger, more latency-sensitive jobs in the same amount of time while still reaping the rewards of lower Spot Instance rates. AWS sweetened the Cluster Compute product even further in November with the option of GPU Instances running Nvidia’s (s nvda) Tesla M2050 graphics processing units. It’s quite literally like having a supercomputer available on demand. Using intelligent scheduling and workload management products, AWS customers could automate the process of running jobs on Spot Instances whenever the price aligns with the priority of a given workload.
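To make that last idea concrete, here is a minimal sketch of the kind of decision logic a scheduler might use: each workload priority maps to a bid ceiling expressed as a fraction of the on-demand price, and a job is launched only when the current spot price falls below that ceiling. The prices, priority tiers, and thresholds are illustrative assumptions, not AWS APIs or published rates.

```python
# Hypothetical priority-based spot-bidding sketch. ON_DEMAND_PRICE and the
# priority-to-ceiling mapping are invented for illustration.

ON_DEMAND_PRICE = 1.60  # assumed hourly on-demand rate for one instance

# Fraction of the on-demand price we're willing to pay, per priority tier.
BID_CEILING_FRACTION = {
    "low": 0.35,     # batch jobs that can wait for deep discounts
    "medium": 0.60,  # routine jobs worth running at a moderate discount
    "high": 0.95,    # time-sensitive jobs, run at nearly any discount
}

def should_launch(current_spot_price: float, priority: str) -> bool:
    """Launch only when the spot price is at or below this workload's ceiling."""
    ceiling = BID_CEILING_FRACTION[priority] * ON_DEMAND_PRICE
    return current_spot_price <= ceiling

# A scheduler loop could poll the spot market and gate submission like so:
if should_launch(0.50, "low"):
    print("launching low-priority batch job at the current spot price")
```

In practice a workload manager would poll the spot price feed (or, today, call an EC2 API such as the spot price history endpoint) rather than take the price as an argument, but the gating logic is the same.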
With the exception of Enomaly’s SpotCloud service, AWS has the spot-pricing-for-cloud-resources game to itself, and it’s the only mainstream cloud provider selling anything like Cluster Compute Instances, so pairing the two is just icing on the cake. But if you’re running CPU- and network-intensive jobs, it’s damn tasty icing.