Summary:

The cloud gives organizations that need HPC access to resources they previously couldn't have without buying their own clusters. GPUs are everywhere and proving adept at boosting performance. It seems likely that future HPC architectures will be a lot more virtual and a lot less CPU-centric.

[Image: Lawrence Livermore National Laboratory's GPU-powered Edge cluster.]

After last week’s cloud- and GPU-heavy Supercomputing conference, it’s fair to ask whether high-performance computing will ever look the same. The cloud provides on-demand (and sometimes free) resources that organizations requiring HPC have never had access to before without buying their own clusters. GPUs are everywhere and proving adept at seriously boosting performance for certain types of code. As I discuss in my Weekly Update at GigaOM Pro, it seems likely that HPC architectures of the future might be a lot more virtual and a lot less CPU-centric.

Think about it: Even before AWS launched Cluster Compute and Cluster GPU Instances, some scientists were starting to use the cloud fairly heavily for certain tasks. Cluster Compute Instances upped the ante by introducing Intel Nehalem processors and a high-throughput, low-latency 10 GbE network. In the latest Top500 supercomputer list, AWS’s Cluster Compute infrastructure ranked No. 231. Cluster GPU Instances use the same infrastructure, but add two Nvidia Tesla M2050 GPUs per instance.
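For a sense of what getting onto that infrastructure looks like in practice, here is a minimal sketch using the boto library. The AMI ID and placement-group name are placeholders, and the instance types shown (cc1.4xlarge for Cluster Compute, cg1.4xlarge for Cluster GPU) are simply the EC2 names for the offerings described above; treat this as an illustration under those assumptions, not a turnkey recipe.

```python
# Minimal sketch: launch a pair of Cluster GPU instances with boto 2.x.
# Assumes AWS credentials are configured in the environment; the AMI ID
# and placement-group name below are placeholders, not real values.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Cluster instances must share a placement group to get the full 10 GbE fabric.
conn.create_placement_group("hpc-demo", strategy="cluster")

reservation = conn.run_instances(
    image_id="ami-xxxxxxxx",        # placeholder HVM AMI
    instance_type="cg1.4xlarge",    # Cluster GPU; use "cc1.4xlarge" for CPU-only
    placement_group="hpc-demo",
    min_count=2,
    max_count=2,
)
print([instance.id for instance in reservation.instances])
```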

Aside from AWS, some HPC users might look to other cloud providers. Microsoft is now letting scientists use Windows Azure for free to run genomic queries with the National Center for Biotechnology Information (NCBI) BLAST tool, and OpSource just made available its eight-core, 64GB instances.
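To make the BLAST example concrete, the Azure service essentially runs queries of the kind sketched below at cloud scale. This sketch just shows what a single query looks like with the standard NCBI BLAST+ command-line tools wrapped in Python; it assumes a local blastn install and a copy of the nucleotide database, and the file names are placeholders.

```python
# Rough sketch of a single BLAST query using the NCBI BLAST+ tools.
# Assumes blastn is on the PATH and the "nt" database is available locally.
import subprocess

subprocess.run(
    [
        "blastn",
        "-query", "reads.fasta",   # placeholder input sequences
        "-db", "nt",               # nucleotide database
        "-outfmt", "6",            # tabular output
        "-out", "hits.tsv",        # placeholder output file
    ],
    check=True,
)
```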

Even if HPC users don’t embrace cloud computing as heavily as I suspect they will, there can be little doubt they’ll embrace new GPU-powered architectures. In a field obsessed with speed, GPUs can deliver serious speedups for massively parallel, multi-threaded workloads, as evidenced by their increasing prominence in leading supercomputers and mainstream servers.
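As a toy illustration of the kind of data-parallel work GPUs handle well (one lightweight thread per array element), here is a hedged sketch of a SAXPY kernel written with Numba's CUDA support. It assumes an Nvidia GPU and the Numba library, and it isn't tied to any specific code or system mentioned above.

```python
# Illustrative data-parallel kernel: out = a*x + y, one GPU thread per element.
# Assumes Numba with CUDA support and an Nvidia GPU.
import numpy as np
from numba import cuda


@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # global thread index
    if i < x.shape[0]:            # guard against threads past the array end
        out[i] = a * x[i] + y[i]


n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block

# Numba copies the NumPy arrays to and from the device automatically.
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)
```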

I’m not an HPC analyst, but I have some idea how that industry functions. Performance sells, even if it comes at a price. In part, this is because high performance hasn’t previously been available without fairly significant cost. Cloud computing and GPUs change this. Now, HPC users can fully embrace the cloud value proposition of spending a lot less on IT and a lot more on doing business. I think the economics will be too good to pass up.

Read the full post here.
