High-performance computing

Why Amazon thinks big data was made for the cloud

According to Amazon Web Services Chief Data Scientist Matt Wood, big data and cloud computing are nearly a match made in heaven. Limitless, on-demand and inexpensive resources open up new worlds of possibility, and a central platform makes it easy for communities to share huge datasets.

Cycle Computing spins up 50K core Amazon cluster

Working with Schrödinger, which specializes in computational drug design, Cycle Computing built a 50,000-core AWS cluster that screened 21 million compounds in less than three hours. The cluster let the company use a far more accurate screening process than it could have run on conventional infrastructure.

It’s time for startup founders to think bigger

It’s easier than ever to build a web or mobile app and call yourself a startup. But with new funding opportunities and technology tools, entrepreneurs can easily — and cheaply — use technology to solve larger problems, rather than create another lifestyle app.

Fighting cancer at 100 Gigabits per second

We often ascribe life-changing powers to high-speed Internet connections in our personal lives, but can they cure cancer? The Chan Soon-Shiong Institute for Advanced Health thinks so, and it’s investing hundreds of millions of dollars in a nationally distributed computing system to make it happen.

Meet the new breed of HPC vendor

The face of high-performance computing is changing. That means new technologies and new names, but also familiar names in new places. Anyone who doesn't have a cloud computing story to tell, and possibly a big data one too, might start looking really old really quickly.

Storm courts I/O lovers with 96GB, 32-core cloud server

There are big cloud server instances, and then there are big cloud server instances. Storm On Demand’s new 96GB, 32-core instance is of the latter variety. In fact, it’s the biggest you’re likely to find anywhere, and it’s designed with maximum I/O performance in mind.

LexisNexis open-sources its Hadoop killer

LexisNexis is releasing a set of open-source, data-processing tools it says outperforms Hadoop and even handles workloads that Hadoop presently cannot. There have been calls for a legitimate alternative to Hadoop, and this certainly looks like one.

Amazon Makes Cloud-Based Clusters Even Cheaper

Two of Amazon Web Services’ most distinctive features have finally crossed paths with news this morning that Spot Instances are now available for Cluster Compute Instances. Spot Instances have always been ideal for ad hoc batch-processing jobs, which often run atop on-premises grids or clusters.

With Dryad, Microsoft Is Trying to Democratize Big Data

Microsoft is developing a new big data tool called Dryad. Dryad and the associated programming model, DryadLINQ, simplify the process of running data-intensive applications across hundreds, or even thousands, of machines running Windows HPC Server. Dryad builds upon lessons learned from Hadoop, but differs in some significant ways.

Jan. 4: What We’re Reading About the Cloud

Among the most interesting cloud discussions around the web today were those about what we learned about cloud computing in 2010, how Net Neutrality will affect the delivery of cloud services and which cloud providers presently offer the most complete portfolios.

Clouds and GPUs: The Future of HPC

The cloud gives organizations that need HPC access to resources they never had before without buying their own clusters. GPUs are everywhere and are proving adept at boosting performance. It seems likely that future HPC architectures will be a lot more virtual and a lot less CPU-centric.

Amazon Gets Graphic With Cloud GPU Instances

Amazon Web Services upped its HPC portfolio by offering servers equipped with GPUs. The move comes on the heels of AWS releasing its Cluster Compute Instances, and validates the idea that specialized hardware may be better suited for certain types of computing in the cloud.