Intel on Monday announced a new brawny HPC processor and a family of network fabric components that incorporates silicon photonics technology. The new chip,…
Quantum computing alone may not be a panacea for every computing problem. However, when paired with high-performance computing, it may be able to find better answers to some of our most pressing issues.
Bina Technologies, a company aiming to make genome processing faster and cheaper, has raised $1.75 million in its Series B round, which…
A scale-out storage company called Scality has raised a $22M Series C round and claims big growth, but it’s competing in a very tight space.
According to Amazon Web Services Chief Data Scientist Matt Wood, big data and cloud computing are nearly a match made in heaven. Limitless, on-demand and inexpensive resources open up new worlds of possibility, and a central platform makes it easy for communities to share huge datasets.
Working with Schrödinger, which specializes in computational drug design, Cycle Computing built a 50,000-core AWS cluster that screened 21 million compounds in less than three hours. The cluster enabled the company to use a far more accurate screening process than would otherwise have been practical.
It’s easier than ever to build a web or mobile app and call yourself a startup. But with new funding opportunities and technology tools, entrepreneurs can easily — and cheaply — use technology to solve larger problems, rather than create another lifestyle app.
We often ascribe life-changing powers to high-speed Internet connections in our personal lives, but can they cure cancer? The Chan Soon-Shiong Institute for Advanced Health thinks so, and it’s investing hundreds of millions of dollars in a nationally distributed computing system to make it happen.
The face of high-performance computing is changing. That means new technologies and new names, but also familiar names in new places. Anyone who doesn’t have a cloud computing story to tell, and possibly a big data one too, might start looking really old really quickly.
There are big cloud server instances, and then there are big cloud server instances. Storm On Demand’s new 96GB, 32-core instance is of the latter variety. In fact, it’s the biggest you’re likely to find anywhere, and it’s designed with maximum I/O performance in mind.
LexisNexis is releasing a set of open-source data-processing tools that it says outperforms Hadoop and even handles workloads that Hadoop presently cannot. There have been calls for a legitimate alternative to Hadoop, and this certainly looks like one.
Two of Amazon Web Services’ most distinctive features have finally crossed paths, with news this morning that Spot Instances are now available for Cluster Compute Instances. Spot Instances have always been ideal for ad hoc batch-processing jobs, which often run atop on-premises grids or clusters.
Microsoft is developing a new big data tool called Dryad. Dryad and the associated programming model, DryadLINQ, simplify the process of running data-intensive applications across hundreds, or even thousands, of machines running Windows HPC Server. Dryad builds upon lessons learned from Hadoop, but differs in some significant ways.
Among the most interesting cloud discussions around the web today were those about what we learned about cloud computing in 2010, how Net Neutrality will affect the delivery of cloud services, and which cloud providers presently offer the most complete portfolios.
The cloud gives organizations that require HPC access to resources they previously could obtain only by buying their own clusters. GPUs are everywhere and proving adept at boosting performance. It seems likely that future HPC architectures will be a lot more virtual and a lot less CPU-centric.
Amazon Web Services upped its HPC portfolio by offering servers that run GPUs. The move comes on the heels of AWS releasing its Cluster Compute Instances, and validates the idea that specialized hardware may be better suited for certain types of computing in the cloud.