A decade ago, scientists would collect data over a period of years, upload it to a supercomputer, then wait for a scheduled window in which to run it. The process took months, sometimes years. Now, thanks to cheap processing power and the availability of compute clouds such as Amazon’s EC2 or Microsoft’s Azure, researchers can upload their data and start processing it immediately, as long as they can pay for the CPU time.
Even the government is using the cloud to process data. The National Oceanic and Atmospheric Administration is using Amazon Web Services for its Ocean Observatories Initiative, a study surveying ocean temperatures to detect and predict climate change. And as James Watters, senior manager of cloud solutions development at VMware, notes, the coming trend will be moving your data to the cloud rather than keeping it stored on hard drives or DVDs that are then uploaded to a supercomputer somewhere. That shift makes the cloud a necessary tool for supporting future economies built around information.
Analyzing huge data sets with access to seemingly unlimited compute power isn’t just a benefit for climate researchers or those seeking to decode the latest H1N1 strain. Analysts, app developers and average citizens alike can benefit from the huge amounts of digital information generated by financial monitoring companies, our interactions with people and products on the web, and government data, to name a few examples. The challenge is making that data intelligible and accessible.