Supercomputer experts, including the chief information officer of NASA’s Ames Research Center and a computer strategist for the U.S. Army’s research and development center, said at the GigaOM Network’s Structure conference today that scientists are still working toward an “exascale” computer, one that can do a million trillion calculations per second. The goal is to keep up with the flood of data the world produces every day, a flood that continues to grow at an exponential rate.

Chris Kemp, the chief information officer at Ames, said the space agency is also facing what he called “the exabyte problem”: storing and processing the massive amounts of data it produces every day. “From Mars, we have Google Earth-resolution images coming down, and there are telescopes that generate an exabyte of data a day,” he said. “It gets to the point where you are forced to throw away data because you literally can’t store it all.”
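
As a rough illustration of what “an exabyte of data a day” means as a sustained rate (this back-of-envelope sketch is not from the panel, and it assumes the decimal definition of an exabyte, 10^18 bytes):

```python
# Rough scale of "an exabyte of data a day" as a sustained data rate.
# Assumption: decimal units, 1 EB = 1e18 bytes (not a figure from the panel).
EXABYTE_BYTES = 1e18
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

rate_tb_per_second = EXABYTE_BYTES / SECONDS_PER_DAY / 1e12
print(f"~{rate_tb_per_second:.1f} TB/s sustained")           # ~11.6 TB/s
print(f"~{rate_tb_per_second * 8:.0f} Tbit/s of bandwidth")  # ~93 Tbit/s
```

That works out to writing roughly 11.6 terabytes to disk every second, around the clock, before anything is ever processed.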

But it’s not just NASA: Jason Hoffman, the chief technology officer of Joyent, noted that Apple recently announced it has sold 3 million iPads, “which means that they have basically shipped 100 petabytes of distributed storage in a matter of months.” When it comes to building computers that can process data at that scale, however, science is running up against energy and cost problems, said John West, special assistant for computation strategy with the U.S. Army Engineer Research and Development Center.
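
Hoffman’s round number is easy to sanity-check. As a sketch, assuming an average of roughly 32 GB of flash per device (2010-era iPads shipped with 16, 32 or 64 GB; the average is an assumption, not from the article):

```python
# Sanity check of Hoffman's ~100 PB figure.
# Assumption: ~32 GB of flash per iPad on average (not from the article).
IPADS_SOLD = 3_000_000
AVG_CAPACITY_GB = 32

total_gb = IPADS_SOLD * AVG_CAPACITY_GB
total_pb = total_gb / 1_000_000          # 1 PB = 1,000,000 GB (decimal units)
print(f"~{total_pb:.0f} PB of distributed storage shipped")  # ~96 PB
```

That lands at roughly 96 petabytes, consistent with the 100-petabyte figure Hoffman cited.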

The No. 1 supercomputing system today, he said, is the “Jaguar” system at the Department of Energy’s Oak Ridge National Laboratory, which uses it for climate modeling and other research. It can do 1.8 petaflops (1.8 quadrillion calculations per second), but it also sucks up about 7 megawatts of power to run, he said, and “these are $100 million to $500 million computers.” The Army scientist said that if you tried to get to exascale-level computing just by adding more processors, “you would be looking at 4 gigawatts of power and 125 million cores. We really can’t do this today.”
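
West’s figures follow from simple linear scaling of Jaguar’s published numbers. A sketch (Jaguar’s roughly 224,000-core count is taken from the public TOP500 listing, not from the panel itself):

```python
# Linear scaling from Jaguar to an exaflop machine, the back-of-envelope
# math behind West's figures. The ~224,000-core count comes from the
# public TOP500 listing for Jaguar, not from the panel.
JAGUAR_PETAFLOPS = 1.8
JAGUAR_MEGAWATTS = 7
JAGUAR_CORES = 224_000

scale = 1000 / JAGUAR_PETAFLOPS          # 1 exaflop = 1,000 petaflops -> ~556x
print(f"~{scale * JAGUAR_MEGAWATTS / 1000:.1f} GW of power")  # ~3.9 GW
print(f"~{scale * JAGUAR_CORES / 1e6:.0f} million cores")     # ~124 million
```

Both results land close to the 4 gigawatts and 125 million cores West cited, which is why simply adding more of today’s processors doesn’t get there.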

Chris Kemp also noted that this much computing power puts pressure on storage, memory and the other parts of the system as well. “When you’re talking about that number of processors writing to memory, or writing to cache or writing to disk, it’s like the difference between driving across the street to Starbucks vs. going to the moon or going to Pluto in terms of the time it takes.”
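
Kemp’s analogy maps reasonably well onto typical latency figures. As a rough sketch (the nanosecond and millisecond latencies below are commonly cited ballpark values, not numbers from the talk):

```python
# Rough scale behind Kemp's analogy: if a cache hit were a trip across
# the street (~100 m), how far away would the slower tiers be?
# Latency figures are commonly cited ballpark values, not from the talk.
LATENCY_NS = {
    "cache hit":   1,           # ~1 nanosecond
    "DRAM access": 100,         # ~100 nanoseconds
    "disk seek":   10_000_000,  # ~10 milliseconds
}
STREET_METERS = 100  # the trip to Starbucks across the street

for tier, ns in LATENCY_NS.items():
    km = ns / LATENCY_NS["cache hit"] * STREET_METERS / 1000
    print(f"{tier}: ~{km:,.1f} km equivalent")
# A disk seek works out to ~1,000,000 km, well past the Moon (~384,000 km away).
```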

By Mathew Ingram
