
Summary:

Google and the national laboratories want different things out of their infrastructure, although it looks like there's room for them to learn from each other.

The heavy hitters of the internet might feel pretty good about themselves for figuring out how to handle big data sets, but national laboratories have been managing exabyte-scale workloads for years.

It would have been wise for developers at webscale properties to check with government supercomputing experts. Before the Hadoop Distributed File System and the Google File System hit the scene, there were things like Lustre and the Parallel Virtual File System, said Gary Grider, high-performance computing division leader at Los Alamos National Laboratory, at GigaOM’s Structure conference in San Francisco on Thursday.

“Really, if you reduce the semantics, … they would do the same thing, roughly,” he said. “It’s fascinating how we don’t work together as much as we should. If we worked together we probably would be further down the road than we are.”

Sure, there are differences in goals and culture. Los Alamos and Lawrence Livermore national labs use their supercomputers to design and simulate the use of nuclear weapons, which entails “guns and guards and gates and classified computing,” Grider said. But all the same, opportunities exist to share ideas about energy efficiency, storage innovations and other aims.

Check out the rest of our Structure 2013 live coverage here, and a video embed of the session follows below:

