
Out of all the carbon-free power options, nuclear power faces some of the highest hurdles to commercial-scale deployment. The upfront costs for reactors are in the billions, the projects take years to site and build, and nuclear materials and designs have to undergo testing for decades to make sure they can be used in the field. That’s one reason why nuclear research costs a whole lot of money and the pace of innovation seems incredibly slow. But that’s also the reason why supercomputing has started to truly revolutionize the world of nuclear power innovation.

Supercomputing, or “extreme computing” as the Department of Energy described it during a workshop on computing and nuclear power last year, involves computers at the petaflop scale, and it will eventually reach the exaflop scale. A computer running at a petaflop can do 1 million billion (10^15) calculations per second, and an exaflop machine can deliver a billion billion (10^18) calculations per second (see Supercomputers and the Search for the Exascale Grail, subscription required).
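To put those numbers in perspective, here is a small, purely illustrative Python sketch; the 10^18-operation workload and the desktop baseline are assumptions for the example, not figures from the article:

```python
# Illustrative only: time to finish a fixed simulation workload at different speeds.
# The 1e18-operation workload and the desktop baseline are assumptions for this example.

WORKLOAD_OPS = 1e18  # hypothetical total floating-point operations for one simulation run

machines = {
    "desktop (~10 gigaflops)": 1e10,
    "petaflop supercomputer": 1e15,   # 1 million billion calculations per second
    "exaflop supercomputer": 1e18,    # 1 billion billion calculations per second
}

for name, flops in machines.items():
    seconds = WORKLOAD_OPS / flops
    print(f"{name}: {seconds:,.0f} seconds (~{seconds / 86400:,.2f} days)")
```

The exact figures do not matter; the point is the jump of several orders of magnitude between a desktop and a petascale or exascale machine.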

That massive amount of number crunching can help developers of nuclear power technology simulate next-generation reactor designs, show how advanced fuels in a reactor would be consumed over time, and model more efficient waste disposal and refueling. The point is to get through very complex, lengthy research and development processes much more quickly, and at far less cost, than with physical testing or less powerful computers.
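The real reactor codes are proprietary and enormous, but as a hedged illustration of the kind of question they answer, the toy Python sketch below steps a made-up fuel depletion model through time; the starting inventory and burn rate are invented for the example and have no physical significance:

```python
# Toy fuel-depletion sketch: not a real reactor physics model.
# It only illustrates "how fuel in a reactor could be consumed over time"
# using an assumed, constant fractional burn rate.

initial_fissile_kg = 1000.0   # assumed starting fissile inventory
burn_rate_per_year = 0.02     # assumed fraction of remaining fuel consumed each year

fuel = initial_fissile_kg
for year in range(0, 61, 10):
    print(f"year {year:2d}: {fuel:8.1f} kg of fissile material remaining")
    fuel *= (1 - burn_rate_per_year) ** 10  # deplete over the next decade
```

A production simulation would track dozens of isotopes, neutron flux and three-dimensional geometry, which is exactly why petascale hardware is needed.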

TerraPower, the nuclear startup backed by Microsoft Chairman Bill Gates that is working on a traveling wave reactor design, has leaned heavily on supercomputing to design and model its reactor and the lifecycle of its fuel. The TerraPower team says it is using “1,024 Xeon core processors assembled on 128 blade servers,” a cluster with “over 1,000 times the computational ability” of a desktop computer.
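Those figures work out to eight cores per blade, and a back-of-the-envelope check of the “over 1,000 times” claim might look like the sketch below, where the per-core throughput and the desktop baseline are assumptions chosen only for illustration:

```python
# Back-of-the-envelope check of the TerraPower cluster description.
# Per-core throughput and the desktop baseline are assumed values, not published specs.

total_cores = 1024
blade_servers = 128
print(f"cores per blade server: {total_cores // blade_servers}")   # 8

gflops_per_core = 10.0   # assumed sustained GFLOPS per Xeon core (circa 2010)
desktop_gflops = 10.0    # assumed single-core desktop baseline

cluster_gflops = total_cores * gflops_per_core
print(f"cluster estimate: {cluster_gflops:,.0f} GFLOPS, "
      f"roughly {cluster_gflops / desktop_gflops:,.0f}x the assumed desktop")
```

Under those assumptions the cluster comes out at roughly 1,000 times the desktop, consistent with the company’s description.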

Intellectual Ventures, the firm led by former Microsoft chief technology officer Nathan Myhrvold that spun out TerraPower, explains the importance of computer modeling for nuclear power on its website this way:

Extensive computer simulations and engineering studies produced new evidence that a wave of fission moving slowly through a fuel core could generate a billion watts of electricity continuously for well over 50 to 100 years without enrichment or reprocessing. The hi-fidelity results made possible by advanced computational abilities of modern supercomputer clusters are the driving force behind one of the most active nuclear reactor design teams in the country.

Supercomputing can also help extend the lives of more traditional nuclear reactors, make them more efficient and safer, and, as the costs come down, put these tools in the hands of everyday scientists. The Department of Energy has been developing the Nuclear Energy Modeling and Simulation Hub, which is intended to help nuclear engineers use computing to predict how nuclear reactors can be extended and upgraded. (The deadline to apply for the DOE hub project is coming up this month.)

This type of computing for nuclear power has mostly been used by computing specialists in the past. But in the future, through programs like the Nuclear Energy Modeling and Simulation Hub, more scientists will be able to model virtual reactors running under different scenarios and safety conditions.

One ironic twist to this whole equation: petascale and exascale computing requires a lot of power to run. As David Turek, IBM’s VP of Deep Computing, put it to us for our article Supercomputers and the Search for the Exascale Grail (subscription required):

“Today the energy required for memory is still measured in kilowatts: but exascale memory takes 80 megawatts and then you add 60 or 80 megawatts for interconnect and the whole energy profile would be measured in gigawatts. You’re going to need a nuclear power plant to run one of these.”
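To see why the components Turek names add up so quickly, here is a rough tally in Python; the memory and interconnect numbers come from his quote, while the figure for the processors and cooling is an assumption added only to show how the total can approach a gigawatt:

```python
# Rough exascale power tally. Memory and interconnect figures come from Turek's quote;
# the processor-and-cooling figure is an assumption for illustration, not an IBM number.

memory_mw = 80                # "exascale memory takes 80 megawatts"
interconnect_mw = 70          # midpoint of the "60 or 80 megawatts" for interconnect
compute_and_cooling_mw = 850  # assumed draw for processors, storage and cooling

total_mw = memory_mw + interconnect_mw + compute_and_cooling_mw
print(f"estimated total draw: {total_mw} MW ({total_mw / 1000:.2f} GW)")
# ~1 GW is on the order of a single large reactor's electrical output
```

That is the sense in which, only half-jokingly, you would need a nuclear power plant to run one of these machines.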

  1. @Katie — If you haven’t already done so, please update your cost comparisons of various energy production methods at http://www.bravenewclimate.com — on the left hand side is the TCASE series. Very well done by the crew there.

    Also, look at http://www.energyfromthorium.com as another example of technology that we developed but allowed to stand aside for various political reasons. Kirk Sorensen would provide an excellent interview.

    Thank you for an otherwise informative article.

  2. @docforesight – thanks! I’ll look into these.

  3. [...] How Supercomputing is Revolutionizing Nuclear Power – Earth2Tech [...]

  4. An exascale supercomputer using CUDA processing cards (Nvidia Tesla) instead of CPUs would not require gigawatts.

    120 W per GPU with 1 GB of memory = 1 teraflop
    × 1,000 = 120 kW = 1 petaflop
    × 1,000 = 120 MW = 1 exaflop

    Obviously the infrastructure would not be the same as the standard three-card-plus-CPU desktop supercomputer Nvidia markets; they have server rack designs as well.

    No way is it going to push the power consumption into gigawatts. That level of usage only makes sense if you use traditional CPUs for floating-point calculations (something they’re not all that good at) instead of GPUs.

  5. [...] put a software guy on an energy problem, it becomes a software problem, says Myhrvold, and TerraPower relies heavily on supercomputing. The problem with today’s nuclear plants is that they were designed using computers with the same [...]


Comments have been disabled for this post