High performance computing has entered the Exascale Age. Capable of performing over 10¹⁸ floating-point operations per second, exascale computers, such as El Capitan, the National Nuclear Security Administration's first, have the potential to revolutionize the detailed, in-depth study of highly complex science and engineering systems. However, in addition to these kinds of whole-machine “hero” simulations, exascale systems could also enable new paradigms in digital design by making petascale hero runs routine. Currently untenable problems in complex system design, optimization, model exploration, and scientific discovery could all become possible. Motivated by the challenge of uncovering the next generation of robust high-yield inertial confinement fusion (ICF) designs, project ICECap (Inertial Confinement on El Capitan) attempts to integrate multiple advances in machine learning (ML), scientific workflows, high performance computing, GPU acceleration, and numerical optimization to prototype such a future. Built on a general framework, ICECap is exploring how these technologies could broadly accelerate scientific discovery on El Capitan. In addition to our requirements, system-level design, and challenges, we describe some of the key technologies in ICECap, including ML replacements for multiphysics packages, tools for human-machine teaming, and algorithms for multifidelity design optimization under uncertainty. As a test of our prototype pre-El Capitan system, we advance the state of the art for ICF hohlraum design by demonstrating the optimization of a 17-parameter National Ignition Facility experiment and show that our ML-assisted workflow makes design choices that are consistent with physics intuition, but in an automated, efficient, and mathematically rigorous fashion.
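To make the ML-assisted, surrogate-guided optimization described above concrete, the sketch below shows the general pattern only: an ML surrogate (here a scikit-learn Gaussian process) emulates an expensive multiphysics simulation over a 17-parameter design space, and an acquisition rule proposes the next design to evaluate. This is a minimal illustrative example, not ICECap's actual implementation; the names expensive_simulation, DIM, N_INIT, and N_ITER are hypothetical.

```python
# Illustrative surrogate-assisted design-optimization loop (not ICECap code):
# a Gaussian-process surrogate stands in for an expensive multiphysics
# simulation, and new designs are chosen by maximizing an upper-confidence-
# bound acquisition over random candidate designs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
DIM = 17      # mirrors the 17-parameter design space mentioned in the abstract
N_INIT = 32   # initial space-filling samples
N_ITER = 48   # surrogate-guided iterations


def expensive_simulation(x):
    """Toy stand-in for a multiphysics code: a smooth multimodal objective."""
    return float(np.sum(np.sin(3.0 * x) * x) - 0.1 * np.sum(x**2))


# Initial designs: random samples in the unit hypercube.
X = rng.uniform(0.0, 1.0, size=(N_INIT, DIM))
y = np.array([expensive_simulation(x) for x in X])

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(DIM))
for _ in range(N_ITER):
    # Refit the surrogate to all evaluations collected so far.
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Upper-confidence-bound acquisition over random candidate designs.
    candidates = rng.uniform(0.0, 1.0, size=(2048, DIM))
    mean, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mean + 2.0 * std)]

    # Run the "expensive" simulation at the proposed design and augment the data.
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_simulation(x_next))

print("best objective found:", y.max())
```

In a production workflow of the kind the abstract describes, the toy objective would be replaced by calls to GPU-accelerated multiphysics simulations dispatched through a scientific-workflow system, and the single-fidelity surrogate by a multifidelity model that balances cheap and expensive evaluations.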