Abstract

The advent of high-performance computers with vastly increased capabilities compared to today's supercomputers opens up tremendous possibilities for the simulation of physical problems without significant modeling assumptions. This is the route along which most exascale approaches are envisioned today, so as to harness the capability that will be offered to computational scientists and engineers alike. But a whole different set of uses of exascale computing can be envisioned when the applications lie in design and MultiDisciplinary Analysis and Optimization (MDAO): the complexity does not always come from ever-increasing mesh sizes and higher-fidelity physics but, rather, from the fact that very large numbers of simulations may need to be carried out, whether across large-dimensional parameter spaces (design optimization being iterative in nature), for uncertainty quantification (where many samples may be required), or in combinations of the two such as design under uncertainty and reliability-based design. Whereas leveraging exascale computers for very large individual calculations centers on discovering parallel approaches that improve the efficiency, scalability, and turnaround time of the simulations, attempting MDAO at exascale makes discovering and exploiting parallelism significantly more difficult (with notable exceptions). This paper offers a personal view, based on years of experience pursuing large-scale multi-physics simulations and MDAO, of the most likely uses of exascale computing platforms in the engineering design discipline, as well as the bottlenecks that may be faced and that require research efforts today so that we are prepared to leverage future computers.
