Abstract

Numerical simulations and optimisation methods, such as mesh adaptation, rely on accurate and inexpensive error estimation. Adjoint-based error estimation is the most accurate method, and generally the most costly. A major contributor to this cost is the need to compute a higher-resolution adjoint solution. Here, it is proposed to use super-resolution neural networks to super-resolve a fine adjoint solution from a lower-cost coarse adjoint solution: a superAdjoint. The method is compared to reference error estimators on the unsteady Burgers’ equation using the method of manufactured solutions. Two forms of the superAdjoint were implemented: a twofold (2×) and a fourfold (4×) refining super-resolution neural network. These were used to demonstrate both the reduction in computational cost and the potential reduction of the storage footprint of the primal problem. The first, referred to as 2×CNN, was able to reconstruct the spatially enriched adjoint solution, thus providing a robust and inexpensive local output error estimate. The second, the 4×CNN, demonstrated the reconstruction ability of super-resolution neural networks at higher upscaling factors. This was leveraged to subsample the primal solution in space, substantially reducing the storage footprint of the discrete primal solution. Both superAdjoints achieved the desired level of accuracy when compared to the refined adjoint-based error estimate. Moreover, the superAdjoint was shown to generalise to a new quantity of interest (QoI) on which it was not trained. This gives great confidence in the use of super-resolution neural networks to reduce both the computational cost and the storage requirements of adjoint-based error estimation and goal-oriented mesh adaptation.
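As an illustration of the idea only (the abstract does not specify the network architecture, framework, or training setup used in the paper), the sketch below shows one possible form of a 2× super-resolving network for a one-dimensional adjoint field, written in PyTorch. The class name, layer sizes, kernel widths, and tensor shapes are all assumptions made for this example.

```python
# Minimal sketch, assuming a 1D adjoint field on a uniform grid and a
# PyTorch implementation; this is NOT the paper's architecture.
import torch
import torch.nn as nn


class SuperAdjoint2x(nn.Module):
    """Map a coarse adjoint field on N points to a fine field on 2N points."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Feature extraction on the coarse grid (length-preserving convolutions).
        self.features = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Learned 2x spatial upsampling via a transposed convolution,
        # followed by a final convolution back to a single-channel field.
        self.upsample = nn.ConvTranspose1d(channels, channels,
                                           kernel_size=4, stride=2, padding=1)
        self.head = nn.Conv1d(channels, 1, kernel_size=3, padding=1)

    def forward(self, coarse_adjoint: torch.Tensor) -> torch.Tensor:
        # coarse_adjoint: (batch, 1, N)  ->  returns (batch, 1, 2N)
        x = self.features(coarse_adjoint)
        x = torch.relu(self.upsample(x))
        return self.head(x)


# Illustrative usage: train against fine-grid adjoint snapshots, then apply
# to a cheap coarse adjoint solution at evaluation time.
model = SuperAdjoint2x()
coarse = torch.randn(8, 1, 64)   # e.g. a batch of coarse adjoints on 64 points
fine_pred = model(coarse)        # -> (8, 1, 128)
```

In a dual-weighted-residual setting, such a super-resolved adjoint would then weight the fine-space residual of the primal solution to produce the local output error estimate, in place of an adjoint solved directly on the enriched space.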
