Summary
Recent advances in the field of machine learning open a new era in high performance computing for challenging computational science and engineering applications. In this context, the use of advanced machine learning algorithms to develop accurate and cost-efficient surrogate models of complex physical processes has already attracted major attention from scientists. However, despite their powerful approximation capabilities, surrogate model predictions generally remain far from the 'exact' solution of the problem. To address this issue, the present work employs up-to-date machine learning tools to develop a new generation of iterative solvers for linear systems of equations, capable of solving large-scale parametrized problems very efficiently at any desired level of accuracy. The proposed approach consists of two steps. First, a reduced set of model evaluations is performed using a standard finite element methodology, and the corresponding solutions are used to establish an approximate mapping from the problem's parametric space to its solution space by means of deep feedforward neural networks combined with convolutional autoencoders. This mapping provides highly accurate initial predictions of the system's response to new query points at negligible computational cost. Subsequently, an iterative solver inspired by the Algebraic Multigrid method and combined with Proper Orthogonal Decomposition, termed POD-2G, is developed, which successively refines the initial predictions of the surrogate model towards the exact solution. The application of POD-2G as a standalone solver or as a preconditioner within preconditioned conjugate gradient methods is demonstrated on several numerical examples of large-scale systems, with the results indicating its strong superiority over conventional iterative solution schemes.
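To make the two-grid idea behind the described approach concrete, the following is a minimal sketch of what a POD-driven two-grid cycle could look like: a few damped-Jacobi smoothing sweeps are alternated with a coarse-grid correction in which the residual is projected onto a POD basis built from snapshot solutions, which acts as the prolongation operator. The function name, arguments, and smoothing choices below are illustrative assumptions and not taken from the paper itself.

```python
import numpy as np

def pod_two_grid_solve(A, b, x0, Phi, n_smooth=2, omega=2.0 / 3.0,
                       tol=1e-8, max_iter=200):
    """Illustrative POD-based two-grid iteration (hypothetical sketch).

    A    : (n, n) symmetric positive definite system matrix
    b    : (n,) right-hand side
    x0   : (n,) initial guess, e.g. a surrogate-model prediction
    Phi  : (n, m) POD basis with orthonormal columns, built from snapshot
           solutions; it plays the role of the prolongation operator.
    """
    D_inv = 1.0 / np.diag(A)        # diagonal inverse for damped-Jacobi smoothing
    A_c = Phi.T @ A @ Phi           # reduced ("coarse") operator, m x m
    x = x0.copy()
    b_norm = np.linalg.norm(b)

    for _ in range(max_iter):
        # Pre-smoothing: damp the high-frequency components of the error.
        for _ in range(n_smooth):
            x += omega * D_inv * (b - A @ x)

        # Coarse-grid correction: restrict the residual to the POD subspace,
        # solve the small reduced system, and prolongate the correction back.
        r = b - A @ x
        e_c = np.linalg.solve(A_c, Phi.T @ r)
        x += Phi @ e_c

        # Post-smoothing.
        for _ in range(n_smooth):
            x += omega * D_inv * (b - A @ x)

        if np.linalg.norm(b - A @ x) <= tol * b_norm:
            break
    return x
```

In the same spirit, such a cycle could also serve as a preconditioner for a conjugate gradient method by applying a single two-grid sweep to the residual at each CG iteration; the exact preconditioning strategy used in the paper may differ from this sketch.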