Multigrid methods have proven to be an invaluable tool for efficiently solving the large sparse linear systems arising from the discretization of partial differential equations (PDEs). Algebraic multigrid methods, and in particular adaptive algebraic multigrid approaches, have shown that multigrid efficiency can be achieved without resorting to properties of the underlying PDE. Yet the setup these methods require incurs a non-negligible overhead. Machine learning techniques, which streamline such processes by training statistical models on available data, have recently attracted attention in this context. Interpreting algebraically smooth error as an instance of a Gaussian process, we develop a new, data-driven approach to constructing adaptive algebraic multigrid methods. Based on Gaussian a priori distributions, kriging interpolation minimizes the mean squared error of the a posteriori distribution, given the data on the coarse grid. Going one step further, we exploit the quantification of uncertainty in the Gaussian process model to construct efficient variable splittings. Using a semivariogram fit of a suitable covariance model, we demonstrate that our approach yields efficient methods from a single algebraically smooth vector.
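To make the kriging idea concrete, the following is a minimal 1-D sketch of simple kriging: a Gaussian process is conditioned on "coarse grid" samples of a smooth error vector, and the resulting posterior mean interpolates the values at the remaining fine points while the posterior variance quantifies the uncertainty. The exponential covariance model, the grid sizes, and all variable names here are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

def cov(x, y, length=0.5):
    """Illustrative exponential covariance model k(x, y) = exp(-|x - y| / length)."""
    return np.exp(-np.abs(x[:, None] - y[None, :]) / length)

# Fine grid and a smooth stand-in for an algebraically smooth error vector
x_fine = np.linspace(0.0, 1.0, 33)
e_fine = np.sin(np.pi * x_fine)

# Coarse grid = every fourth fine point (a simple, assumed variable splitting)
coarse = np.arange(0, 33, 4)
x_c, e_c = x_fine[coarse], e_fine[coarse]

# Kriging prediction: the posterior mean K_fc K_cc^{-1} e_c minimizes the
# mean squared error given the coarse-grid data (small jitter for stability)
K_cc = cov(x_c, x_c) + 1e-10 * np.eye(len(x_c))
K_cf = cov(x_c, x_fine)
e_pred = K_cf.T @ np.linalg.solve(K_cc, e_c)

# Posterior variance: zero at coarse points, positive in between, so it can
# flag fine points that are poorly represented by the current coarse grid
K_ff = cov(x_fine, x_fine)
var = np.diag(K_ff - K_cf.T @ np.linalg.solve(K_cc, K_cf))

print("max interpolation error:", np.max(np.abs(e_pred - e_fine)))
print("max posterior variance: ", np.max(var))
```

In an adaptive AMG setting, the role of `e_fine` would be played by a smooth vector produced by the smoother, and the covariance model would come from a semivariogram fit rather than being fixed a priori.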