Abstract

This paper studies the optimality of the restriction and prolongation operators in the geometric multigrid method (GMG). GMG is used to solve discretized partial differential equations (PDEs), and its performance depends strongly on the restriction and prolongation operators. Many methods for constructing these operators have been proposed, but most come with only limited optimality proofs. To study their optimality, we introduce a stochastic convergence functional that estimates the spectral radius of the iteration matrix for given GMG parameters. We implement the GMG method in a modern machine learning framework that can automatically compute the gradients of the introduced convergence functional with respect to the restriction and prolongation operators. We can therefore minimize the proposed functional starting from some initial parameters and obtain better ones after a number of iterations of stochastic gradient descent. To illustrate the performance of the proposed approach, we carry out experiments on the discretized Poisson equation, the Helmholtz equation, and a singularly perturbed convection–diffusion equation, and demonstrate that the proposed approach yields operators that lead to faster convergence.
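The approach described in the abstract — a differentiable multigrid cycle whose prolongation operator is optimized by gradient descent on a stochastic estimate of the iteration matrix's contraction factor — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the 1D Poisson model problem, two-grid (rather than full multigrid) cycle, damped Jacobi smoother, Galerkin coarse operator, choice of JAX, learning rate, and batch size are all assumptions made here for concreteness.

```python
import jax
import jax.numpy as jnp

# Model problem (assumed for illustration): 1D Poisson, -u'' = f,
# discretized with the standard tridiagonal Laplacian on n interior points.
n = 15
h = 1.0 / (n + 1)
A = (jnp.diag(jnp.full(n, 2.0))
     - jnp.diag(jnp.ones(n - 1), 1)
     - jnp.diag(jnp.ones(n - 1), -1)) / h**2

# Prolongation P, initialized to standard linear interpolation; this is the
# parameter to be optimized. Restriction is taken as a scaled transpose.
nc = (n - 1) // 2
P = jnp.zeros((n, nc))
for j in range(nc):
    P = (P.at[2 * j, j].set(0.5)
          .at[2 * j + 1, j].set(1.0)
          .at[2 * j + 2, j].set(0.5))

omega = 2.0 / 3.0                 # damped-Jacobi smoothing parameter
dinv = 1.0 / jnp.diag(A)

def two_grid_step(P, e):
    """Apply one two-grid error-propagation step to error vectors e."""
    e = e - omega * dinv[:, None] * (A @ e)          # pre-smoothing
    R = 0.5 * P.T                                    # restriction
    Ac = R @ A @ P                                   # Galerkin coarse operator
    e = e - P @ jnp.linalg.solve(Ac, R @ (A @ e))    # coarse-grid correction
    return e - omega * dinv[:, None] * (A @ e)       # post-smoothing

def loss_fn(P, z):
    """Stochastic surrogate for the spectral radius of the iteration matrix:
    geometric-mean contraction of random probe vectors over k cycles."""
    k, e = 5, z
    for _ in range(k):
        e = two_grid_step(P, e)
    ratios = jnp.linalg.norm(e, axis=0) / jnp.linalg.norm(z, axis=0)
    return jnp.mean(ratios ** (1.0 / k))

# Plain SGD on the prolongation entries (learning rate is an assumption).
key, lr = jax.random.PRNGKey(0), 1e-4
for it in range(50):
    key, sub = jax.random.split(key)
    z = jax.random.normal(sub, (n, 8))               # random probe batch
    loss, g = jax.value_and_grad(loss_fn)(P, z)
    P = P - lr * g
```

In this sketch the restriction is tied to the prolongation as `R = 0.5 * P.T`; the paper treats both as free parameters, so one would instead pass two independent matrices to `loss_fn` and update each from its own gradient.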
