Abstract
Krylov subspace methods are commonly used iterative methods for solving large sparse linear systems; however, they suffer from communication bottlenecks on parallel computers. To address this, $s$-step methods have been developed in which the Krylov subspace is built block by block, so that $s$ matrix-vector products can be performed before the block is orthonormalized. Communication-Avoiding algorithms can then be used for both kernels. This paper introduces a new variant of $s$-step GMRES that reduces the number of iterations needed to ensure convergence, at the cost of a small overhead in the number of communications. Specifically, we develop an $s$-step GMRES algorithm in which the block size is variable and increases gradually. Our numerical experiments agree well with our analysis of the condition numbers and demonstrate the efficiency of our variable $s$-step approach.
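The following is a minimal structural sketch, not the authors' implementation, of the variable $s$-step idea described above: the Krylov basis is grown in blocks whose size increases gradually, each block is generated with $s$ matrix-vector products before a single block orthonormalization, and GMRES minimizes the residual over the current subspace. The block-size schedule, the monomial basis for each block, and the dense least-squares solve are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def variable_s_step_gmres(A, b, x0=None, s_schedule=(1, 2, 3, 4, 5), tol=1e-10):
    """Sketch of GMRES with a variable s-step (block) Krylov basis.

    Assumptions (not from the paper): monomial basis within each block,
    block classical Gram-Schmidt + QR for orthonormalization, and a dense
    least-squares solve in place of the usual Hessenberg recurrence.
    """
    n = b.shape[0]
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = (r0 / beta).reshape(n, 1)           # orthonormal Krylov basis, grown block by block

    for s in s_schedule:                     # block size increases gradually
        # --- s matrix-vector products before any orthonormalization ---
        V = np.empty((n, s))
        v = Q[:, -1]
        for j in range(s):                   # monomial basis of the new block (an assumption)
            v = A @ v
            V[:, j] = v

        # --- one block orthonormalization: block CGS, then QR within the block ---
        V -= Q @ (Q.T @ V)
        V, _ = np.linalg.qr(V)
        Q = np.hstack([Q, V])

        # --- GMRES minimizes ||b - A(x0 + Q y)|| over the current subspace ---
        AQ = A @ Q
        y, *_ = np.linalg.lstsq(AQ, r0, rcond=None)
        res = np.linalg.norm(r0 - AQ @ y)
        if res <= tol * beta:
            break

    return x0 + Q @ y, res

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
    b = rng.standard_normal(50)
    x, res = variable_s_step_gmres(A, b)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

In a distributed-memory setting, the point of this structure is that the $s$ products in the inner loop can be carried out with a communication-avoiding matrix-powers kernel and the block orthonormalization with a single reduction (e.g., a tall-skinny QR), whereas standard GMRES communicates at every matrix-vector product and every orthogonalization step.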