Abstract

Several recent works have developed a new, probabilistic interpretation for numerical algorithms solving linear systems, in which the solution is inferred in a Bayesian framework, either directly or by inferring the unknown action of the matrix inverse. These approaches have typically focused on replicating the behaviour of the conjugate gradient method as a prototypical iterative method. In this work, surprisingly general conditions for equivalence of these disparate methods are presented. We also describe connections between probabilistic linear solvers and projection methods for linear systems, providing a probabilistic interpretation of a far more general class of iterative methods. In particular, this provides such an interpretation of the generalised minimum residual method. A probabilistic view of preconditioning is also introduced. These developments unify the literature on probabilistic linear solvers and provide foundational connections to the literature on iterative solvers for linear systems.

Highlights

  • Consider the linear system Ax∗ = b (1), where A ∈ Rd×d is an invertible matrix, b ∈ Rd is a given vector, and x∗ ∈ Rd is the unknown to be determined

  • Matrix-based and solution-based inference are shown to be equivalent in a particular regime, so results for solution-based inference (SBI) transfer to matrix-based inference (MBI) with left-multiplied observations

  • Since SBI is a special case of MBI, future research will establish what additional benefits the increased generality of MBI can provide
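The solution-based inference described in the highlights above can be made concrete with a small numerical sketch: place a Gaussian prior N(x0, Σ0) on the unknown solution x∗ and condition on noiseless projected observations SᵀAx∗ = Sᵀb via standard Gaussian conditioning. The helper name `sbi_posterior` and the specific prior and test system are our own illustrative choices, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# A small invertible (here SPD) test system Ax* = b, as in Eq. (1).
M0 = rng.standard_normal((d, d))
A = M0 @ M0.T + d * np.eye(d)
x_star = rng.standard_normal(d)
b = A @ x_star

# SBI: Gaussian prior N(x0, Sigma0) on the solution x*.
x0 = np.zeros(d)
Sigma0 = np.eye(d)

def sbi_posterior(S):
    """Gaussian posterior over x* after observing S^T A x* = S^T b."""
    Mobs = S.T @ A                            # observation operator
    G = Mobs @ Sigma0 @ Mobs.T                # Gram matrix of the observations
    K = Sigma0 @ Mobs.T @ np.linalg.inv(G)    # gain matrix
    mean = x0 + K @ (S.T @ b - Mobs @ x0)
    cov = Sigma0 - K @ Mobs @ Sigma0
    return mean, cov

# With d linearly independent search directions, the posterior
# contracts onto the true solution and the uncertainty vanishes.
S = rng.standard_normal((d, d))
mean, cov = sbi_posterior(S)
print(np.allclose(mean, x_star))
print(np.allclose(cov, np.zeros((d, d)), atol=1e-8))
```

Fewer than d directions leave residual posterior uncertainty in the unexplored subspace, which is what makes the Gaussian posterior a meaningful uncertainty quantification for a partially converged solver.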


Summary

Introduction

Consider the linear system Ax∗ = b (1), where A ∈ Rd×d is an invertible matrix, b ∈ Rd is a given vector, and x∗ ∈ Rd is the unknown to be determined. On the surface, the approaches in these two works appear different: in the matrix-based inference (MBI) approach of Hennig (2015), a posterior is constructed on the matrix A−1, while in the solution-based inference (SBI) method of Cockayne et al. (2018) a posterior is constructed on the solution vector x∗. These algorithms are instances of probabilistic numerical methods (PNM) in the sense of Hennig et al. (2015) and Cockayne et al. (2017). We provide a new, probabilistic interpretation of preconditioning as a form of prior information.
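One concrete instance of the connection to conjugate gradients mentioned in the abstract: for symmetric positive-definite A, taking the prior covariance Σ0 = A⁻¹ (so the posterior mean simplifies to S(SᵀAS)⁻¹Sᵀb) and observing along the CG search directions reproduces the CG iterates. The sketch below is our own numerical check under those assumptions, with a textbook CG implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
M0 = rng.standard_normal((d, d))
A = M0 @ M0.T + d * np.eye(d)        # symmetric positive definite
b = rng.standard_normal(d)

def cg(A, b, m):
    """Textbook conjugate gradients from x0 = 0; returns the iterates
    and the search direction used at each step."""
    x = np.zeros_like(b)
    r = b - A @ x
    s = r.copy()
    iterates, directions = [], []
    for _ in range(m):
        directions.append(s.copy())
        As = A @ s
        alpha = (r @ r) / (s @ As)
        x = x + alpha * s
        r_new = r - alpha * As
        beta = (r_new @ r_new) / (r @ r)
        s = r_new + beta * s
        r = r_new
        iterates.append(x.copy())
    return iterates, directions

m = 4
iterates, directions = cg(A, b, m)
S = np.column_stack(directions)

# SBI posterior mean under the prior x* ~ N(0, A^{-1}), conditioned on
# S^T A x* = S^T b; for symmetric A this reduces to S (S^T A S)^{-1} S^T b,
# which coincides with the m-th CG iterate.
x_sbi = S @ np.linalg.solve(S.T @ A @ S, S.T @ b)
print(np.allclose(x_sbi, iterates[-1]))
```

The A-conjugacy of the CG search directions makes SᵀAS (numerically) diagonal, which is why the projected system above decouples into the same per-direction step lengths that CG computes.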

Contribution
Notation
Probabilistic linear solvers
Background on Gaussian conditioning
Solution-based inference
Matrix-based inference
Equivalence of MBI and SBI
Remarks
Projection methods as inference
Background
Probabilistic perspectives
Preconditioning
Conjugate gradients
Left-multiplied view
Right-multiplied view
GMRES
GMRES computes the iterate that minimises the Euclidean norm of the residual, ‖b − Ax‖, over the Krylov subspace Km(A, b).
Arnoldi’s method
Simulation study
Discussion
The symmetric Kronecker product
Theorem 16
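As a companion to the GMRES and Arnoldi entries in the outline above, the following is a minimal, self-contained sketch (our own, not the paper's code) of how the GMRES iterate is computed: the Arnoldi process builds an orthonormal basis of the Krylov subspace, and the residual-minimising iterate is recovered from a small least-squares problem in that basis:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
A = rng.standard_normal((d, d)) + d * np.eye(d)   # well-conditioned, nonsymmetric
b = rng.standard_normal(d)

def gmres(A, b, m):
    """GMRES with zero initial guess. Arnoldi builds an orthonormal basis Q
    of K_m(A, b); the iterate minimises ||b - A x|| over that subspace."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k = m
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-10:                   # "happy breakdown": invariant subspace
            k = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    # Minimise || beta * e1 - H y || over y, then map back: x = Q y.
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y

x = gmres(A, b, d)   # after d steps the Krylov space is all of R^d
print(np.allclose(x, np.linalg.solve(A, b)))
```

Because the Arnoldi basis is orthonormal, the d-dimensional residual minimisation collapses to an (m+1)×m least-squares problem in the Hessenberg matrix H, which is what keeps each GMRES step cheap.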

