Abstract

We develop a novel, fundamental, and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update, and random fixed point. By varying its two parameters, a positive definite matrix (defining geometry) and a random matrix (sampled in an independent and identically distributed fashion in each iteration), we recover a comprehensive array of well-known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method, and random Gaussian pursuit. We naturally also obtain variants of all these methods that use blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, leading to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. Moreover, we give an exact formula for the evolution of the expected iterates, which yields lower bounds on the convergence rate.
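
As a rough illustration of the sketch-and-project viewpoint, below is a minimal Python sketch of one iteration, assuming the geometry matrix B is the identity and the random matrix S is a fresh Gaussian sketch of fixed width; the function name, the sketch size tau, and the use of a pseudoinverse for the small sketched system are our choices, not the paper's.

```python
import numpy as np

def sketch_and_project_step(A, b, x, tau, rng):
    """One sketch-and-project step with geometry matrix B = I:
    project the current iterate x onto the solution set of the
    sketched system S^T A x = S^T b, where S is an m x tau
    Gaussian sketch drawn afresh (i.i.d.) in each iteration."""
    m = A.shape[0]
    S = rng.standard_normal((m, tau))    # i.i.d. random sketch
    As = S.T @ A                         # tau x n sketched matrix
    rs = S.T @ (A @ x - b)               # sketched residual
    # Closest point to x in the sketched solution set:
    # x - As^T (As As^T)^+ rs; pinv guards against rank deficiency.
    return x - As.T @ (np.linalg.pinv(As @ As.T) @ rs)

# Toy usage on a consistent 100 x 20 system.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_star = rng.standard_normal(20)
b = A @ x_star
x = np.zeros(20)
for _ in range(500):
    x = sketch_and_project_step(A, b, x, tau=5, rng=rng)
print(np.linalg.norm(x - x_star))        # error should be near zero
```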

Highlights

  • The need to solve linear systems of equations is ubiquitous in essentially all quantitative areas of human endeavour, including industry and science

  • Linear systems are a central problem in numerical linear algebra, and play an important role in computer science, mathematical computing, optimization, signal processing, engineering, numerical analysis, computer vision, machine learning, and many other fields

  • Research into the Kaczmarz method was reignited in 2009 by Strohmer and Vershynin [38], who gave a brief and elegant proof that a randomized variant thereof enjoys exponential error decay. This triggered much research into developing and analyzing randomized linear solvers. It should be mentioned that the randomized Kaczmarz (RK) method arises as a special case of the stochastic gradient descent (SGD) method for convex optimization, which can be traced back to the seminal work of Robbins and Monro on stochastic approximation [31]
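
For concreteness, the following is a minimal Python sketch of the randomized Kaczmarz method with the Strohmer-Vershynin row sampling (row i drawn with probability proportional to ||a_i||^2); the function name is ours. In the general framework above, this corresponds to choosing S as a randomly sampled standard basis vector with B = I.

```python
import numpy as np

def randomized_kaczmarz(A, b, x0, iters, rng):
    """Randomized Kaczmarz: each iteration projects the iterate
    onto the hyperplane a_i^T x = b_i of one randomly chosen
    equation, with row i sampled with probability ~ ||a_i||^2."""
    row_norms = np.einsum('ij,ij->i', A, A)   # squared row norms
    probs = row_norms / row_norms.sum()
    x = x0.astype(float)
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        x -= ((A[i] @ x - b[i]) / row_norms[i]) * A[i]
    return x
```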


Summary

Introduction

The need to solve linear systems of equations is ubiquitous in essentially all quantitative areas of human endeavour, including industry and science. Motivated by the results of Strohmer and Vershynin [38], Leventhal and Lewis [19] used similar techniques to establish the first bounds for randomized coordinate descent methods for solving systems with positive definite matrices and systems arising from least squares problems. These bounds are similar to those for the RK method. In each iteration of our method, among all solutions of the sketched system we pick the one closest to the current iterate x_k.
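
To make the coordinate descent variant concrete, here is a minimal Python sketch for a system with a symmetric positive definite matrix, assuming the Leventhal-Lewis style sampling in which coordinate i is drawn with probability A_ii / trace(A); the function name is ours, and this is an illustrative sketch rather than the paper's exact algorithm.

```python
import numpy as np

def randomized_cd_spd(A, b, x0, iters, rng):
    """Randomized coordinate descent for A x = b with A symmetric
    positive definite: coordinate i is sampled with probability
    A_ii / trace(A), and x_i is updated so the i-th residual
    (A x - b)_i becomes zero after the step."""
    diag = np.diag(A).astype(float)
    probs = diag / diag.sum()
    x = x0.astype(float)
    for _ in range(iters):
        i = rng.choice(len(b), p=probs)
        x[i] -= (A[i] @ x - b[i]) / diag[i]
    return x
```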

Optimization Viewpoint
Algebraic Viewpoint
Analytic Viewpoint
Special Cases
Convergence
Methods
Conclusion