Abstract

For the large-scale linear discrete ill-posed problem min‖Ax − b‖ or Ax = b with b contaminated by Gaussian white noise, the most commonly used solvers are the Lanczos bidiagonalization based Krylov method LSQR and its mathematically equivalent CGLS, the conjugate gradient (CG) method implicitly applied to A^T Ax = A^T b. Two further choices are CGME, the CG method applied to min‖AA^T y − b‖ or AA^T y = b with x = A^T y, and LSMR, which is equivalent to the minimal residual (MINRES) method applied to A^T Ax = A^T b. These methods exhibit the typical semi-convergence feature, and the iteration number k plays the role of the regularization parameter. However, there has been no definitive answer to the long-standing fundamental question: can LSQR and CGLS find 2-norm filtering best possible regularized solutions? The same question applies to CGME and LSMR. At iteration k, LSQR, CGME and LSMR compute different iterates from the same k-dimensional Krylov subspace. A first and fundamental step towards answering the above question is to accurately estimate how well this k-dimensional Krylov subspace approximates the k-dimensional dominant right singular subspace of A. Assuming that the singular values of A are simple, we present a general sinΘ theorem for the 2-norm distances between these two subspaces and derive accurate estimates of them for severely, moderately and mildly ill-posed problems. We also establish some relationships between the smallest Ritz values and these distances. Numerical experiments confirm the sharpness of our results.
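The central quantity above is the 2-norm distance ‖sinΘ‖ between the k-dimensional Krylov subspace K_k(A^T A, A^T b) underlying LSQR/CGLS and the span of the first k right singular vectors of A. The sketch below is an illustrative construction, not the paper's experiments: the geometric singular-value decay ρ^j (a severely ill-posed model), the problem size, the true solution and the noise level are all assumptions. It generates the right Lanczos vectors by Golub–Kahan bidiagonalization with full reorthogonalization and measures ‖(I − Q_k Q_k^T)V_k‖₂, the sine of the largest principal angle between the two subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
rho = 0.25                               # assumed decay rate: sigma_j = rho**j
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random left singular vectors
V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random right singular vectors
sigma = rho ** np.arange(n)              # severely ill-posed: geometric decay
A = U @ np.diag(sigma) @ V.T

x_true = V @ np.ones(n)                  # assumed true solution
b_exact = A @ x_true
e = rng.standard_normal(n)
e *= 1e-3 * np.linalg.norm(b_exact) / np.linalg.norm(e)  # 0.1% Gaussian noise
b = b_exact + e

def gk_right_vectors(A, b, kmax):
    """Golub-Kahan bidiagonalization with full reorthogonalization.
    The right Lanczos vectors q_1..q_k span K_k(A^T A, A^T b)."""
    m, n = A.shape
    Us = np.zeros((m, kmax + 1))
    Qs = np.zeros((n, kmax))
    beta = np.linalg.norm(b)
    Us[:, 0] = b / beta
    for i in range(kmax):
        r = A.T @ Us[:, i]
        if i > 0:
            r -= beta * Qs[:, i - 1]
        r -= Qs[:, :i] @ (Qs[:, :i].T @ r)           # reorthogonalize q's
        alpha = np.linalg.norm(r)
        Qs[:, i] = r / alpha
        p = A @ Qs[:, i] - alpha * Us[:, i]
        p -= Us[:, :i + 1] @ (Us[:, :i + 1].T @ p)   # reorthogonalize u's
        beta = np.linalg.norm(p)
        Us[:, i + 1] = p / beta
    return Qs

kmax = 4
Q = gk_right_vectors(A, b, kmax)
dists = []
for k in range(1, kmax + 1):
    Vk = V[:, :k]                        # dominant right singular subspace
    Qk = Q[:, :k]                        # k-dim Krylov subspace basis
    # ||sin Theta||_2 = largest singular value of (I - Qk Qk^T) Vk
    dists.append(np.linalg.norm(Vk - Qk @ (Qk.T @ Vk), 2))
    print(f"k = {k}:  ||sin Theta||_2 = {dists[-1]:.3e}")
```

For this strongly decaying spectrum the printed distances stay well below 1, illustrating the abstract's claim that for severely ill-posed problems the Krylov subspace captures the dominant right singular subspace accurately during the early iterations.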
