Abstract

We appreciate the comments made by Deal & Nolet (1996, hereafter referred to as DN) on Zhang & McMechan (1995, hereafter referred to as ZM; see also Berryman 1994). For large linear problems, the number of iterations needed to obtain an acceptable solution is smaller than the full SVD rank of the original matrix. Nonetheless, the extremal Ritz values are often good approximations to the corresponding singular values of the original matrix (Golub & Van Loan 1989, p. 479), as shown by the examples in both ZM and DN. The computed Ritz values are spaced across the range of the singular values of the original matrix with a distribution defined by the zeros of Jacobi polynomials (van der Sluis & van der Vorst 1990). To explore the solution subspace more fully, additional iterations are performed beyond the point at which an acceptable solution is obtained. After a sufficient number of iterations, the Ritz values converge to singular values, but with the consequence that the Lanczos and Ritz vectors are no longer orthogonal; some duplicate Ritz values and vectors are generated. The selective orthogonalization method of Parlett & Scott (1979) addresses this loss of orthogonality at the cost of increased algorithm complexity and reduced numerical efficiency. We choose instead to identify and eliminate the duplicate Ritz values and vectors (also called Ritz pairs) (Scales 1989; Parlett 1980, p. 272), and to use the remaining (nearly) orthogonal Ritz pairs to construct our LSQRA solution. It should be noted that it is not necessary to compute an LSQRA solution to obtain the resolution and covariance matrices; the Lanczos decomposition is sufficient.
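The relationship between Ritz values and singular values described above can be illustrated with a minimal sketch of Golub-Kahan (Lanczos) bidiagonalization, the decomposition underlying LSQR. This is not the authors' implementation: the function name, the random test matrix, and the choice of 25 iterations are all illustrative assumptions. The Ritz values are the singular values of the small lower-bidiagonal projection matrix built from the recurrence coefficients, and the extremal ones track the extremal singular values of the original matrix well before the iteration count approaches the matrix rank.

```python
import numpy as np

def lanczos_ritz_values(A, b, k):
    """Run k steps of Golub-Kahan (Lanczos) bidiagonalization of A,
    started from b, and return the Ritz values: the singular values of
    the (k+1) x k lower-bidiagonal projection B_k.
    (Illustrative sketch: no breakdown handling; assumes k is well
    below the rank of A, and does no reorthogonalization.)"""
    alphas = np.zeros(k)
    betas = np.zeros(k)
    u = b / np.linalg.norm(b)               # beta_1 u_1 = b
    w = A.T @ u
    alphas[0] = np.linalg.norm(w)           # alpha_1 v_1 = A^T u_1
    v = w / alphas[0]
    for i in range(k):
        w = A @ v - alphas[i] * u           # beta_{i+1} u_{i+1} = A v_i - alpha_i u_i
        betas[i] = np.linalg.norm(w)
        u = w / betas[i]
        if i < k - 1:
            w = A.T @ u - betas[i] * v      # alpha_{i+1} v_{i+1} = A^T u_{i+1} - beta_{i+1} v_i
            alphas[i + 1] = np.linalg.norm(w)
            v = w / alphas[i + 1]
    B = np.zeros((k + 1, k))                # lower-bidiagonal projection B_k
    B[np.arange(k), np.arange(k)] = alphas
    B[np.arange(1, k + 1), np.arange(k)] = betas
    return np.linalg.svd(B, compute_uv=False)   # sorted descending

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 80))              # hypothetical test system
ritz = lanczos_ritz_values(A, rng.standard_normal(200), 25)
s = np.linalg.svd(A, compute_uv=False)
# The largest Ritz value closely approximates the largest singular
# value of A, even though only 25 of 80 possible steps were taken.
print(ritz[0], s[0])
```

If the loop is instead run for many more steps without reorthogonalization, finite-precision arithmetic produces the duplicate Ritz pairs discussed in the text, which must be detected and eliminated before the remaining (nearly) orthogonal pairs are used.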
