Abstract
The two papers in this issue concern short recurrences for computing bases of Krylov subspaces, and numerical methods for computing the Laplace transform and its inverse.

Systems of linear equations $Ax=b$, where the matrix $A$ is large and sparse (i.e., only a few elements of $A$ are nonzero), are often solved by so-called Krylov subspace methods. Such methods restrict operations involving $A$ to matrix-vector products. In the simplest case the iterates $x^{(k)}$ are computed from vectors in the Krylov space $\mathcal{K}_k=\operatorname{span}\{b, Ab,\ldots, A^{k-1}b\}$. For instance, when $A$ is Hermitian positive-definite (or real symmetric positive-definite), the conjugate gradient method computes $x^{(k)}$ as a linear combination of $x^{(k-1)}$ and a “direction vector” $p_k$. The direction vectors form an $A$-orthogonal basis for the Krylov space $\mathcal{K}_k$; that is, $p_i^*Ap_j=0$ for $i\neq j$, where the superscript $*$ denotes the conjugate transpose. As a consequence, a direction vector $p_k$ can be computed from $Ap_{k-1}$, $p_{k-1}$, and $p_{k-2}$ alone. This is called a 3-term recurrence (the first sketch below makes it concrete in code). However, if $A$ is a general matrix, then it is well known that the direction vectors cannot be computed with 3-term recurrences, even if one relaxes the orthogonality to $B$-orthogonality, where $B$ is any Hermitian positive-definite matrix. The question is: if 3-term recurrences are not possible, then how short can the recurrences be? In their paper “On Optimal Short Recurrences for Generating Orthogonal Krylov Subspace Bases,” J. Liesen and Z. Strakoš derive necessary and sufficient conditions for a nonsingular matrix $A$ to admit $(s+2)$-term recurrences for $s\geq 1$. They also give a comprehensive overview of work on short recurrences for Krylov subspace methods. This is a clear and carefully written paper, and the authors go to great lengths to illuminate the subtle issues involved.

In the second paper, “The Bad Truth about Laplace's Transform,” Charles Epstein and John Schotland are concerned with the difficulties of inverting the Laplace transform. This may be necessary, for instance, when solving inverse scattering problems arising in optical tomography and image reconstruction. The Laplace transform of a real function $f$ is defined as the integral $(\mathcal{L}f)(x)=\int_0^{\infty}e^{-xy}f(y)\,dy$. Inverting $\mathcal{L}$ to recover $f$ is an ill-posed problem. Ill-posed problems are extremely hard to solve numerically because a solution may not exist, may not be unique, or may not depend continuously on the data (the second sketch below illustrates this sensitivity). The authors use harmonic analysis to derive fast algorithms for approximating the Laplace transform and its inverse (when the function is sampled at geometrically uniformly spaced points), and to derive regularized inverses.
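To make the short recurrence concrete, here is a minimal sketch of the conjugate gradient method in Python with NumPy, restricted to real symmetric positive-definite systems for simplicity; the function name and the random test problem are made up for the illustration, and this is a textbook formulation rather than anything specific to the papers above. In this formulation each new direction vector is built from the current residual and the previous direction only, which is equivalent to computing $p_k$ from $Ap_{k-1}$, $p_{k-1}$, and $p_{k-2}$.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=200):
    """Minimal conjugate gradient for a real symmetric positive-definite A.

    Each new direction vector p is built from the current residual and
    the previous direction only; equivalently, p_k can be obtained from
    A p_{k-1}, p_{k-1}, and p_{k-2}: the 3-term recurrence described above.
    """
    x = np.zeros_like(b)
    r = b.copy()                    # residual r_0 = b - A x_0 for x_0 = 0
    p = r.copy()                    # first direction vector
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along direction p
        x = x + alpha * p           # x^{(k)} from x^{(k-1)} and p_k
        r = r - alpha * Ap          # short recurrence for the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs_old      # enforces A-orthogonality to the old p
        p = r + beta * p            # short recurrence for the directions
        rs_old = rs_new
    return x

# Quick check on a random symmetric positive-definite system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)     # shift guarantees positive-definiteness
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))    # small, on the order of tol
```

The point of the sketch is storage: no matter how many iterations run, only the current residual and one previous direction are kept, which is exactly what makes short recurrences attractive for large sparse problems.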
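To illustrate the ill-posedness of Laplace inversion, the following sketch discretizes the transform with a trapezoidal rule on a truncated interval and examines the singular values of the resulting matrix. The grids, the truncation point, and the matrix size are arbitrary choices for the demonstration; this is emphatically not the fast algorithm of Epstein and Schotland, only a numerical hint at why naive inversion fails.

```python
import numpy as np

# Discretize (Lf)(x) = int_0^inf e^{-xy} f(y) dy with the trapezoidal
# rule on the truncated interval [0, 20]; all grid choices below are
# arbitrary and serve only to build a concrete matrix.
n = 60
y = np.linspace(0.0, 20.0, n)        # quadrature nodes for f(y)
x = np.linspace(0.05, 5.0, n)        # sample points of (Lf)(x)
w = np.full(n, y[1] - y[0])          # trapezoidal weights ...
w[[0, -1]] *= 0.5                    # ... halved at the endpoints

K = np.exp(-np.outer(x, y)) * w      # K @ f approximates (Lf)(x)

s = np.linalg.svd(K, compute_uv=False)
print(f"largest / smallest singular value: {s[0]:.2e} / {s[-1]:.2e}")
print(f"condition number: {s[0] / s[-1]:.2e}")
```

The singular values decay roughly exponentially, down to the machine-precision floor, so a naive inverse would amplify even tiny noise in the data by many orders of magnitude. This is the practical face of ill-posedness, and it is why regularized inverses of the kind the authors derive are needed.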