The first Survey and Review article in this issue is “Symplectic Runge--Kutta Schemes for Adjoint Equations, Automatic Differentiation, Optimal Control, and More,” by J. M. Sanz-Serna. It looks at an aspect of geometric integration, a term coined by the article's author in 1997 to describe an area that revitalized research in the numerical solution of ODEs. A classic 1987 paper of Cooper characterized those Runge--Kutta methods that preserve quadratic invariants: if the ODE has solutions that live on a sphere, then these Runge--Kutta methods produce discrete approximations that also live on a sphere. Soon after, Lasagni, Sanz-Serna, and Suris independently proved that the same set of Runge--Kutta methods is symplectic; as a consequence, when applied to a Hamiltonian problem, they (essentially) produce the exact solution to a nearby problem that is also of Hamiltonian form. The study of symplecticness from the perspective of backward error and long-time integration has since become a core activity in geometric integration. This review article, however, surveys and unifies a collection of results that hark back to Cooper's original work, showing that, quite apart from symplecticness, many positive consequences arise from a method's ability to preserve quadratic invariants. Strong messages from this article are that (a) adjoint systems are a powerful tool for analysis, and (b) preservation of quadratic invariants (whether explicitly or under the hood) is a key to successful computation in this area; the two short sketches below illustrate these mechanisms. For those unfamiliar with adjoint equations, Figure 1 captures the main ideas in one concrete example. After introducing Lagrange multipliers in this context, the article describes results in automatic differentiation, optimal control, and classical mechanics. Overall, this work makes available and clarifies a range of contributions that have been scattered across the numerical analysis and applications literature.
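To make the quadratic-invariant property concrete, here is a minimal sketch (our own illustration, not code from the article; the matrix $S$ and the step size are arbitrary choices): for $\dot y = Sy$ with $S$ skew-symmetric, $\|y\|^2$ is a quadratic invariant, and the implicit midpoint rule, the simplest symplectic Runge--Kutta method, keeps the numerical solution on the sphere to round-off, whereas the explicit Euler method drifts off it.

\begin{verbatim}
import numpy as np

S = np.array([[0.0,  1.0, 0.0],
              [-1.0, 0.0, 2.0],
              [0.0, -2.0, 0.0]])   # skew-symmetric: d/dt ||y||^2 = 2 y^T S y = 0

h, n_steps = 0.1, 100
I = np.eye(3)
# For a linear problem the implicit midpoint step is the Cayley transform
# (I - hS/2)^{-1} (I + hS/2), an orthogonal matrix when S is skew-symmetric.
M = np.linalg.solve(I - 0.5 * h * S, I + 0.5 * h * S)

y_mid = np.array([1.0, 0.0, 0.0])
y_eul = y_mid.copy()
for _ in range(n_steps):
    y_mid = M @ y_mid                # stays on the unit sphere to round-off
    y_eul = y_eul + h * (S @ y_eul)  # ||y||^2 grows: adds h^2 ||S y||^2 each step

print("implicit midpoint: ||y|| =", np.linalg.norm(y_mid))  # ~ 1.0
print("explicit Euler:    ||y|| =", np.linalg.norm(y_eul))  # clearly > 1
\end{verbatim}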
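The link between messages (a) and (b) can also be checked in a few lines. For the variational system $\dot d = J d$ and its adjoint $\dot \lambda = -J^T \lambda$, the pairing $\lambda^T d$ is a quadratic invariant of the combined system, so any quadratic-invariant-preserving Runge--Kutta method conserves it exactly. The sketch below (again our illustration, with an arbitrary constant matrix $J$) verifies this for the implicit midpoint rule.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((4, 4))
J /= np.linalg.norm(J, 2)          # keep the dynamics mild
d = rng.standard_normal(4)         # variational system:  d'   =  J d
lam = rng.standard_normal(4)       # adjoint system:      lam' = -J^T lam
h, I = 0.05, np.eye(4)

M = np.linalg.solve(I - 0.5 * h * J,   I + 0.5 * h * J)    # midpoint map for d
N = np.linalg.solve(I + 0.5 * h * J.T, I - 0.5 * h * J.T)  # midpoint map for lam

print("lam^T d before:", lam @ d)
for _ in range(100):
    d, lam = M @ d, N @ lam
print("lam^T d after :", lam @ d)  # equal up to round-off, since N^T M = I
\end{verbatim}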
Our second article, “Approximating Spectral Densities of Large Matrices,” by Lin Lin, Yousef Saad, and Chao Yang, looks at a challenging linear algebra problem from physics. Computing one or more eigenvalues of a real, symmetric matrix $A$ is a classical problem in numerical analysis. The related density of states, or spectral density, problem surveyed here requires us to compute a continuous, nonnegative function, $\tilde \phi(t)$, that takes large values when $t$ is close to an eigenvalue of $A$. Computing the full spectrum of $A$ would immediately solve this problem, but in many realistic cases that approach is not feasible. Before discussing numerical methods, the authors deal with the issue of judging the accuracy of $\tilde \phi(t)$. This is not a clearly defined task, since the target “function” is a linear combination of Dirac distributions, and the authors adopt a particular error measure that allows for different levels of resolution (the standard setup is recalled below). Two classes of numerical method are then reviewed. The kernel polynomial method (KPM) expands $\tilde \phi(t)$ in terms of Chebyshev polynomials, with coefficients defined via moment matching (sketched below); this approach is shown to be closely related to the “spectroscopic” method originally proposed by Lanczos. The second main class is built on the classical Lanczos tridiagonalization method. Throughout the article the authors adopt a numerical linear algebra perspective, comparing a number of variations within the two key classes and covering a range of practical issues, including blurring, damping, and preserving nonnegativity. The error measure and numerical methods are then illustrated on a range of large-scale examples. Beyond those directly involved in spectral density computation, this article will appeal widely to readers interested in practical matrix computation.
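To fix ideas (our gloss, using the normalization standard in this literature, with $\lambda_1, \dots, \lambda_n$ denoting the eigenvalues of the $n \times n$ matrix $A$), the object being approximated is the distribution
\[
\phi(t) = \frac{1}{n} \sum_{j=1}^{n} \delta(t - \lambda_j),
\]
and a computable surrogate $\tilde \phi(t)$ replaces each Dirac spike by a smooth bump, for instance a Gaussian of width $\sigma$; the choice of $\sigma$ is precisely the resolution level at which accuracy is judged.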
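As a concrete illustration of the KPM idea, here is a minimal sketch (our own choices of test matrix, number of moments, and probe vectors; not the authors' code). After scaling $A$ so its spectrum lies in $[-1,1]$, the Chebyshev moments $\mu_k = \frac{1}{n}\operatorname{tr} T_k(A)$ are estimated with random sign vectors, and the truncated expansion is evaluated with Jackson damping to suppress Gibbs oscillations.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, M, n_vec = 200, 60, 20                 # matrix size, moments, probe vectors

# Random symmetric test matrix, rescaled so its spectrum lies in [-1, 1].
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
A /= 1.05 * np.linalg.norm(A, 2)

# mu_k ~ (1/n) tr T_k(A), estimated as an average of v^T T_k(A) v over
# random sign vectors v (Hutchinson's trace estimator), built with the
# three-term recurrence T_{k+1}(A)v = 2 A T_k(A)v - T_{k-1}(A)v.
mu = np.zeros(M + 1)
for _ in range(n_vec):
    v = rng.choice([-1.0, 1.0], size=n)
    t_prev, t_curr = v, A @ v             # T_0(A)v and T_1(A)v
    mu[0] += v @ t_prev
    mu[1] += v @ t_curr
    for k in range(2, M + 1):
        t_prev, t_curr = t_curr, 2 * (A @ t_curr) - t_prev
        mu[k] += v @ t_curr
mu /= n_vec * n

# Jackson damping coefficients g_k for the truncated Chebyshev series.
k = np.arange(M + 1)
g = ((M - k + 1) * np.cos(np.pi * k / (M + 1))
     + np.sin(np.pi * k / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# Reconstruct the smoothed spectral density on a grid in (-1, 1).
t = np.linspace(-0.99, 0.99, 500)
T = np.cos(np.outer(np.arccos(t), k))     # T_k(t) = cos(k arccos t)
phi = (g[0] * mu[0] + 2 * T[:, 1:] @ (g[1:] * mu[1:])) \
      / (np.pi * np.sqrt(1 - t**2))
\end{verbatim}

Note that the recurrence requires only matrix-vector products with $A$, which is what makes this family of methods attractive when computing the full spectrum is out of reach.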