The first Research Spotlights article in this issue is concerned with filtering, a task of paramount importance in a great many applications such as numerical weather prediction and geophysical data assimilation. Authors Alessio Spantini, Ricardo Baptista, and Youssef M. Marzouk, in their article “Coupling Techniques for Nonlinear Ensemble Filtering,” describe discrete-time filtering as the act of characterizing the sequence of conditional distributions of the latent field at observation times, given all currently available measurements. Despite the extensive literature on filtering, issues such as high-dimensional state spaces and observations that are sparse in both space and time still prove formidable in practice. The traditional approach to ensemble-based data assimilation is the ensemble Kalman filter (EnKF), which involves a prediction (forecasting) step followed by an analysis step. However, the authors note that the EnKF carries an intrinsic bias that limits its accuracy: the transformation used in the analysis step is linear and is estimated under Gaussian assumptions. To overcome this, they propose two non-Gaussian generalizations of the EnKF, the so-called stochastic and deterministic map filters, which use nonlinear transformations derived from couplings between the forecast distribution and the filtering distribution. What is crucial is that the transformations “can be estimated efficiently...perhaps using only convex optimization,” that they “are easy to `localize' in high dimensions,” and that their computation “should not become increasingly challenging as the variance of the observation noise decreases.” Following a comprehensive description of their new approaches, the authors demonstrate numerically the superiority of their stochastic map filter over the traditional EnKF. The subsequent discussion offers the reader several jumping-off points for future research.

Recovering a sparse solution to a large-scale optimization problem is another ubiquitous task, arising in applications such as image reconstruction, signal processing, and machine learning. The cost functional typically includes a regularization term, in the form of an $\ell_1$ norm of the solution and/or of a regularized version of the solution, to enforce sparsity. Designing suitable algorithms for such recovery problems is the subject of our second Research Spotlights article. In “Sparse Approximations with Interior Point Methods,” authors Valentina De Simone, Daniela di Serafino, Jacek Gondzio, Spyridon Pougkakiotis, and Marco Viola set out to correct the misconception that first-order methods should be preferred out of hand over second-order methods. Through case studies, they offer evidence that interior point methods (IPMs) that are constructed to “exploit special features of the problems in the linear algebra of IPMs” and that are designed “to take advantage of the expected sparsity of the optimal solution” can in fact be the method of choice for solving this class of optimization problems. The key to their approach is a reformulation of the original sparse approximation problem into one that is seemingly larger but that has properties which can be exploited for computational gain. For each of four representative applications, the authors show how to take computational advantage of the problem-specific structure of the linear systems that arise at each IPM iteration.
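For concreteness, a representative member of this class of problems (a generic sketch, not the authors' particular formulation) is the $\ell_1$-regularized least-squares problem
\[
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|Ax - b\|_2^2 + \tau\,\|x\|_1,
\]
where $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$ are the problem data and the parameter $\tau > 0$ balances data fidelity against the sparsity of $x$.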
These efforts are complemented by leveraging the expected sparsity: heuristics are employed to drop near-zero variables, thereby replacing very large, ill-conditioned intermediate systems with smaller, better-conditioned ones. Their conclusion is that time invested in tailoring solvers to the structure admitted by the reformulated problem, and in taking advantage of the expected sparsity, may be well spent, since their demonstrations show that IPMs can have a “noticeable advantage” over state-of-the-art first-order methods for sparse approximation problems.