Abstract

The two papers in this issue are of the analytic kind. The first one deals with symbol calculus, and the second one with compressed sensing.

In their paper “Discrete Symbol Calculus,” Laurent Demanet and Lexing Ying propose efficient representations for functions of elliptic operators. The idea is to replace the symbol of an operator A by a low-rank approximation. From this one can derive an efficient representation for $f(A)$, where f is a function such as the inverse, square root, or exponential. The low-rank approximations are constructed from rational Chebyshev interpolants and from hierarchical spline interpolants. The authors analyze how many terms are required for a low-rank approximation of specified accuracy, and they present numerical experiments. This appears to be an original and promising approach for reflection seismology and other areas of wave propagation.

Compressed sensing is one of mathematics' hot topics, and it has already made contributions to signal processing: compressed sensing can reduce the time for an MRI scan by a factor of 7. This is remarkable when you imagine that a small child may be able to hold still for 1 minute, but rarely for an eternity of 7 minutes.

So what is compressed sensing? Suppose we want to determine (“measure”) all N elements of an unknown vector x. The straightforward way would be to perform N inner products with canonical vectors, each canonical vector being responsible for one element of x. If x is sparse with only k nonzero elements, and we know the positions of these nonzero elements, then k inner products suffice to measure x. But what if we don't know the positions of the k nonzero elements? Can we still measure x with about k inner products? Look to compressed sensing for answers, and you'll be advised as follows: Perform the inner products in parallel, by means of a matrix-vector multiplication $Ax$. However, for the rows of A don't choose canonical vectors; instead choose, say, random vectors. The resulting algorithm for measuring x consists of two steps. The first (“encoding”) step determines a vector y from the matrix-vector product $y=Ax$. Thereupon A and y are fed as constraints to the second (“decoding”) step, which recovers x as the solution of the $\ell_1$ minimization problem $\min_z \|z\|_1$ subject to the constraint $Az=y$.

The performance of A in recovering x can be quantified by a RIP constant, where “RIP” stands for “restricted isometry property.” The RIP constant of A indicates how much A can deviate from behaving like an isometry when applied to vectors with k nonzero elements; more precisely, it indicates by how much A can distort the two-norm of a vector with k nonzero elements. RIP constants are the topic of the second paper, “Compressed Sensing: How Sharp Is the Restricted Isometry Property?” by Jeffrey Blanchard, Coralia Cartis, and Jared Tanner. They present tight bounds on RIP constants, and they introduce the framework of proportional-growth asymptotics in order to compare RIP bounds with two alternative approaches: polytope analysis and techniques from geometric functional analysis. This is an exciting paper and an exciting area, combining tools from matrix theory, probability, and functional analysis!
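
For reference (this is not spelled out in the abstract above, and normalizations vary between papers, so it should be read as the standard convention rather than necessarily the one used by Blanchard, Cartis, and Tanner): the RIP constant of order k is commonly defined as the smallest $\delta_k \ge 0$ such that $(1-\delta_k)\,\|z\|_2^2 \le \|Az\|_2^2 \le (1+\delta_k)\,\|z\|_2^2$ for every vector z with at most k nonzero elements. A small $\delta_k$ means that A acts nearly as an isometry on all k-sparse vectors.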
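
The measure-then-decode procedure described above is easy to prototype. The sketch below is not taken from either paper; it is a minimal illustration with made-up dimensions (N, m, k) that encodes a k-sparse vector with a random Gaussian matrix and decodes it by recasting the $\ell_1$ minimization as a linear program.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, k = 200, 60, 8                    # signal length, measurements, sparsity (illustrative)

# A k-sparse signal x with random support and random nonzero values
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

# Encoding step: m inner products with random rows, collected as y = A x
A = rng.standard_normal((m, N)) / np.sqrt(m)
y = A @ x

# Decoding step: min ||z||_1 subject to A z = y, recast as a linear program
# in the stacked variables (z, t) with the constraints -t <= z <= t, t >= 0.
c = np.concatenate([np.zeros(N), np.ones(N)])               # objective: sum of t
A_eq = np.hstack([A, np.zeros((m, N))])                     # A z = y
I = np.eye(N)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])]) # z - t <= 0 and -z - t <= 0
b_ub = np.zeros(2 * N)
bounds = [(None, None)] * N + [(0, None)] * N               # z free, t nonnegative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=bounds, method="highs")
z = res.x[:N]
print("recovery error:", np.linalg.norm(z - x))

With these dimensions the random matrix typically satisfies an RIP-type condition well enough that the $\ell_1$ decoder recovers x (up to solver tolerance) from only m = 60 measurements instead of N = 200.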
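
On the first paper's theme of representing $f(A)$ efficiently, the following sketch is emphatically not the discrete symbol calculus of Demanet and Ying (which builds low-rank approximations of the operator's symbol from rational Chebyshev and hierarchical spline interpolants); it is only a small, self-contained point of reference for the more familiar idea of approximating $f(A)$ for a symmetric matrix A through a polynomial Chebyshev interpolant of f on the spectral interval. The test matrix and the choice f = square root are assumptions made purely for illustration.

import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

rng = np.random.default_rng(1)
n = 100
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)             # symmetric positive definite test matrix

eigs = np.linalg.eigvalsh(A)            # spectral interval [a, b] (computed exactly here)
a, b = eigs[0], eigs[-1]

f = np.sqrt                             # target function, e.g. the square root
deg = 30
p = Chebyshev.interpolate(f, deg, domain=[a, b])   # Chebyshev interpolant of f on [a, b]

# Evaluate p(A) with the Chebyshev three-term recurrence on the shifted matrix
B = (2.0 * A - (a + b) * np.eye(n)) / (b - a)      # spectrum mapped into [-1, 1]
coef = p.coef
fA = coef[0] * np.eye(n) + coef[1] * B
T_prev, T_curr = np.eye(n), B
for cj in coef[2:]:
    T_next = 2.0 * B @ T_curr - T_prev
    fA += cj * T_next
    T_prev, T_curr = T_curr, T_next

# Compare with the reference f(A) computed from an eigendecomposition
w, V = np.linalg.eigh(A)
ref = (V * f(w)) @ V.T
print("relative error:", np.linalg.norm(fA - ref) / np.linalg.norm(ref))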
