Abstract
In compressed sensing one uses known structure of otherwise unknown signals to recover them from as few linear observations as possible. The structure comes in the form of some compressibility, including different notions of sparsity and low-rankness. In many cases convex relaxations allow one to solve the inverse problems efficiently using standard convex solvers at almost-optimal sampling rates. A standard practice to account for multiple simultaneous structures in convex optimization is to add further regularizers or constraints. From the compressed sensing perspective, one then hopes to also improve the sampling rate. Unfortunately, when taking simple combinations of regularizers, this does not automatically seem to be the case, as has been shown for several examples in recent works. Here, we give an overview of ideas for combining multiple structures in convex programs by taking weighted sums and weighted maximums. We explicitly discuss cases where optimal weights are used, reflecting an optimal tuning of the reconstruction. In particular, we extend known lower bounds on the number of required measurements to the optimally weighted maximum by using geometric arguments. As examples, we discuss simultaneously low-rank and sparse matrices and notions of matrix norms (in the "square deal" sense) as regularizers for tensor products. We state an SDP formulation for numerically estimating the statistical dimensions and find a tensor case where the lower bound is roughly met up to a factor of two.
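To give a feel for such statistical-dimension estimates, the following sketch numerically estimates the statistical dimension of the descent cone of the ℓ1-norm at a sparse vector by Monte Carlo, using the fact that the polar of the descent cone is the cone generated by the subdifferential. This is an illustrative toy computation, not the paper's SDP formulation; the use of NumPy and cvxpy and the function name are assumptions for the example.

```python
import numpy as np
import cvxpy as cp

def l1_descent_cone_statdim(x, n_samples=200, seed=0):
    # Illustrative sketch (hypothetical helper, not from the paper):
    # Monte Carlo estimate of delta(D(||.||_1, x)) via
    #   delta = E[ dist(g, cone(subdiff ||x||_1))^2 ],  g ~ N(0, I),
    # using polarity between the descent cone and cone(subdiff).
    rng = np.random.default_rng(seed)
    n = x.size
    supp = np.flatnonzero(x)
    off = np.flatnonzero(x == 0)
    signs = np.sign(x[supp])
    vals = []
    for _ in range(n_samples):
        g = rng.standard_normal(n)
        t = cp.Variable(nonneg=True)   # scaling of the subdifferential
        y = cp.Variable(n)             # point in cone(subdiff ||x||_1)
        cons = [y[supp] == t * signs]  # fixed signs on the support
        if off.size:
            cons.append(cp.abs(y[off]) <= t)  # |y_i| <= t off the support
        prob = cp.Problem(cp.Minimize(cp.sum_squares(g - y)), cons)
        prob.solve()
        vals.append(prob.value)
    return float(np.mean(vals))

# Toy check: a 5-sparse vector in R^50; the estimate should scale like
# s * log(n/s) up to constants, matching the known phase-transition theory.
x0 = np.zeros(50)
x0[:5] = 1.0
print(l1_descent_cone_statdim(x0, n_samples=50))
```

For richer regularizers, such as the weighted maximums of norms discussed above, the distance to the generated cone is what the paper's SDP formulation computes; the simple ℓ1 case here only illustrates the Monte Carlo outer loop.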
Highlights
The recovery of an unknown signal from a limited number of observations can be made more efficient by exploiting compressibility and a priori known structure of the signal.
This is another semi-norm for which one can find tractable semidefinite programming relaxations based on so-called θ-bodies [8]. These norms provide promising candidates for efficient and guaranteed reconstructions. Following this idea of atomic norm decompositions [2], a single regularizer was found by Richard et al. [9] that again yields optimal sampling rates, at the price that the reconstruction is not given by a tractable convex program.
Such matrices occur in sparse phase retrieval [5, 17, 18], dictionary learning and sparse encoding [19], sparse matrix approximation [20], sparse PCA [21], bilinear compressed sensing problems like sparse blind deconvolution [22, 23, 24, 25, 26, 27] or, more generally, sparse self-calibration [28].
Summary
The recovery of an unknown signal from a limited number of observations can be made more efficient by exploiting compressibility and a priori known structure of the signal. To mention some more recent directions: block-, group-, and hierarchical sparsity, low-rankness in matrix or tensor recovery problems, and the generic concept of atomic decompositions are important structures present in many estimation problems. In most of these cases, convex relaxations render the inverse problem itself amenable to standard solvers at almost-optimal sampling rates and show tractability from a theoretical perspective [1]. The ℓ1-norm can be used to regularize for sparsity and the nuclear norm for low-rankness of matrices.
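As a minimal illustration of these two standard regularizers, the sketch below recovers a sparse vector from Gaussian measurements by ℓ1-minimization; the setup (dimensions, measurement model) and the use of cvxpy are assumptions for the example, not part of the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, s = 100, 40, 5  # ambient dimension, measurements, sparsity (toy values)

# Ground truth: an s-sparse vector observed through a Gaussian map
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x0

# l1-minimization as the convex relaxation for sparsity
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()
print("recovery error:", np.linalg.norm(x.value - x0))

# For low-rank matrix recovery one would analogously minimize the
# nuclear norm, e.g. cp.normNuc(X), subject to linear measurements of X.
```

With these toy numbers, m = 40 comfortably exceeds the ℓ1 phase transition of roughly 2s log(n/s) measurements, so the recovery error should be near zero.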