Motivated by the observation that a given signal $x$ admits sparse representations in multiple dictionaries $\Psi_d$, but with varying levels of sparsity across dictionaries, we propose two new algorithms for the reconstruction of (approximately) sparse signals from noisy linear measurements. Our first algorithm, Co-L1, extends the well-known lasso algorithm from the L1 regularizer $\Vert\Psi x\Vert_1$ to composite regularizers of the form $\sum_d \lambda_d \Vert\Psi_d x\Vert_1$ while self-adjusting the regularization weights $\lambda_d$. Our second algorithm, Co-IRW-L1, extends the well-known iteratively reweighted L1 algorithm to the same family of composite regularizers. We provide several interpretations of both algorithms: 1) majorization-minimization (MM) applied to a non-convex log-sum-type penalty; 2) MM applied to an approximate $\ell_0$-type penalty; 3) MM applied to Bayesian MAP inference under a particular hierarchical prior; and 4) variational expectation maximization (VEM) under a particular prior with deterministic unknown parameters. A detailed numerical study suggests that our proposed algorithms yield significantly improved recovery SNR when compared to their non-composite L1 and IRW-L1 counterparts.
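As a point of reference only, the sketch below illustrates the general shape of a composite-L1 reconstruction with an MM-style reweighting loop. The measurement matrix, the two analysis dictionaries (identity and first-order differences), the weight update $\lambda_d \propto 1/(\Vert\Psi_d x\Vert_1 + \epsilon)$, and the iteration count are all illustrative assumptions made for this sketch, not the paper's exact Co-L1 iteration.

```python
# Illustrative sketch (not the paper's exact Co-L1 update): reconstruct x from
# noisy measurements y = A x + w by minimizing
#     0.5 * ||y - A x||_2^2 + sum_d lambda_d * ||Psi_d x||_1,
# re-adjusting each lambda_d between solves, MM-style.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Two example analysis dictionaries (assumed for illustration):
# identity (sparsity in x itself) and first-order differences (TV-like sparsity).
Psi = [np.eye(n), np.diff(np.eye(n), axis=0)]

# Ground-truth signal that is sparse under the difference operator (piecewise constant).
x_true = np.repeat(rng.standard_normal(4), n // 4)
y = A @ x_true + 0.01 * rng.standard_normal(m)

eps = 1e-3
lam = [1.0 for _ in Psi]  # initial regularization weights
for it in range(10):
    x = cp.Variable(n)
    penalty = sum(l * cp.norm(P @ x, 1) for l, P in zip(lam, Psi))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - A @ x) + penalty)).solve()
    x_hat = x.value
    # Hypothetical MM-style reweighting: dictionaries under which x_hat is
    # sparser (small ||Psi_d x_hat||_1) receive larger weights next iteration.
    lam = [P.shape[0] / (np.linalg.norm(P @ x_hat, 1) + eps) for P in Psi]

print("recovery SNR (dB):",
      20 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_hat)))
```

The reweighting step captures the self-adjustment idea described above: a dictionary under which the current estimate is sparser contributes a larger weight to the next subproblem, so the composite penalty automatically emphasizes the more appropriate dictionaries.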