Abstract

The groundbreaking theory of compressive sensing (CS) enables reconstructing many common classes of real-world signals from a number of samples well below that prescribed by the Shannon sampling theorem, which relates exclusively to the bandwidth of the signal. Instead, CS exploits the sparsity or compressibility of the signals in an appropriate basis to reconstruct them from few measurements. A large number of algorithms exist for solving the sparse recovery problem; they can be roughly classified into greedy pursuits and $l_1$ minimization algorithms. Chambolle and Pock's (C&P) primal-dual $l_1$ minimization algorithm has been shown to deliver state-of-the-art results with an optimal convergence rate. In this work we present an algorithm for $l_1$ minimization that operates in the null space of the measurement matrix and follows a Nesterov-accelerated gradient descent structure. Restriction to the null space allows the algorithm to operate in a subspace of minimal dimension. A further novelty lies in the fact that the cost function is no longer the $l_1$ norm of the current iterate, but a weighted sum of its entropy and its $l_1$ norm. The inclusion of the entropy pushes the $l_1$ minimization towards a de facto quasi-$l_0$ minimization, while the $l_1$ norm term prevents divergence. Our algorithm globally outperforms C&P and other recent approaches to $l_1$ minimization, including a different entropy-based method, in terms of $l_2$ reconstruction error.
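The abstract describes the method only at a high level. As a hedged illustration of the general idea, not the authors' actual algorithm, the sketch below minimizes a weighted entropy plus $l_1$ objective over the affine solution set $\{x : Ax = y\}$ by parameterizing it through the null space of the measurement matrix and applying Nesterov-accelerated gradient descent. The function name `entropy_l1_recover`, the smoothing of $|x_i|$, the step size, the iteration count, and the weight `alpha` are all assumptions made for this sketch.

```python
import numpy as np
from scipy.linalg import null_space

def entropy_l1_recover(A, y, alpha=0.5, step=1e-2, iters=2000, eps=1e-10):
    """Illustrative sketch (not the paper's algorithm): Nesterov-accelerated
    descent on  alpha * H(x) + ||x||_1  restricted to {x : A x = y}, where
    H(x) = -sum_i p_i log p_i with p_i = |x_i| / ||x||_1."""
    x0 = np.linalg.pinv(A) @ y     # particular (least-norm) solution of A x = y
    N = null_space(A)              # orthonormal basis of null(A), shape (n, n-m)
    z = np.zeros(N.shape[1])       # coordinates in the null space
    v = z.copy()                   # Nesterov extrapolation point
    for k in range(1, iters + 1):
        x = x0 + N @ v             # feasible iterate: A x = y by construction
        a = np.sqrt(x**2 + eps)    # smoothed |x_i|, keeps the objective differentiable
        s = a.sum()                # smoothed ||x||_1
        p = a / s
        H = -(p * np.log(p)).sum()
        # dH/da_i = -(log p_i + H)/s ; d||x||_1/da_i = 1
        g_a = alpha * (-(np.log(p) + H) / s) + 1.0
        g_x = g_a * (x / a)        # chain rule through the smoothed absolute value
        g_z = N.T @ g_x            # gradient restricted to null-space coordinates
        z_new = v - step * g_z
        v = z_new + (k - 1) / (k + 2) * (z_new - z)   # Nesterov momentum
        z = z_new
    return x0 + N @ z
```

Minimizing the entropy of the normalized magnitudes $p_i$ concentrates the mass of the iterate on few entries, which is one plausible way to read the abstract's claim that the entropy term pushes the solution towards a quasi-$l_0$ minimizer; note that the resulting objective is nonconvex, so the sketch carries no convergence guarantee.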
