Sparse recovery has attracted considerable attention in the signal processing community in recent years because of its widespread use in many applications. Although many convex and nonconvex methods have been proposed for this problem, most of them are computationally expensive. Recently, the learned iterative shrinkage-thresholding algorithm (LISTA) and its variants have been applied to sparse recovery; these methods are based on specific deep neural networks designed by unfolding ISTA. However, the problem they solve is regularized by the ℓ1 norm, which is less effective at promoting sparseness than some nonconvex sparsity-inducing penalties (SIPs). Motivated by the fact that problems regularized by these SIPs can be solved by proximal operator methods, we devise a network named learning proximal operator method (LePOM) for sparse recovery. For LePOM on a general class of SIPs, a necessary condition for its convergence is established. Based on this condition, a simplified network with fewer parameters is proposed. Theoretical analysis of this network inspires us to further reduce the number of parameters, leading to Analytical LePOM (ALePOM). ALePOM determines most of its parameters by solving an optimization problem and thereby significantly reduces the number of free parameters. Theoretical analysis shows that if the signal is sufficiently sparse, ALePOM converges linearly. Simulations confirm our analyses and demonstrate that the proposed solutions outperform state-of-the-art sparse recovery algorithms and neural network-based methods.
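As background for the unfolding idea mentioned above, the sketch below shows plain ISTA with the ℓ1 proximal operator (soft-thresholding), the iteration that LISTA unfolds into network layers with learned per-layer parameters. It is only an illustration under standard assumptions, not the proposed LePOM/ALePOM, and the problem sizes and regularization weight are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Unfolding a fixed number of these iterations into layers, with the
    step size and threshold learned per layer, gives LISTA-style networks.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: recover a sparse vector from compressed measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Replacing the soft-thresholding step with the proximal operator of a nonconvex SIP is the kind of substitution the proposed LePOM builds on; the details of that construction are given in the full paper.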