Abstract

Sparse recovery is a strategy for reconstructing a signal by obtaining sparse solutions of underdetermined linear systems. As an important feature of a signal, sparsity is often measured by the ℓ1-norm. However, the approximate solutions obtained via ℓ1-norm regularization tend to underestimate the original signal. To overcome this drawback, we employ a class of nonconvex penalty functions proposed by Selesnick and Farshchian, which preserves convexity of the cost function under certain conditions. To solve the resulting problem, we suggest a nonmonotone modification of the generalized shrinkage conjugate gradient method proposed by Esmaeili et al., based on a modified secant equation. We establish global convergence of the method under standard assumptions. Numerical experiments are reported to shed light on the performance of the proposed method.
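As a minimal illustration of the underestimation effect mentioned above (this is a generic sketch of ℓ1 proximal shrinkage, not the authors' method): the proximal operator of λ‖x‖₁ is the soft-thresholding map, which reduces the magnitude of every surviving coefficient by λ, biasing estimates toward zero.

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of lam * ||x||_1 (soft thresholding):
    S_lam(y) = sign(y) * max(|y| - lam, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# A sparse signal with a few large entries (illustrative values).
x_true = np.array([0.0, 5.0, 0.0, -3.0, 0.0])
lam = 0.5

# Even applied to the noise-free signal, the l1 proximal map shrinks
# every nonzero entry toward zero by lam -> systematic underestimation.
x_hat = soft_threshold(x_true, lam)
# x_hat == [0.0, 4.5, 0.0, -2.5, 0.0]
```

Nonconvex penalties such as the generalized minimax-concave class referenced in the abstract are designed to reduce exactly this shrinkage bias while keeping the overall cost function convex.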
