Abstract

Compressed sensing (CS) exploits the compressibility of natural signals to reduce the number of samples required for accurate reconstruction. The price of sub-Nyquist sampling is computationally expensive reconstruction, typically involving large-scale ℓ1 optimization. Consequently, first-order optimization methods that exploit only the gradient of the reconstruction cost function have been developed; notable examples include iterative soft thresholding (IST), the fast iterative soft-thresholding algorithm (FISTA), and approximate message passing (AMP). The performance of these algorithms has been studied mainly in the standard framework of convex optimization, called the deterministic framework here. In this paper, we first show that the deterministic approach yields overly pessimistic conclusions that are not indicative of algorithm performance in practice. As an alternative to the deterministic framework, we then study the theoretical aspects of the statistical convergence rate, a topic that has remained unexplored in the sparse recovery literature. Our theoretical and empirical studies reveal several hallmark properties of the statistical convergence of first-order methods, including universality over the matrix ensemble and the least favorable coefficient distribution.
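As context for the first-order methods named above, the following is a minimal sketch of IST applied to the ℓ1-regularized least-squares problem min_x 0.5·||y − Ax||² + λ||x||₁, which is a standard formulation for CS reconstruction. The step size 1/||A||², the regularization weight, and the synthetic Gaussian measurement setup are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(A, y, lam, n_iters=500):
    """Iterative soft thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1.

    Each iteration takes a gradient step on the least-squares term and
    then applies soft thresholding (the proximal map of the l1 penalty).
    Only the gradient of the cost is used, which is what makes IST a
    first-order method.
    """
    # Step size 1/L, with L = ||A||_2^2 the Lipschitz constant of the
    # gradient of the least-squares term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)              # gradient of 0.5*||y - A x||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Hypothetical example: recover a sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 400, 150, 10                        # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ist(A, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

FISTA accelerates this scheme by adding a momentum term to the gradient step, and AMP adds an Onsager correction to the residual; both retain the same per-iteration cost of two matrix-vector products.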
