Abstract

This paper addresses the problem of 1-bit compressive sampling. We introduce an optimization model for the reconstruction of sparse signals from 1-bit measurements. The model seeks a solution with the smallest ℓ0-norm among all signals satisfying the consistency constraints stemming from the 1-bit measurements. We develop an algorithm for solving the model and present a convergence analysis. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0-norm and to solve the resulting problems by exploiting the proximity operator. We examine the performance of the proposed algorithm and compare it with state-of-the-art algorithms for 1-bit compressive sampling reconstruction: the renormalized fixed point iteration (RFPI) (Boufounos and Baraniuk, 1-bit compressive sensing, 2008; Movahed et al., A robust RFPI-based 1-bit compressive sensing reconstruction algorithm, 2012), generalized approximate message passing (GAMP) (Kamilov et al., IEEE Signal Process. Lett. 19(10):607–610, 2012), linear programming (LP) (Plan and Vershynin, Commun. Pure Appl. Math. 66:1275–1297, 2013), and binary iterative hard thresholding (BIHT) (Jacques et al., IEEE Trans. Inf. Theory 59:2082–2102, 2013).
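For context, the following minimal Python sketch (our own illustration, not the paper's code; the names Phi, x, y and the dimensions are arbitrary) shows what 1-bit measurements and the resulting sign-consistency constraints look like: a Gaussian matrix maps a sparse signal to real-valued measurements, only their signs are retained, and a candidate reconstruction is called consistent when it reproduces those signs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n-dimensional signal, m one-bit measurements, sparsity s.
n, m, s = 256, 128, 8
Phi = rng.standard_normal((m, n))      # Gaussian measurement matrix

# An s-sparse signal on the unit sphere (sign measurements discard amplitude).
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

y = np.sign(Phi @ x)                   # 1-bit measurements: only the signs are kept

def is_consistent(x_hat, Phi, y):
    """Sign-consistency check: y_i * <phi_i, x_hat> >= 0 for every measurement."""
    return bool(np.all(y * (Phi @ x_hat) >= 0))

print(np.count_nonzero(x), is_consistent(x, Phi, y))   # prints: 8 True
```

Any reconstruction algorithm of the kind compared in the abstract searches, in one way or another, for a sparse vector that passes a consistency check of this type.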

Highlights

  • Compressive sampling is a recent advance in signal acquisition [1, 2]

  • It has been demonstrated that a sparse signal x can be recovered exactly from y if the measurement matrix has i.i.d. Gaussian entries and satisfies the restricted isometry property [2]

  • We introduce a new ℓ0 minimization model over a convex set determined by consistency constraints for 1-bit compressive sampling recovery, and we develop an algorithm for solving the proposed model (a generic form of such a model is sketched below)
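As a point of reference, a generic ℓ0 model of this kind can be written as follows; this is a hedged sketch of the standard formulation in the 1-bit compressive sensing literature, and the exact constraint set and normalization used in the paper may differ:

```latex
\min_{x \in \mathbb{R}^n} \ \|x\|_0
\quad \text{subject to} \quad
y_i \langle \phi_i, x \rangle \ge 0, \; i = 1, \dots, m,
\qquad
\sum_{i=1}^{m} y_i \langle \phi_i, x \rangle = m,
```

where φi is the i-th row of the measurement matrix and yi ∈ {−1, +1} is the i-th sign measurement. The inequality constraints encode sign consistency, and the linear equality constraint rules out the trivial solution x = 0, so the feasible set is convex even though the ℓ0 objective is not.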


Summary

Introduction

Compressive sampling is a recent advance in signal acquisition [1, 2]. It provides a method to reconstruct a sparse signal x ∈ R^n from linear measurements y = Φx, where Φ denotes the measurement matrix.

To further study the properties of the sequence {x^(k) : k ∈ N} generated by Algorithm 1, the matrix B is required to have the range space property (RSP), originally introduced in [23]. With this property, and motivated by the work in [23], we prove that Algorithm 1 yields a sparse solution of model (12): if the transpose of the matrix B in Algorithm 1 possesses the RSP, then Algorithm 1 leads to a sparse solution of model (12). To this end, we define a mapping σ : R^d → R^d such that the i-th component of the vector σ(u) is the i-th largest component of |u|. Together with Lemma 5, this shows that when the transpose of B satisfies the RSP, there exists a cluster point of the sequence generated by Algorithm 1 that is sparse and satisfies the consistency condition.
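The mapping σ has a direct implementation: it simply rearranges the absolute values of the entries in nonincreasing order. A minimal Python sketch (the function name sigma is ours, not the paper's) is:

```python
import numpy as np

def sigma(u):
    """Return the vector whose i-th entry is the i-th largest component of |u|."""
    return np.sort(np.abs(np.asarray(u, dtype=float)))[::-1]

print(sigma([0.3, -2.0, 0.0, 1.5]))   # [2.  1.5 0.3 0. ]
```

Applied to an iterate x^(k), the tail of sigma(x^(k)) can be read as a measure of how far that iterate is from being sparse, which is the role such a rearrangement typically plays in analyses of this type.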

