Abstract

In this article, we propose a weighted ℓ2,1 minimization algorithm for the jointly-sparse signal recovery problem. The proposed algorithm exploits the relationship between the noise subspace and the overcomplete basis matrix to design the weights: large weights are assigned to the entries whose indices are more likely to lie outside the row support of the jointly sparse signals, so that those indices are expelled from the row support of the solution, while small weights are assigned to the entries whose indices correspond to the row support, so that the solution tends to retain those indices. Compared with regular ℓ2,1 minimization, the proposed algorithm not only further enhances the sparseness of the solution but also reduces the requirements on both the number of snapshots and the signal-to-noise ratio (SNR) for stable recovery. Both simulations and experiments on real data demonstrate that the proposed algorithm outperforms the ℓ1-SVD algorithm, which directly applies ℓ2,1 minimization, for both deterministic and random basis matrices.
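To make the weighting idea concrete, the following is a minimal sketch of noise-subspace weighting combined with a weighted ℓ2,1-regularized fit. The function name, the variables (Y, A, num_sources, lam), and the use of CVXPY are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the paper's implementation): weights derived from the
# noise subspace, then a weighted l2,1-regularized fit solved with CVXPY.
import numpy as np
import cvxpy as cp

def weighted_l21_recover(Y, A, num_sources, lam=1.0):
    """Y: M x L snapshot matrix, A: M x N overcomplete basis, num_sources: assumed model order."""
    M, L = Y.shape
    N = A.shape[1]

    # Noise subspace: left singular vectors of Y beyond the assumed signal subspace.
    U, _, _ = np.linalg.svd(Y, full_matrices=True)
    Un = U[:, num_sources:]

    # A column of A lying close to the noise subspace is unlikely to belong to the
    # row support, so it receives a large weight; columns close to the signal
    # subspace receive small weights.
    w = np.linalg.norm(Un.conj().T @ A, axis=0) / np.linalg.norm(A, axis=0)

    # Weighted l2,1 minimization: each row of X is penalized by its own weight.
    X = cp.Variable((N, L), complex=True)
    row_norms = cp.hstack([cp.norm(X[k, :], 2) for k in range(N)])
    obj = cp.Minimize(0.5 * cp.sum_squares(Y - A @ X)
                      + lam * cp.sum(cp.multiply(w, row_norms)))
    cp.Problem(obj).solve()
    return X.value, w
```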

Highlights

  • In recent years, sparse signal recovery has attracted a great deal of attention from the signal processing community [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]

  • We focus on the noisy multiple measurement vectors (MMV) case and propose a jointly-sparse signal recovery algorithm that uses the relationship between the noise subspace and the overcomplete basis to weight the jointly sparse signals, extending the essence of the iterative reweighted ℓ1 minimization in [12] from the single measurement vector (SMV) case to the MMV case

  • We propose an effective weighted ℓ2,1 minimization algorithm that exploits the relationship between the noise subspace and the overcomplete basis matrix to obtain the weights for the jointly-sparse signal recovery problem

Summary

Introduction

Sparse signal recovery has attracted a great deal of attention from the signal processing community [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. The number of sources can be determined by exploiting information theoretic criteria such as the Akaike information criterion (AIC) [26] and the minimum description length (MDL) criterion [27]. These methods require the eigenvalues of the sample correlation matrix. In some situations, for example, when the signal-to-noise ratio (SNR) is very low or the number of snapshots is very small, the classical AIC and MDL rules are likely to overestimate or underestimate the number of sources. Another interesting issue is the robustness of the proposed SW ℓ2,1-SVD algorithm to the estimate of the number of sources. The overcomplete basis matrix A = [a(j1), ..., a(jK)] is a deterministic basis matrix under this condition, where the vector a(jk) denotes the array steering vector and jk is the kth sampling grid point.
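As a concrete illustration of how AIC and MDL use the eigenvalues of the sample correlation matrix, here is a minimal sketch following the standard Wax-Kailath formulation; the function estimate_num_sources and its variable names are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of AIC/MDL model-order selection from the eigenvalues of the
# sample correlation matrix (standard Wax-Kailath formulation); assumes the
# number of snapshots L is at least the number of sensors M.
import numpy as np

def estimate_num_sources(Y):
    """Y: M x L snapshot matrix; returns (k_aic, k_mdl)."""
    M, L = Y.shape
    R = (Y @ Y.conj().T) / L                       # sample correlation matrix
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]     # eigenvalues, descending

    aic, mdl = [], []
    for k in range(M):
        tail = eig[k:]                             # the M - k smallest eigenvalues
        g = np.exp(np.mean(np.log(tail)))          # geometric mean
        a = np.mean(tail)                          # arithmetic mean
        ll = (M - k) * L * np.log(g / a)           # log-likelihood term (<= 0)
        aic.append(-2.0 * ll + 2.0 * k * (2 * M - k))
        mdl.append(-ll + 0.5 * k * (2 * M - k) * np.log(L))
    return int(np.argmin(aic)), int(np.argmin(mdl))
```

Both criteria balance the same log-likelihood term against a penalty that grows with the candidate model order; MDL's penalty grows with the number of snapshots, which is why the two rules can disagree at low SNR or with few snapshots.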

Localization accuracy
Sparse recovery for random basis matrix
High resolution radar imaging via sparse recovery
Conclusion