Abstract

Compressed sensing is a signal processing technique in which data is acquired directly in a compressed form. There are two modeling approaches that can be considered: the worst-case (Hamming) approach and a statistical approach, in which the signals are modeled as random processes rather than as individual sequences. In this paper, the second approach is studied. In particular, we consider a model of the form Y = HX + W, where each component of X is given by X_i = S_i U_i, where {U_i} are independent and identically distributed (i.i.d.) Gaussian random variables, {S_i} are binary i.i.d. random variables independent of {U_i}, H ∈ ℝ^{k×n} is a random matrix with i.i.d. entries, and W is white Gaussian noise. Using a direct relationship between optimum estimation and certain partition functions, and by invoking methods from statistical mechanics and from random matrix theory (RMT), we derive an asymptotic formula for the minimum mean-square error (MMSE) of estimating the input vector X given Y and H, as k, n → ∞, with the measurement rate R = k/n held fixed. In contrast to previous derivations, which are based on the replica method, the analysis carried out in this paper is rigorous.

Highlights

The fundamental limit of compressed sensing for i.i.d. signals is considered.
The asymptotic minimum mean-square error is derived rigorously.
Heuristic results predicted by the replica method are justified.
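
As a rough illustration of the signal model described above, the sketch below simulates the Bernoulli-Gaussian observation model Y = HX + W. The parameter choices (n, R, the sparsity level p, the noise level sigma_w, and the 1/sqrt(n) scaling of H) are illustrative assumptions, not values taken from the paper, and the linear MMSE estimator is used only as a computable baseline: the paper's asymptotic MMSE refers to the optimal conditional-mean estimator E[X | Y, H], which is intractable to evaluate exactly for large n.

```python
import numpy as np

# Illustrative simulation of the Bernoulli-Gaussian compressed sensing model
# Y = H X + W. All parameter values below are assumptions for the sketch,
# not settings from the paper.

rng = np.random.default_rng(0)

n = 1000          # signal dimension
R = 0.5           # measurement rate R = k / n
k = int(R * n)    # number of measurements
p = 0.1           # P(S_i = 1): sparsity of the Bernoulli support (assumed)
sigma_w = 0.1     # noise standard deviation (assumed)

# X_i = S_i * U_i with U_i ~ N(0, 1) i.i.d. and S_i ~ Bernoulli(p) i.i.d.
U = rng.standard_normal(n)
S = rng.random(n) < p
X = S * U

# H has i.i.d. Gaussian entries; the 1/sqrt(n) scaling is an assumed normalization.
H = rng.standard_normal((k, n)) / np.sqrt(n)

# White Gaussian noise and the observation vector.
W = sigma_w * rng.standard_normal(k)
Y = H @ X + W

# Crude baseline: the linear MMSE estimate, whose MSE upper-bounds the true MMSE
# attained by the (intractable) conditional-mean estimator E[X | Y, H].
Cx = p * np.eye(n)                                             # Cov(X) = p * I
G = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + sigma_w**2 * np.eye(k))
X_lmmse = G @ Y

mse = np.mean((X - X_lmmse) ** 2)
print(f"empirical per-component MSE of the LMMSE baseline: {mse:.4f}")
```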
