Abstract

We prove that a random linear code over F_q, with probability arbitrarily close to 1, is list decodable at radius 1 − 1/q − ε with list size L = O(1/ε²) and rate R = Ω_q(ε²/log³(1/ε)). Up to the polylogarithmic factor in 1/ε and constant factors depending on q, this matches the lower bound L = Ω_q(1/ε²) on the list size and the upper bound R = O_q(ε²) on the rate. Previously, only the existence (and not the abundance) of such codes was known for the special case q = 2 (Guruswami, Håstad, Sudan and Zuckerman, 2002).

In order to obtain our result, we employ a relaxed version of the well-known Johnson bound on list decoding that translates the average Hamming distance between codewords into list-decoding guarantees. We furthermore prove that the desired average-distance guarantees hold for a code provided that a natural complex matrix encoding the codewords satisfies the Restricted Isometry Property with respect to the Euclidean norm (RIP-2). For the case of random binary linear codes, this matrix coincides with a random submatrix of the Hadamard-Walsh transform matrix that is well studied in the compressed sensing literature.

Finally, we improve the analysis of Rudelson and Vershynin (2008) on the number of random frequency samples required for exact reconstruction of k-sparse signals of length N. Specifically, we improve the number of samples from O(k log N · log² k · (log k + log log N)) to O(k log N · log³ k). The proof involves bounding the expected supremum of a related Gaussian process through an improved analysis of the metric defined by the process. This improvement is crucial for our application to list decoding.
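The Hadamard-Walsh matrix mentioned above can be illustrated concretely. The sketch below (not code from the paper; the function names and the omission of normalization and the complex embedding are my simplifications) builds the ±1 Hadamard-Walsh matrix via the standard Sylvester recursion and draws a random row submatrix, the kind of object whose RIP-2 behavior the abstract refers to:

```python
import random

def hadamard(n):
    # Sylvester construction: H_1 = [1], H_{2m} = [[H_m, H_m], [H_m, -H_m]].
    # n must be a power of 2; rows are pairwise orthogonal with H H^T = n I.
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def random_row_submatrix(H, m, seed=0):
    # Sample m distinct rows uniformly at random (the "random frequency
    # samples" of the compressed sensing setting, without normalization).
    rng = random.Random(seed)
    rows = rng.sample(range(len(H)), m)
    return [H[i] for i in rows]

N = 8
H = hadamard(N)

# Sanity check: rows of H are orthogonal, so H H^T = N I.
for i in range(N):
    for j in range(N):
        dot = sum(H[i][k] * H[j][k] for k in range(N))
        assert dot == (N if i == j else 0)

A = random_row_submatrix(H, 3)
```

The sample-complexity bounds quoted in the abstract concern how many such random rows m suffice (as a function of the sparsity k and signal length N) for the submatrix to act as a near-isometry on sparse vectors.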
