Abstract

For the additive white Gaussian noise channel with an average power constraint, sparse superposition codes (or sparse regression codes), proposed by Barron and Joseph in 2010, achieve capacity. While the codewords of the original sparse superposition codes are formed from a dictionary matrix drawn from a Gaussian distribution, we consider the case where it is drawn from a Bernoulli distribution. We show an improved upper bound on the block error probability under least squares decoding, which is considerably simpler and tighter than our previous result from 2014.
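
To make the construction concrete, the following toy sketch illustrates sparse superposition encoding with a Bernoulli (±1) dictionary and exhaustive least squares decoding. It follows the standard SPARC structure (L sections of M columns, one nonzero coefficient per section); the specific parameter values, variable names, and brute-force search are illustrative assumptions for this sketch, not the paper's setup or an efficient decoder.

```python
# Toy SPARC encoding over an AWGN channel with a Bernoulli (+/-1)
# dictionary, plus exhaustive least squares decoding (illustrative sizes).
import itertools
import numpy as np

rng = np.random.default_rng(0)

L, M = 3, 4             # L sections, M columns per section (N = L*M columns)
n = 32                  # block length
P, sigma2 = 1.0, 0.25   # average power constraint and noise variance

# Dictionary with i.i.d. equiprobable +/-1 (Bernoulli) entries.
A = rng.choice([-1.0, 1.0], size=(n, L * M))

def beta_from_message(msg):
    """Sparse coefficient vector: one nonzero of value sqrt(P/L) per section,
    so that E||A beta||^2 / n = P under the +/-1 dictionary."""
    beta = np.zeros(L * M)
    for sec, idx in enumerate(msg):
        beta[sec * M + idx] = np.sqrt(P / L)
    return beta

# Encode a message (one index from {0,...,M-1} per section) and transmit.
msg = (2, 0, 3)
x = A @ beta_from_message(msg)
y = x + rng.normal(0.0, np.sqrt(sigma2), size=n)   # AWGN channel

# Least squares decoding: search all M^L valid messages for the codeword
# closest to y in Euclidean distance (feasible only at this toy scale).
decoded = min(itertools.product(range(M), repeat=L),
              key=lambda m: np.sum((y - A @ beta_from_message(m)) ** 2))
print("sent:", msg, "decoded:", decoded)
```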
