Abstract

We provide two results concerning the optimality of the stochastic-mutual information (SMI) decoder, which chooses the estimated message according to a posterior probability mass function that is proportional to the exponentiated empirical mutual information induced by the channel output sequence and the different codewords. First, we prove that the error exponents of the typical random codes under the optimal maximum likelihood (ML) decoder and the SMI decoder are equal. As a corollary to this result, we also show that the error exponents of the expurgated codes under the ML and the SMI decoders are equal. These results strengthen the well-known result due to Csiszár and Körner, according to which the ML and the maximum-mutual information (MMI) decoders achieve equal random-coding error exponents, since the error exponents of the typical random code and the expurgated code are strictly higher than the random-coding error exponents, at least at low coding rates. The universal optimality of the SMI decoder, in the random-coding error exponent sense, is easily proven by commuting the expectation over the channel noise and the expectation over the ensemble. This commutation can no longer be carried out when it comes to typical and expurgated exponents. Therefore, the proof of the universal optimality of the SMI decoder must be entirely different, and it turns out to be highly non-trivial.
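To make the decoding rule concrete, the following is a minimal sketch (not taken from the paper) of an SMI decoder for finite-alphabet sequences: it computes the empirical mutual information induced by the joint type of each codeword with the output sequence, and samples the message estimate from a posterior proportional to the exponentiated score. The function names and the scaling parameter `beta` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def empirical_mi(x, y, ax=2, ay=2):
    """Empirical mutual information (in nats) induced by the
    joint type of the sequence pair (x, y)."""
    n = len(x)
    joint = np.zeros((ax, ay))
    for a, b in zip(x, y):
        joint[a, b] += 1.0
    joint /= n
    px = joint.sum(axis=1, keepdims=True)   # empirical input marginal
    py = joint.sum(axis=0, keepdims=True)   # empirical output marginal
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

def smi_decode(codebook, y, beta=1.0, rng=None):
    """Stochastic decoder: sample message m with probability
    proportional to exp(n * beta * I_hat(x_m; y)), where I_hat is
    the empirical mutual information. beta is an illustrative
    inverse-temperature parameter."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    scores = np.array([empirical_mi(x, y) for x in codebook])
    w = np.exp(n * beta * (scores - scores.max()))  # stabilized exponentiation
    return int(rng.choice(len(codebook), p=w / w.sum()))
```

Note that the deterministic MMI decoder is recovered in the limit `beta -> inf`, where the posterior concentrates on the codeword maximizing the empirical mutual information.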
