Abstract

Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic (SD) code ensemble, a certain variant of the ordinary random binning code ensemble, in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error probability and the excess-rate probability vanish exponentially as the blocklength tends to infinity.
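
To make the construction above concrete, here is a minimal Python sketch of one natural reading of the semi-deterministic binning rule: type classes small enough to fit within the number of available bins are mapped one-to-one into distinct bins, while all remaining sequences are binned uniformly at random, as in ordinary random binning. The binary-source setting and the function name `sd_binning` are our own illustrative assumptions, not the paper's exact definitions.

```python
import itertools
import random
from collections import defaultdict

def sd_binning(n, R, seed=0):
    """Illustrative semi-deterministic (SD) binning for a binary source of blocklength n.

    Sequences in 'small' type classes (size at most the number of bins) are
    assigned one-to-one to distinct bins; all remaining sequences are binned
    uniformly at random, as in ordinary random binning.
    """
    rng = random.Random(seed)
    num_bins = int(round(2 ** (n * R)))
    # Group all binary n-sequences by their type class (here: number of ones).
    classes = defaultdict(list)
    for x in itertools.product((0, 1), repeat=n):
        classes[sum(x)].append(x)

    bin_of = {}
    for members in classes.values():
        if len(members) <= num_bins:
            # Small type class: deterministic one-to-one assignment to distinct bins.
            for i, x in enumerate(members):
                bin_of[x] = i
        else:
            # Large type class: ordinary uniform random binning.
            for x in members:
                bin_of[x] = rng.randrange(num_bins)
    return bin_of, num_bins

bins, M = sd_binning(n=10, R=0.5)   # e.g., 32 bins for n = 10, R = 0.5
print("number of bins:", M)
```

Under this reading, no two sequences of the same small type class ever share a bin, which is the mechanism behind the dramatic decrease in error probability mentioned above.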

Highlights

  • As is well known, the random coding error exponent is defined by $E_{\mathrm{r}}(R) = \lim_{n \to \infty} -\frac{1}{n}\log \mathbb{E}\left[P_{\mathrm{e}}(\mathcal{C}_n)\right]$ (1), where R is the coding rate, Pe(Cn) is the error probability of a codebook Cn, and the expectation is with respect to (w.r.t.) the randomness of Cn across the ensemble of codes (a small simulation sketch of this quantity appears after this list).

  • We prove in Theorem 4 that the error exponent of typical random codes (TRCs) under MAP decoding is attained by two universal decoders: the minimum entropy decoder and the stochastic entropy decoder, which is a generalized likelihood decoder (GLD) with an empirical conditional entropy metric; the minimum entropy decoder is used in the sketch following this list.

  • As far as we know, this result is the first of its kind in source coding: in other scenarios, it is the random coding bound that is attained by universal decoders, whereas here we find that the TRC exponent itself is universally achievable.
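
As a concrete companion to the definition in (1) and to the minimum entropy decoder mentioned above, the following Python sketch estimates the ensemble-average error probability E[Pe(Cn)] by Monte Carlo for ordinary random binning, under our own modeling assumptions: a doubly symmetric binary source (Y is X passed through a BSC with crossover probability p) and the minimum empirical conditional entropy decoder as the decoding rule. All function names and parameter values are illustrative, not taken from the paper.

```python
import itertools
import math
import random
from collections import defaultdict

def empirical_cond_entropy(x, y):
    """Empirical conditional entropy H_hat(X|Y) of the joint type of (x, y), in bits."""
    n = len(x)
    joint, marg_y = defaultdict(int), defaultdict(int)
    for a, b in zip(x, y):
        joint[(a, b)] += 1
        marg_y[b] += 1
    return sum((c / n) * math.log2(marg_y[b] / c) for (a, b), c in joint.items())

def min_entropy_decode(candidates, y):
    """Minimum empirical conditional entropy decoder over the sequences of one bin.

    Ties are broken arbitrarily (first minimizer in list order).
    """
    return min(candidates, key=lambda x: empirical_cond_entropy(x, y))

def estimate_avg_error(n=12, R=0.7, p=0.1, trials=500, seed=1):
    """Monte Carlo estimate of E[Pe(Cn)] for ordinary random binning.

    Each trial draws a fresh random binning codebook, a source block X^n
    (i.i.d. Bernoulli(1/2)) and side information Y^n = X^n xor Z^n with
    Z ~ Bernoulli(p), then decodes X^n from its bin index and Y^n.
    """
    rng = random.Random(seed)
    num_bins = int(round(2 ** (n * R)))
    seqs = list(itertools.product((0, 1), repeat=n))
    errors = 0
    for _ in range(trials):
        # Fresh codebook: independent, uniform bin index for every sequence.
        bins, bin_of = defaultdict(list), {}
        for s in seqs:
            b = rng.randrange(num_bins)
            bin_of[s] = b
            bins[b].append(s)
        # Source block and correlated side information.
        x = tuple(rng.randrange(2) for _ in range(n))
        y = tuple(xi ^ (rng.random() < p) for xi in x)
        errors += (min_entropy_decode(bins[bin_of[x]], y) != x)
    return errors / trials

print("estimated E[Pe]:", estimate_avg_error())
```

In principle, repeating this for increasing n and plotting −(1/n)·log of the estimate gives a crude numerical view of the exponent in (1), although the blocklengths reachable by such a brute-force simulation are far from the asymptotic regime.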


Summary

Introduction

We discuss the trade-offs between the error exponent and the excess-rate exponent for a typical random SD code, similarly to [13], but with a different notion of the excess-rate event, one that takes into account the available side information. In Theorem 5, we provide an expression for the optimal rate function that guarantees a required level of the error exponent of the typical random SD code.
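
As a rough numerical companion to the excess-rate discussion, the sketch below computes a Sanov-style excess-rate exponent, min{D(Q‖P) : R(Q) > R}, for a purely hypothetical type-dependent rate function R(Q) = H_Q(X) on a binary source. It does not capture the side-information-dependent notion of the excess-rate event used in this work; the rate function, names, and parameter values are our own assumptions for illustration only.

```python
import math

def binary_kl(q, p):
    """Binary divergence D(q || p) in bits."""
    total = 0.0
    for a, b in ((q, p), (1.0 - q, 1.0 - p)):
        if a > 0.0:
            total += a * math.log2(a / b)
    return total

def binary_entropy(q):
    """Binary entropy H(q) in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1.0 - q) * math.log2(1.0 - q)

def excess_rate_exponent(R, p=0.2, grid=10001):
    """Sanov-style exponent min{ D(Q||P) : R(Q) > R } over a grid of binary types,
    for the hypothetical type-dependent rate function R(Q) = H_Q(X)."""
    best = math.inf
    for i in range(grid):
        q = i / (grid - 1)
        if binary_entropy(q) > R:
            best = min(best, binary_kl(q, p))
    return best

# Example: source P = Bernoulli(0.2), rate threshold R = 0.9 bits per symbol.
print("excess-rate exponent:", excess_rate_exponent(0.9))
```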

Notation Conventions
Problem Formulation
Background
Error Exponents and Universal Decoding
Optimal Trade-off Functions
Upper Bound on the Error Exponent