Abstract
Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager’s functions (with and without cost constraints); (2) in large-deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α, both derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint.
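For concreteness, here is a sketch of the order-α quantities behind approaches (1) and (3) in their standard form (the notation is illustrative, not quoted from the paper): with the Rényi divergence
\[
D_\alpha(P\|Q) \;=\; \frac{1}{\alpha-1}\,\log \sum_{y} P^{\alpha}(y)\,Q^{1-\alpha}(y),
\]
the α-mutual information and the Augustin–Csiszár mutual information of order α are
\[
I_\alpha(P_X, P_{Y|X}) \;=\; \min_{Q_Y} D_\alpha\big(P_{XY}\,\big\|\,P_X \times Q_Y\big),
\qquad
I_\alpha^{\mathrm c}(P_X, P_{Y|X}) \;=\; \min_{Q_Y} D_\alpha\big(P_{Y|X}\,\big\|\,Q_Y\,\big|\,P_X\big),
\]
and, in the cost-free case, Gallager’s function is recovered as \(E_0(\rho, P_X) = \rho\, I_{1/(1+\rho)}(P_X, P_{Y|X})\).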
Highlights
The second phase of error exponent research was pioneered by Haroutunian [22] and Blahut [23], who infused the expressions for the error exponent functions with meaning by incorporating relative entropy.
Optimal codes of rate $R < C$ incur errors due to atypical channel behavior, and large-deviations analysis establishes that the overwhelmingly most likely such behavior can be explained as if the channel were supplanted by the channel with mutual information bounded by $R$ that is closest to the true channel in conditional relative entropy $D(Q_{Y|X}\,\|\,P_{Y|X}\,|\,P_X)$.
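In its usual statement for a fixed input distribution $P_X$ (a sketch under standard discrete memoryless assumptions; the label $E_{\mathrm{sp}}$ is used here for convenience), this reads
\[
E_{\mathrm{sp}}(R, P_X) \;=\; \min_{Q_{Y|X}\,:\; I(P_X, Q_{Y|X}) \,\le\, R} D\big(Q_{Y|X}\,\big\|\,P_{Y|X}\,\big|\,P_X\big),
\qquad
D\big(Q_{Y|X}\,\big\|\,P_{Y|X}\,\big|\,P_X\big) \;=\; \sum_{x} P_X(x)\, D\big(Q_{Y|X=x}\,\big\|\,P_{Y|X=x}\big).
\]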
This includes their variational representations in terms of conventional information measures such as conditional relative entropy and mutual information, which are simple to show in the main range of interest in applications to error exponents, namely $\alpha \in (0, 1)$.
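For the Augustin–Csiszár mutual information, one representation of this kind, consistent with Csiszár’s generalized cutoff-rate results (the exact form and weighting used in the paper may differ), is, for $\alpha \in (0,1)$,
\[
I_\alpha^{\mathrm c}(P_X, P_{Y|X}) \;=\; \min_{Q_{Y|X}} \Big\{ \tfrac{\alpha}{1-\alpha}\, D\big(Q_{Y|X}\,\big\|\,P_{Y|X}\,\big|\,P_X\big) \;+\; I(P_X, Q_{Y|X}) \Big\}.
\]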
Summary
The capacity C of a stationary memoryless channel is equal to the maximal symbolwise input–output mutual information. (b) Despite the large-deviations nature of the setup, none of the tools from that then-nascent field (other than the Chernoff bound) found their way into the first phase of the work on error exponents; in particular, relative entropy, introduced by Kullback and Leibler [14], failed to put in an appearance. To this day, the reliability function remains open at low rates even for the binary symmetric channel, despite a number of refined converse and achievability results (e.g., [15,16,17,18,19,20,21]) obtained since [9]. Our focus in this paper is not on converse/achievability techniques but on the role played by various information measures in the formulation of error exponent results.