Abstract

Rényi-type generalizations of entropy, relative entropy and mutual information have found numerous applications throughout information theory and beyond. While there is consensus that the ways A. Rényi generalized entropy and relative entropy in 1961 are the “right” ones, several candidates have been put forth as possible mutual informations of order α. In this paper we lend further evidence to the notion that a Bayesian measure of statistical distinctness introduced by R. Sibson in 1969 (closely related to Gallager’s E0 function) is the most natural generalization, lending itself to explicit computation and maximization, as well as closed-form formulas. This paper considers general (not necessarily discrete) alphabets and extends the major analytical results on the saddle-point and saddle-level of the conditional relative entropy to the conditional Rényi divergence. Several examples illustrate the main application of these results, namely, the maximization of α-mutual information with and without constraints.
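For orientation, here is a hedged sketch of the connection mentioned above, using standard discrete-alphabet conventions (the notation below is assumed, not quoted from the paper). Sibson's proposal admits the closed form

$$I_\alpha(X;Y) = \frac{\alpha}{\alpha-1}\,\log \sum_{y} \Bigl(\sum_{x} P_X(x)\, P_{Y|X}^{\alpha}(y|x)\Bigr)^{1/\alpha},$$

and, under the change of variable α = 1/(1+ρ), it is related to Gallager's E0 function by

$$\rho\, I_{\frac{1}{1+\rho}}(X;Y) = E_0(\rho, P_X) = -\log \sum_{y} \Bigl(\sum_{x} P_X(x)\, P_{Y|X}^{\frac{1}{1+\rho}}(y|x)\Bigr)^{1+\rho}.$$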

Highlights

  • The Rényi divergence of order α between two probability measures P and Q defined on the same measurable space,

    $$D_\alpha(P\|Q) = \frac{1}{\alpha-1}\,\log \int \left(\frac{\mathrm{d}P}{\mathrm{d}Q}(x)\right)^{\alpha} \mathrm{d}Q(x), \qquad (1)$$

    is a useful generalization of the relative entropy D(P‖Q) introduced by Rényi [1] in the discrete case (lim_{α↑1} D_α(P‖Q) = D(P‖Q)); a small numerical sketch of (1) follows this list

  • To explore the generalization that we study in this paper, namely α-mutual information, we need to consider the conditional versions of relative entropy and Rényi divergence

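As a quick numerical companion to (1), here is a minimal finite-alphabet sketch; it is not taken from the paper, and the helper name renyi_divergence together with the example distributions are assumptions for illustration only.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P||Q) in nats for finite-alphabet P, Q (assumes Q > 0 wherever P > 0)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        # alpha -> 1 limit: ordinary relative entropy D(P||Q)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
    # Discrete form of (1): (1/(alpha-1)) * log sum_x P^alpha(x) Q^(1-alpha)(x)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

# Sanity check: D_alpha(P||Q) approaches D(P||Q) as alpha -> 1
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
for a in (0.5, 0.9, 0.999, 1.0, 2.0):
    print(a, renyi_divergence(p, q, a))
```

For finite alphabets, (1) reduces to the sum inside the function above, and the printout shows the values approaching D(P‖Q) as α ↑ 1.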

Summary

Introduction

The Rényi divergence of order α between two probability measures defined on the same measurable space is given by (1). To explore the generalization that we study in this paper, namely α-mutual information, we need to consider the conditional versions of relative entropy and Rényi divergence. These are defined in general for two random transformations P_{Y|X} and Q_{Y|X} and an unconditional probability measure P_X. The main purpose of this paper is to generalize the saddle-point property of conditional relative entropy, and its applications to the maximization of mutual information, to the setting in which relative entropy is replaced by Rényi divergence. The fact that a saddle level exists (i.e., sup and min commute) even if there is no input probability measure that achieves the supremum of α-mutual information is established, thereby generalizing Kemperman’s [19] saddle-level result to Rényi divergence through a different route than that followed in [26].
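To make the maximization concrete, here is a minimal sketch assuming a discrete memoryless channel and Sibson's closed form for I_α; the name alpha_mutual_information, the binary symmetric channel, and the grid search are illustrative assumptions, not the paper's method.

```python
import numpy as np

def alpha_mutual_information(p_x, W, alpha):
    """Sibson's I_alpha(X;Y) in nats for input p_x and channel matrix W[x, y] (finite alphabets, alpha != 1)."""
    p_x, W = np.asarray(p_x, dtype=float), np.asarray(W, dtype=float)
    inner = (p_x[:, None] * W**alpha).sum(axis=0)          # sum_x P_X(x) P_{Y|X}^alpha(y|x)
    return float(alpha / (alpha - 1.0) * np.log(np.sum(inner**(1.0 / alpha))))

# Toy example: binary symmetric channel with crossover 0.1, alpha = 2;
# a crude grid search over Bernoulli(t) inputs approximates sup_{P_X} I_alpha(X;Y).
delta = 0.1
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])
alpha = 2.0
C_alpha = max(alpha_mutual_information([t, 1 - t], W, alpha)
              for t in np.linspace(0.0, 1.0, 1001))
print(C_alpha)
```

In this toy case the grid-search supremum approximates the saddle level C_α; the point of the paper's results is that sup and min commute in general, even when no input probability measure attains the supremum.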

Conditional Rényi Divergence Game
Saddle point
Minimax identity
Finding Cα
Proof of Theorem 1
Proof of Theorem 2
Conclusions