Abstract

Two correlated sources emit a pair of sequences, each of which is observed by a different encoder. Each encoder produces a rate-limited description of the sequence it observes, and the two descriptions are presented to a guessing device that repeatedly produces sequence pairs until correct. The number of guesses until correct is random, and it is required that it have a moment (of some prespecified order) that tends to one as the length of the sequences tends to infinity. The description rate pairs that allow this are characterized in terms of the Rényi entropy and the Arimoto–Rényi conditional entropy of the joint law of the sources. This solves the guessing analog of the Slepian–Wolf distributed source-coding problem. The achievability is based on random binning, which is analyzed using a technique by Rosenthal.

Highlights

  • In the Massey–Arıkan guessing problem [1,2], a random variable X is drawn from a finite set X according to some probability mass function (PMF) PX, and it has to be determined by making guesses of the form “Is X equal to x?” until the guess is correct.

  • The guessing order is determined by a guessing function G, which is a bijective function from X to {1, . . . , |X|}.

  • Arıkan [2] showed that for any ρ > 0, the ρth moment of the number of guesses required by an optimal guesser G* to guess X is bounded by
    2^{ρ H_{1/(1+ρ)}(X)} / (1 + ln |X|)^ρ ≤ E[G*(X)^ρ] ≤ 2^{ρ H_{1/(1+ρ)}(X)},  (1)
    where ln(·) denotes the natural logarithm and H_{1/(1+ρ)}(X) denotes the Rényi entropy of order 1/(1+ρ), which is defined in Section 3 ahead (refinements of (1) were recently derived in [3]).
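The two-sided bound in (1) can be checked numerically for a small PMF. The sketch below (illustrative only; the distribution and ρ are made-up assumptions) computes the Rényi entropy of order 1/(1+ρ) and the ρth moment of the guess count for the optimal guesser, which guesses outcomes in order of decreasing probability.

```python
from math import log, log2

def renyi_entropy(pmf, alpha):
    """Rényi entropy of order alpha (alpha != 1), in bits."""
    return log2(sum(p ** alpha for p in pmf)) / (1 - alpha)

def optimal_guessing_moment(pmf, rho):
    """rho-th moment of the number of guesses when guessing in
    order of decreasing probability (an optimal guessing order)."""
    ranked = sorted(pmf, reverse=True)
    return sum((k + 1) ** rho * p for k, p in enumerate(ranked))

pmf = [0.5, 0.25, 0.125, 0.125]   # example distribution (assumption)
rho = 1.0

H = renyi_entropy(pmf, 1 / (1 + rho))        # H_{1/(1+rho)}(X)
moment = optimal_guessing_moment(pmf, rho)   # E[G*(X)^rho]
lower = 2 ** (rho * H) / (1 + log(len(pmf))) ** rho
upper = 2 ** (rho * H)
print(lower <= moment <= upper)   # both sides of (1) hold
```

For this PMF, the ρth moment sits strictly between the two bounds, matching the fact that the bounds in (1) are tight only up to the polylogarithmic factor (1 + ln |X|)^ρ.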

Introduction

In the Massey–Arıkan guessing problem [1,2], a random variable X is drawn from a finite set. In the IID case, the task-encoding region is the set of all rate pairs (RX, RY) ∈ R²≥0 satisfying inequalities of the form RX ≥ Hρ(X), … [9] (Theorem 1). The rest of this paper is structured as follows: in Section 2, we review other guessing settings; in Section 3, we recall the Rényi information measures and prove some auxiliary lemmas; in Section 4, we prove the converse theorem; and in Section 5, we prove the achievability theorem, which is based on random binning and, in the case ρ > 1, is analyzed using a technique by Rosenthal [11].
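The random-binning scheme underlying the achievability result can be illustrated with a toy simulation (a sketch under assumed parameters, not the paper's construction: the joint PMF, blocklength, and bin counts below are made up). Each encoder assigns its observed sequence to a uniformly random bin; the guesser enumerates all source-sequence pairs consistent with the received bin pair in order of decreasing joint probability and counts the guesses until correct.

```python
import itertools
import random
from math import prod

random.seed(0)

# Toy IID joint source on {0,1} x {0,1} (assumed PMF for illustration).
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
n = 4                      # blocklength (assumption)
nbins_x, nbins_y = 4, 4    # roughly 2^{n R_X} and 2^{n R_Y} bins

def joint_prob(xs, ys):
    return prod(P[(x, y)] for x, y in zip(xs, ys))

seqs = list(itertools.product((0, 1), repeat=n))

# Random binning: each encoder hashes its sequence to a random bin.
bin_x = {s: random.randrange(nbins_x) for s in seqs}
bin_y = {s: random.randrange(nbins_y) for s in seqs}

def guess_count(xs, ys):
    """Rank of (xs, ys) among all pairs sharing its bin pair,
    enumerated in order of decreasing joint probability."""
    candidates = [(a, b) for a in seqs for b in seqs
                  if bin_x[a] == bin_x[xs] and bin_y[b] == bin_y[ys]]
    candidates.sort(key=lambda ab: -joint_prob(*ab))
    return candidates.index((xs, ys)) + 1

rho = 1.0
moment = sum(joint_prob(a, b) * guess_count(a, b) ** rho
             for a in seqs for b in seqs)
print(moment)   # rho-th moment of the number of guesses for this binning
```

As the rates (i.e., bin counts) grow, fewer pairs collide in the same bin pair and the moment approaches one, which is the qualitative behavior the achievability proof establishes for rate pairs inside the region.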

Related Work
Preliminaries
Converse
Achievability