Abstract

In the correlated sampling problem, two players are given probability distributions $P$ and $Q$, respectively, over the same finite set, with access to shared randomness. Without any communication, the two players are each required to output an element sampled according to their respective distributions, while trying to minimize the probability that their outputs disagree. A well-known strategy due to Kleinberg–Tardos and Holenstein, with a close variant (for a similar problem) due to Broder, solves this task with disagreement probability at most $2 \delta/(1+\delta)$, where $\delta$ is the total variation distance between $P$ and $Q$. This strategy has been used in several different contexts, including sketching algorithms, approximation algorithms based on rounding linear programming relaxations, the study of parallel repetition, and cryptography. In this paper, we give a surprisingly simple proof that this strategy is essentially optimal. Specifically, for every $\delta \in (0,1)$, we show that any correlated sampling strategy incurs a disagreement probability of essentially $2\delta/(1+\delta)$ on some inputs $P$ and $Q$ with total variation distance at most $\delta$. This partially answers a recent question of Rivest. Our proof is based on studying a new problem that we call constrained agreement. Here, the two players are given subsets $A \subseteq [n]$ and $B \subseteq [n]$, respectively, and their goal is to output an element $i \in A$ and $j \in B$, respectively, while minimizing the probability that $i \neq j$. We prove tight bounds for this question, which in turn imply tight bounds for correlated sampling. Though we settle basic questions about the two problems, our formulation leads to more fine-grained questions that remain open.
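The $2\delta/(1+\delta)$ strategy mentioned above is commonly described as rejection sampling over a shared stream of random points: both players read the same sequence of pairs $(x, h)$ with $x$ uniform over $\Omega$ and $h$ uniform in $[0,1)$, and each outputs the first $x$ with $h < P(x)$ (respectively $h < Q(x)$). The following is a minimal Python sketch under that interpretation; the function names and the finite truncation of the stream are illustrative choices, not from the paper.

```python
import random

def sample_with_shared_points(dist, shared_points):
    """Output the first shared point (x, h) that dist accepts, i.e. h < dist(x).

    Conditioned on acceptance, x is distributed according to dist, so each
    player's marginal output distribution is correct regardless of the other.
    """
    for x, h in shared_points:
        if h < dist.get(x, 0.0):
            return x
    return None  # vanishingly unlikely once the shared stream is long enough

def run_once(P, Q, omega, rng, n_points=200):
    """One round: both players scan the *same* shared random points."""
    shared = [(rng.choice(omega), rng.random()) for _ in range(n_points)]
    return sample_with_shared_points(P, shared), sample_with_shared_points(Q, shared)
```

For example, with $P$ uniform on $\{a, b\}$ and $Q$ concentrated on $a$ (so $\delta = 1/2$), the empirical disagreement rate of this strategy stays within the $2\delta/(1+\delta) = 2/3$ bound.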

Highlights

  • We study correlated sampling, a basic task, variants of which have been considered in the context of sketching algorithms [2], approximation algorithms based on rounding linear programming relaxations [7, 3], the study of parallel repetition [6, 12, 1] and cryptography [13].

  • A correlated sampling strategy is formally defined below, where ∆Ω denotes the set of all probability distributions over Ω and (R, F, μ) denotes the probability space corresponding to the randomness shared by Alice and Bob.

  • Note that since the constrained agreement problem is defined with respect to a probability distribution D on pairs of sets, we can require, without loss of generality, that the strategies (f, g) be deterministic.
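For the constrained agreement problem described above, one natural shared-randomness strategy is for both players to fix a common random ordering of $[n]$ and each output the first element of their own set under that ordering. The sketch below illustrates this; the variable names and the specific sets are illustrative assumptions, not taken from the paper.

```python
import random

def first_in_set(order, s):
    """Output the first element of the shared random ordering that lies in s."""
    for i in order:
        if i in s:
            return i
    return None  # only if s is empty

# Both players see the same random permutation of [n] = {0, ..., n-1}.
rng = random.Random(1)
n = 6
order = list(range(n))
rng.shuffle(order)

A = {0, 1, 2}  # Alice's set
B = {1, 2, 3}  # Bob's set
i, j = first_in_set(order, A), first_in_set(order, B)
```

Under this strategy the outputs disagree exactly when the earliest element of $A \cup B$ in the shared ordering lies in the symmetric difference $A \triangle B$, so the disagreement probability is $|A \triangle B| / |A \cup B|$ (here $2/4 = 1/2$).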

Summary

Introduction

We study correlated sampling, a basic task, variants of which have been considered in the context of sketching algorithms [2], approximation algorithms based on rounding linear programming relaxations [7, 3], the study of parallel repetition [6, 12, 1] and cryptography [13]. A correlated sampling strategy for a finite set Ω with error ε : [0, 1] → [0, 1] is specified by a probability space (R, F, μ) and a pair of functions f, g : ∆Ω × R → Ω such that, for all P, Q ∈ ∆Ω with dTV(P, Q) ≤ δ, the following hold: when r ∼ μ, the output f(P, r) is distributed according to P and g(Q, r) is distributed according to Q, and the two outputs disagree with probability at most ε(δ). There exists a simple strategy whose error can be bounded by roughly twice the total variation distance (and, in particular, does not degrade with the size of Ω). Variants of this strategy have been rediscovered multiple times in the literature, yielding the following theorem. There also turns out to be a surprising strategy that achieves better error than Theorem 1.2 in a very special case.
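The two requirements on a correlated sampling strategy can be written out explicitly; the display below is a standard restatement in the notation above, with r drawn from the shared probability space (R, F, μ):

```latex
% Correctness of the marginals: for every \omega \in \Omega,
\Pr_{r \sim \mu}\bigl[ f(P, r) = \omega \bigr] = P(\omega)
\quad \text{and} \quad
\Pr_{r \sim \mu}\bigl[ g(Q, r) = \omega \bigr] = Q(\omega);
% Low disagreement:
\Pr_{r \sim \mu}\bigl[ f(P, r) \neq g(Q, r) \bigr] \le \varepsilon(\delta).
```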

Lower bound on correlated sampling
Correlated sampling over a fixed set of finite size
Case of negatively correlated sets
Fine-grained understanding of G-restricted correlated sampling
Correlated sampling for infinite spaces
