Abstract

A simple averaging argument shows that given a randomized algorithm A and a function f such that for every input x, Pr[A(x) = f(x)] ≥ 1 − ρ (where the probability is over the coin tosses of A), there exists a non-uniform deterministic algorithm B “of roughly the same complexity” such that Pr[B(x) = f(x)] ≥ 1 − ρ (where the probability is over a uniformly chosen input x). This implication is often referred to as “the easy direction of Yao’s lemma” and can be thought of as “weak derandomization” in the sense that B is deterministic but only succeeds on most inputs. The implication follows as there exists a fixed value r′ for the random coins of A such that “hardwiring r′ into A” produces a deterministic algorithm B. However, this argument does not give a way to explicitly construct B. In this paper, we consider the task of proving uniform versions of the implication above. That is, how to explicitly construct a deterministic algorithm B when given a randomized algorithm A. We prove such derandomization results for several classes of randomized algorithms. These include randomized communication protocols, randomized decision trees (here we improve a previous result by Zimand), randomized streaming algorithms, and randomized algorithms computed by polynomial-size constant-depth circuits. Our proof uses an approach suggested by Goldreich and Wigderson and “extracts randomness from the input”. We introduce a new type of (seedless) extractors that extract randomness from distributions that are “recognizable” by the given randomized algorithm. We show that such extractors produce randomness that is in some sense not correlated with the input.
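The averaging argument behind the easy direction can be checked concretely on a toy example. The sketch below (all names and the specific choice of f and A are illustrative, not from the paper) verifies that when a randomized algorithm A errs with probability at most ρ over its coins on every input, some fixed coin string r′ yields a deterministic algorithm B that errs on at most a ρ fraction of inputs:

```python
import itertools

# Toy setting: inputs and coin strings are 4-bit strings.
# f, A, and rho below are illustrative stand-ins, not from the paper.
n = 4
inputs = list(itertools.product([0, 1], repeat=n))
coins = list(itertools.product([0, 1], repeat=n))

def f(x):
    return sum(x) % 2  # parity, as a stand-in target function

def A(x, r):
    # Errs exactly when r is all-zeros, i.e. on a 1/16 fraction of coins.
    if r == (0,) * n:
        return 1 - f(x)
    return f(x)

rho = 1 / len(coins)

# Check the hypothesis: for every input x, Pr_r[A(x,r) != f(x)] <= rho.
for x in inputs:
    err_over_coins = sum(A(x, r) != f(x) for r in coins) / len(coins)
    assert err_over_coins <= rho

# Averaging: the mean over r of the error over x equals the mean over x
# of the error over r, which is <= rho.  Hence SOME fixed r' achieves
# error over x at most rho; hardwiring it gives the deterministic B.
best_r = min(coins, key=lambda r: sum(A(x, r) != f(x) for x in inputs))
best_err = sum(A(x, best_r) != f(x) for x in inputs) / len(inputs)
assert best_err <= rho
```

Note that the argument only proves r′ exists; finding `best_r` above requires enumerating all coin strings, which mirrors the paper's point that the averaging argument is non-explicit.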
