Abstract

We study the problem of extracting random bits from weak sources that are sampled by algorithms with limited memory. This model of small-space sources was introduced by Kamp, Rao, Vadhan and Zuckerman (STOC'06), and falls into a line of research initiated by Trevisan and Vadhan (FOCS'00) on extracting randomness from weak sources that are sampled by computationally bounded algorithms. Our main results are the following.

1) We obtain near-optimal extractors for small-space sources in the polynomial error regime. For space $s$ sources over $n$ bits, our extractors require just $k \geq s \cdot \mathrm{polylog}(n)$ entropy. This is an exponential improvement over the previous best result, which required entropy $k \geq s^{1.1} \cdot 2^{\log^{0.51} n}$ (Chattopadhyay and Li, STOC'16).

2) We obtain improved extractors for small-space sources in the negligible error regime. For space $s$ sources over $n$ bits, our extractors require entropy $k > n^{1/2+\delta} \cdot s^{1/2-\delta}$, whereas the previous best result required $k > n^{2/3+\delta} \cdot s^{1/3-\delta}$ (Chattopadhyay, Goodman, Goyal and Li, STOC'20).

To obtain our first result, the key ingredient is a new reduction from small-space sources to affine sources, which allows us to simply apply a good affine extractor. To obtain our second result, we must develop some new machinery, since we do not have low-error affine extractors that work for low entropy. Our main tool is a significantly improved extractor for adversarial sources, built via a simple framework that makes novel use of a certain kind of leakage-resilient extractor (known as a cylinder intersection extractor), combined with a general type of extremal design. Our key ingredient is the first derandomization of these designs, which we obtain using new connections to coding theory and additive combinatorics.
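For context, here is a brief sketch of the standard definitions behind these statements, as they are usually given in the small-space source literature; this is an illustrative summary rather than a statement from the paper, and the notation $H_\infty$, $U_m$, and $\varepsilon$ below is standard rather than taken from the abstract. A space-$s$ source is sampled by an algorithm whose working memory holds $s$ bits, i.e., by a width-$2^s$ branching program that outputs one bit per step, and an extractor for such sources must be close to uniform on every such source that has enough min-entropy:

\[
\mathrm{Ext}\colon \{0,1\}^n \to \{0,1\}^m
\ \text{ is a $(k,\varepsilon)$-extractor for space-$s$ sources if}
\]
\[
\text{for every space-$s$ source } X \text{ on } \{0,1\}^n \text{ with } H_\infty(X) \ge k:\qquad
\bigl\lVert \mathrm{Ext}(X) - U_m \bigr\rVert_{\mathrm{TV}} \le \varepsilon,
\]
\[
\text{where } H_\infty(X) = \min_{x} \log_2 \frac{1}{\Pr[X = x]}
\ \text{ and } U_m \text{ is uniform on } \{0,1\}^m.
\]

In this language, result 1) constructs such extractors whenever $k \geq s \cdot \mathrm{polylog}(n)$ with polynomially small $\varepsilon$, and result 2) constructs them whenever $k > n^{1/2+\delta} \cdot s^{1/2-\delta}$ with negligible $\varepsilon$.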
