Abstract

Many randomized algorithms run successfully even when the random choices they use are not fully independent. For the analysis, some limited amount of independence, such as k-wise independence for some fixed k, often suffices. In these cases it is possible to replace the exponentially large sample spaces required to simulate all random choices of the algorithms by ones of polynomial size. This enables one to derandomize the algorithms, that is, convert them into deterministic ones, by searching the relatively small sample spaces deterministically: if a random variable attains a certain value with positive probability, then we can search for, and find, a point at which it attains such a value.

The observation that n − 1 pairwise independent nontrivial random variables can be defined over a sample space of size n was made long ago; see [11], [23]. The pairwise independent case has been a crucial ingredient in the construction of efficient hashing schemes in [14], [17]. A more general construction, of small sample spaces supporting k-wise independent random variables, appeared in [19]. For the case of binary, uniform random variables this is treated under the name orthogonal arrays in the coding-theory literature; see, e.g., [27]. Most constructions are based on simple properties of polynomials over a finite field or on certain explicit error-correcting codes.

Several researchers realized that constructions of this type are useful for derandomizing parallel algorithms, since one may simply check all points of the sample space in parallel. Papers pursuing this idea include [1], [22], [24], and papers dealing with the properties of the constructions in which the sample spaces are not necessarily uniform include [20], [21]. It can be shown that for fixed k, the minimum size of a sample space supporting n k-wise independent random variables is Ω(n^⌊k/2⌋).
For the binary uniform case this is essentially the Rao bound [30] (see also [12], [16]), whereas for the general case it is shown in [?].

The above techniques have been applied in numerous papers dealing with derandomization, and we make no attempt to list all of them here. Examples include derandomized parallel geometric algorithms in [10], [18], and various parallel graph algorithms [1], [9], [22], [24], [28]. It turned out that some variants of the techniques are also useful in derandomizing sequential algorithms.

In the talk I will survey the basic ideas in the constructions of small sample spaces and discuss some of the applications, focusing on various recent results that illustrate the somewhat surprising relevance of the techniques to the solutions of several algorithmic problems.
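As a concrete illustration of the polynomial-based constructions mentioned above, the sketch below builds a sample space of size p^k supporting n ≤ p k-wise independent random variables, each uniform on {0, …, p − 1}: a sample point is the vector of evaluations of a polynomial of degree below k over GF(p) at n fixed field elements. This is the standard polynomial construction in outline only; the function name and interface are illustrative choices, and p is assumed prime.

```python
import itertools

def kwise_space(p, k, n):
    """Sample space of size p**k supporting n <= p k-wise independent
    variables, each uniform on {0, ..., p-1}. Each point is the vector
    of evaluations of a degree < k polynomial over GF(p) at the field
    elements 0, ..., n-1. (Illustrative sketch; p assumed prime.)"""
    assert n <= p
    space = []
    for coeffs in itertools.product(range(p), repeat=k):
        # Evaluate the polynomial sum_j coeffs[j] * x**j (mod p) at each x.
        point = tuple(
            sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
            for x in range(n)
        )
        space.append(point)
    return space

# 9 points supporting 3 pairwise (k = 2) independent ternary variables.
space = kwise_space(3, 2, 3)
```

Since any k distinct evaluation points determine the polynomial uniquely (the Vandermonde matrix is invertible over the field), any k of the variables are jointly uniform, which is exactly k-wise independence.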

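To illustrate the derandomization-by-search step, here is a hedged sketch of the classical large-cut application: under pairwise independent uniform bits each edge of a graph is cut with probability exactly 1/2, so the expected cut size is |E|/2, and exhaustively searching a pairwise independent sample space of size O(n) must contain a point achieving a cut of at least that size. The XOR-of-seed-bits construction and the function names below are illustrative assumptions, not taken from the abstract.

```python
import itertools

def pairwise_space(m):
    # Sample space of size 2**m supporting n = 2**m - 1 pairwise
    # independent uniform bits: each variable is the XOR of a distinct
    # nonempty subset of the m seed bits.
    subsets = [s for r in range(1, m + 1)
               for s in itertools.combinations(range(m), r)]
    return [tuple(sum(seed[i] for i in s) % 2 for s in subsets)
            for seed in itertools.product([0, 1], repeat=m)]

def derandomized_large_cut(n, edges):
    """Deterministically find a cut of size >= |E|/2 by checking every
    point of a pairwise independent sample space (hypothetical helper
    illustrating the search technique described in the abstract)."""
    m = 1
    while 2 ** m - 1 < n:
        m += 1
    best = max(pairwise_space(m),
               key=lambda p: sum(p[u] != p[v] for u, v in edges))
    return [v for v in range(n) if best[v] == 1]
```

The search visits only 2^m = O(n) points instead of all 2^n assignments, and each point can be checked independently, which is why the same idea parallelizes so readily.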