One of the most important theorems of statistics is the central limit theorem. Roughly, it states that a sum of (stochastically) independent and identically distributed random variables has an approximately normal distribution, with the approximation improving as the number of random variables increases. A precursor of this theorem is the DeMoivre-Laplace theorem, which states that a sum of independent Bernoulli random variables (with common mean) is approximately normal. The latter theorem was first proved in 1718 by DeMoivre and then generalized by Laplace in 1812. The more general result, known as the central limit theorem, is attributed to Lindeberg (1922). For additional information regarding these theorems and the terms used in this article, the reader may consult any text in mathematical statistics (see, for example, Hoel, Port, and Stone [1]).

A Bernoulli random variable is one that takes on only two values: 1 with probability p, and 0 with probability q = 1 - p. The mean of such a Bernoulli random variable is p, and its variance is pq. These variables arise in many settings; for example, the two values may encode male-female, yes-no, off-on, candidate A-candidate B, and so on. The DeMoivre-Laplace theorem is formally stated below.
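As a numerical sketch of the DeMoivre-Laplace approximation (not part of the original article), one can compare the exact probabilities for a sum of n Bernoulli trials, which follow a Binomial(n, p) distribution with mean np and variance npq, against the density of the normal distribution with the same mean and variance. The function names below are illustrative choices, and the example uses only the Python standard library.

```python
import math

def binom_pmf(n, k, p):
    """Exact Binomial(n, p) probability of observing k successes,
    i.e. the distribution of a sum of n Bernoulli(p) variables."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Density at x of the normal distribution with mean mu and
    standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (
        sigma * math.sqrt(2 * math.pi)
    )

# A sum of n = 100 fair-coin Bernoulli trials (p = 1/2) has mean
# np = 50 and variance npq = 25, so sigma = 5. Near the mean, the
# exact binomial probabilities and the normal density agree to
# roughly three decimal places.
n, p = 100, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
for k in (45, 50, 55):
    print(k, round(binom_pmf(n, k, p), 4), round(normal_pdf(k, mu, sigma), 4))
```

Repeating the comparison with larger n shows the two columns drawing closer, which is the sense in which the approximation improves as the number of Bernoulli variables increases.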