Abstract

The Central Limit Theorem is one of the most impressive achievements of probability theory. From a simple description requiring minimal hypotheses, we are able to deduce precise results. The Central Limit Theorem thus serves as the basis for much of statistical theory. The idea is simple: let X_1, …, X_j, … be a sequence of i.i.d. random variables with finite variance. Let S_n = ∑_{j=1}^n X_j. Then for n large, L(S_n) ≈ N(nμ, nσ²), where E{X_j} = μ and σ² = Var(X_j) (all j). The key observation is that absolutely nothing (except a finite variance) is assumed about the distribution of the random variables (X_j)_{j≥1}. Therefore, if one can assume that the random variable in question is the sum of many i.i.d. random variables with finite variances, then one can infer that the random variable's distribution is approximately Gaussian. Next one can use data and perform statistical tests to estimate μ and σ², and then one knows essentially everything.
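As an illustration (a sketch not in the original abstract), the statement L(S_n) ≈ N(nμ, nσ²) can be checked by simulation. Here the X_j are taken to be i.i.d. Uniform(0, 1), for which μ = 1/2 and σ² = 1/12; the function name `simulate_sums` and the parameter choices are illustrative, not from the source.

```python
import random
import statistics

def simulate_sums(n=1000, trials=5000, seed=0):
    """Draw `trials` independent copies of S_n = X_1 + ... + X_n,
    where the X_j are i.i.d. Uniform(0, 1)."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n)) for _ in range(trials)]

n = 1000
mu, sigma2 = 0.5, 1 / 12  # mean and variance of Uniform(0, 1)
sums = simulate_sums(n=n)

# By the CLT, L(S_n) ≈ N(n*mu, n*sigma2) for large n,
# so the empirical mean and variance of the simulated sums
# should be close to n*mu = 500 and n*sigma2 ≈ 83.3.
emp_mean = statistics.fmean(sums)
emp_var = statistics.variance(sums)
print(emp_mean, n * mu)
print(emp_var, n * sigma2)
```

Any distribution with finite variance could be substituted for Uniform(0, 1); only μ and σ² in the comparison would change, which is exactly the point of the theorem.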
