Abstract

The theory of statistical inference clearly describes the benefits of large samples. The larger the sample size, the smaller the standard errors of the estimated population parameters (i.e. the precision of the estimation improves) and the greater the power of statistical tests in hypothesis testing. Today's easy access not only to large samples (e.g. web panels) but also to advanced, user-friendly statistical software may obscure the potential threats to statistical inference based on large samples. Some researchers seem to be under the illusion that large samples can reduce not only random errors, which are typical of any sampling technique, but also non-random errors. In addition, the role of a large sample size is an important aspect of the issue of statistical significance (the p-value), much discussed in recent years, and of the problems related to its determination and interpretation. The aim of the paper is to present and discuss the consequences of focusing solely on the advantages of large samples while ignoring the threats and challenges they pose to statistical inference. The study shows that a large sample collected using a non-random sampling technique cannot be an alternative to random sampling; this applies in particular to online panels of volunteers willing to participate in a survey. The paper also shows that the sampling error may contain a non-random component, which should not be regarded as a function of the sample size. As for the contemporary challenges related to hypothesis testing, the study discusses and exemplifies the scientific and ethical aspects of searching for statistical significance using large samples or multiple sampling.
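
The two pitfalls named in the abstract can be made concrete with a minimal simulation sketch. The example below (Python with NumPy and SciPy; the specific effect size, sample sizes, and number of repeated draws are illustrative assumptions, not values from the paper) shows, first, that with a very large sample a practically negligible effect yields a small p-value, and second, that repeatedly drawing samples from a population with no effect until one sample "reaches significance" inflates the false-positive rate far above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# (1) Large n makes a trivial effect "statistically significant".
# Two groups differing by 0.005 standard deviations: practically nothing,
# yet with a million observations per group the t-test rejects H0.
n = 1_000_000
a = rng.normal(loc=0.000, scale=1.0, size=n)
b = rng.normal(loc=0.005, scale=1.0, size=n)
t, p = stats.ttest_ind(a, b)
print(f"large-n test: true effect = 0.005 SD, p = {p:.2g}")  # typically p < 0.05

# (2) Multiple sampling: draw repeated samples from ONE population with no
# effect at all, and declare success if ANY draw crosses p < 0.05.
trials, draws_per_trial = 1_000, 20
hits = 0
for _ in range(trials):
    for _ in range(draws_per_trial):
        x = rng.normal(0, 1, 50)
        y = rng.normal(0, 1, 50)
        if stats.ttest_ind(x, y).pvalue < 0.05:
            hits += 1
            break
print(f"false-positive rate after up to 20 draws: {hits / trials:.2f}")
# Roughly 1 - 0.95**20 ≈ 0.64 rather than the nominal 0.05.
```

In the first case the p-value answers only whether an effect is exactly zero, not whether it matters; in the second, the nominal significance level no longer describes the actual error rate once the sample that "worked" is selected after the fact.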
