Abstract

Conventional practice is to draw inferences from all available data and research results. Yet when a scientific literature is plagued by publication selection bias, simply discarding the vast majority of empirical results can actually improve statistical inference and estimation. Simulations demonstrate that, when statistical significance is used as a criterion for reporting or publishing estimates, discarding 90% of the published findings greatly reduces publication selection bias and is often more efficient than conventional summary statistics. Improving statistical estimation and inference by removing so much data runs against statistical theory and practice; hence, it is paradoxical. We investigate a very simple method that averages the most precise 10% of the reported estimates (the ‘Top10’), thereby reducing the effects of publication bias and improving the efficiency of summary estimates of accumulated empirical research results. In the process, the critical importance of precision (the inverse of an estimate’s standard error) as a measure of a study’s quality is brought to light. Reviewers and journal editors should use precision, where possible, as one objective measure of a study’s quality.
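To make the ‘Top10’ rule concrete, the following is a minimal Python sketch under the abstract’s definition: rank reported estimates by precision (1/SE), keep the most precise 10%, and average them. The function name, the rounding of 10% up to a whole number of estimates, and the use of an unweighted mean over the retained subset are illustrative assumptions, not the authors’ reference implementation.

```python
import numpy as np

def top10_average(estimates, standard_errors):
    """Average the most precise 10% of reported estimates ('Top10').

    Precision is the inverse of each estimate's standard error, so the
    estimates with the smallest standard errors are the ones retained.
    Sketch only: the 10% cutoff is rounded up so at least one estimate
    survives, and the retained estimates are averaged without weights.
    """
    estimates = np.asarray(estimates, dtype=float)
    standard_errors = np.asarray(standard_errors, dtype=float)

    precision = 1.0 / standard_errors                 # precision = 1 / SE
    k = max(1, int(np.ceil(0.10 * len(estimates))))   # size of the top 10%
    top = np.argsort(precision)[-k:]                  # indices of the k most precise
    return estimates[top].mean()                      # unweighted Top10 mean
```

For example, with 20 reported estimates this keeps the two with the smallest standard errors and returns their simple average, in contrast to a conventional summary statistic computed over all 20.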
