Abstract

Determining an appropriate sample size is a critical planning decision in quantitative empirical research. In recent years, there has been growing concern that researchers have focused excessively on statistical significance in large sample studies to the detriment of effect sizes. This research addresses a related concern at the other end of the spectrum. We argue that a combination of bias in significant estimates obtained from small samples (relative to their population values) and an editorial preference for publishing significant results compounds to produce marked bias in published small sample studies. We then present a simulation study covering a variety of statistical techniques commonly used to examine structural equation models with latent variables. Our results support our contention that significant results obtained from small samples are likely biased and should be viewed with skepticism. We also argue that researchers should conduct and report a priori power analyses to understand the behavior of parameter estimates under the small sample conditions we examine.
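The core mechanism the abstract describes, significance filtering inflating published effect estimates from underpowered studies, can be illustrated with a minimal Monte Carlo sketch. This is a generic two-group t-test illustration, not the authors' actual simulation design (which involves structural equation models); the true effect size, sample size, and number of trials below are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.3   # assumed true standardized mean difference
n = 20         # small sample size per group
trials = 5000  # number of simulated studies

all_est, sig_est = [], []
for _ in range(trials):
    a = rng.normal(true_d, 1.0, n)  # treatment group
    b = rng.normal(0.0, 1.0, n)     # control group
    t, p = stats.ttest_ind(a, b)
    # Cohen's d with pooled variance
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    all_est.append(d)
    if p < 0.05:  # only "publishable" significant results survive
        sig_est.append(d)

print(f"mean effect, all simulated studies:  {np.mean(all_est):.2f}")
print(f"mean effect, significant only:       {np.mean(sig_est):.2f}")
```

Across all simulated studies the average estimate recovers the true effect, but the average among significant results alone is substantially larger: with this sample size, only samples whose observed effect happens to be well above the true value clear the significance threshold, which is precisely the publication-driven bias the paper examines.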
