Abstract

In this preregistered study, we investigated whether the statistical power of a study is higher when researchers are asked to conduct a formal power analysis before collecting data. We compared the sample size descriptions from two sources: (i) a sample of pre-registrations created according to the guidelines of the Center for Open Science Preregistration Challenge (PCRs) and a sample of institutional review board (IRB) proposals from the Tilburg School of Behavior and Social Sciences, both of which include a recommendation to conduct a formal power analysis, and (ii) a sample of pre-registrations created according to the guidelines for Open Science Framework Standard Pre-Data Collection Registrations (SPRs), which give no guidance on sample size planning. We found that the PCRs and IRB proposals more often included sample size decisions based on power analyses (72%) than the SPRs did (45%). However, this did not result in larger planned sample sizes: the determined sample size of the PCRs and IRB proposals (Md = 90.50) was not higher than that of the SPRs (Md = 126.00; W = 3389.5, p = 0.936). Typically, power analyses in the registrations were conducted with G*Power, assuming a medium effect size, α = .05, and a power of .80. Only 20% of the power analyses contained enough information to fully reproduce the results, and only 62% of these power analyses pertained to the main hypothesis test in the pre-registration. We therefore see ample room for improvement in the quality of the registrations and offer several recommendations to that end.
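The typical power-analysis inputs mentioned above (medium effect size, α = .05, power = .80) can be reproduced with a simple normal-approximation sketch; the function below is our illustration, not the authors' or G*Power's code, and the normal approximation slightly undershoots the exact noncentral-t result.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample t-test with standardized effect size d (Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (d = 0.5), alpha = .05, power = .80:
print(n_per_group(0.5))  # 63 per group (G*Power's exact t-based result is 64)
```

This is the computation a reader could use to check whether a reported power analysis is internally consistent, which is exactly what only 20% of the sampled registrations allowed.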

Highlights

  • Many studies in the psychological literature are underpowered [1,2,3,4]

  • In our preregistered study, we investigated whether more power analyses are reported when the guidelines for a pre-registration or institutional review board (IRB) proposal recommend doing a power analysis, and whether power analyses were associated with larger intended sample sizes

  • We found support for our first hypothesis: the Preregistration Challenge registrations (PCRs) and IRB proposals reported more power-based sample size decisions than the Standard Pre-Data Collection Registrations (SPRs)


Introduction

Many studies in the psychological literature are underpowered [1,2,3,4]. In light of the typical effect sizes and sample sizes seen in the literature, the statistical power of psychological studies is estimated to be around .50 [1] or even .35 [2,3].
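A power estimate in this range can be illustrated with a back-of-the-envelope normal approximation. The effect size and group size below (d = 0.4, 30 participants per group) are hypothetical values chosen for illustration, not figures from the cited studies:

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided, two-sample t-test
    (the negligible opposite-tail rejection probability is ignored)."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality under the alternative
    return 1 - norm.cdf(z_crit - ncp)

# Illustrative values: d = 0.4 with 30 participants per group
print(round(approx_power(0.4, 30), 2))  # 0.34
```

Under these assumptions the chance of detecting a true effect is roughly one in three, in line with the estimates above.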
