Abstract

Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.

Highlights

  • There is growing consideration of the possibility that many published research findings may be false, indicate effects that are in the opposite direction to any true effect, or indicate effects that are substantially inflated compared to any true effect

  • We included 660 meta-analyses reporting a relationship between a risk factor or other parameter and a disease outcome in one of the three domains of interest

  • This pattern is consistent with the distribution previously reported by Button et al. [2]; these results are shown in figure 1. Restricting the analysis to only those meta-analyses that indicated a statistically significant pooled effect size estimate, we found that median statistical power was higher: 38% for somatic diseases, 46% for psychiatric disorders and 50% for neurological diseases



Introduction

There is growing consideration of the possibility that many published research findings may be false, indicate effects that are in the opposite direction to any true effect, or indicate effects that are substantially inflated compared to any true effect [1]. All of these will increase the likelihood that research findings will prove difficult to replicate [2]. Low statistical power (arising, for example, from low sample size of studies, small effects being investigated, or both) adversely impacts the likelihood that a statistically significant finding reflects a true effect and (if the effect is real) increases the likelihood that the estimate of the magnitude of that effect is inflated (known as a type M or magnitude error) or in the opposite direction (a type S or sign error) relative to the true effect [6]
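The relationship between power, type S and type M errors described above can be illustrated numerically. The following sketch (not the authors' code; the effect sizes and sample sizes are hypothetical) computes the power of a two-sided two-sample test under a normal approximation, and simulates how often a statistically significant estimate has the wrong sign (type S) or an exaggerated magnitude (type M) when power is low:

```python
# Hypothetical illustration: power, type S and type M errors for a
# two-sample comparison, using a normal approximation to the test statistic.
import numpy as np
from scipy.stats import norm

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a true
    standardized effect d with n participants per group."""
    se = np.sqrt(2.0 / n)                 # SE of the effect estimate (standardized units)
    z_crit = norm.ppf(1 - alpha / 2)      # critical value for two-sided alpha
    z = d / se
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

def type_s_m(d, n, alpha=0.05, n_sim=100_000, seed=0):
    """Simulate sign (type S) and magnitude (type M) errors among
    statistically significant results."""
    rng = np.random.default_rng(seed)
    se = np.sqrt(2.0 / n)
    est = rng.normal(d, se, n_sim)        # sampling distribution of the estimate
    sig = np.abs(est / se) > norm.ppf(1 - alpha / 2)
    type_s = np.mean(est[sig] < 0)        # wrong sign, conditional on significance
    type_m = np.mean(np.abs(est[sig])) / d  # expected inflation factor
    return type_s, type_m

# A small true effect (d = 0.2) studied with 30 participants per group
# yields power well below the conventional 80% minimum, and significant
# estimates that substantially overstate the true effect.
print(f"power = {power_two_sample(0.2, n=30):.2f}")
ts, tm = type_s_m(0.2, n=30)
print(f"type S rate = {ts:.3f}, type M inflation = {tm:.1f}x")
```

With these assumed values the power is roughly 12%, and significant estimates overstate the true effect severalfold, which is the mechanism behind the inflated effect sizes the paper describes.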

Methods
Results
Discussion
Conclusion
