Abstract

Background
The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.

Methodology/Principal Findings
We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis because of the differences between studies.

Conclusions
Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
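To make the reported odds ratios concrete, the short sketch below works through what an odds ratio for "full reporting" means using an invented 2×2 table; the counts are purely illustrative assumptions and are not taken from the cohort studies included in the review.

```python
# Hypothetical 2x2 table (illustrative counts only, not data from the reviewed cohorts):
# rows = whether the outcome was statistically significant, columns = fully reported or not.
sig_reported, sig_not_reported = 80, 20          # statistically significant outcomes
nonsig_reported, nonsig_not_reported = 50, 50    # non-significant outcomes

odds_sig = sig_reported / sig_not_reported              # odds of full reporting if significant: 4.0
odds_nonsig = nonsig_reported / nonsig_not_reported     # odds of full reporting if non-significant: 1.0

odds_ratio = odds_sig / odds_nonsig
print(f"Odds ratio for full reporting: {odds_ratio:.1f}")
# An odds ratio of 4.0 would fall within the 2.2-4.7 range reported across the three studies.
```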

Highlights

  • Study publication bias arises when studies are published or not depending on their results; it has received much attention [1,2]

  • Study publication bias will lead to overestimation of treatment effects; it has been recognised as a threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making

  • Within-study selective reporting bias relates to studies that have been published


Introduction

Study publication bias arises when studies are published or not depending on their results; it has received much attention [1,2]. Study publication bias will lead to overestimation of treatment effects; it has been recognised as a threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. There is additional evidence that research without statistically significant results takes longer to reach publication than research with significant results, further biasing the evidence over time [4,5,6,29]. This "time lag bias" (or "pipeline bias") adds to the problem, because effect estimates from the earliest available evidence tend to be inflated and exaggerated [7,8].
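A minimal sketch of why selective publication inflates pooled effects is given below; the true effect size, standard error, number of trials, and the "only significant trials get published" rule are all assumptions made for illustration, not findings of this review.

```python
# Simulate many small trials of the same intervention, then compare the average effect
# across all trials with the average across only the "statistically significant" ones.
import random
import statistics

random.seed(1)
true_effect = 0.2          # assumed true standardised mean difference
n_trials, se = 200, 0.15   # hypothetical small trials sharing a common standard error

effects = [random.gauss(true_effect, se) for _ in range(n_trials)]
published = [e for e in effects if abs(e / se) > 1.96]   # only two-sided p < 0.05 reaches print

print(f"Mean effect, all trials:       {statistics.mean(effects):.2f}")   # close to 0.2
print(f"Mean effect, 'published' only: {statistics.mean(published):.2f}") # noticeably larger
```

Averaging only the significant trials, as a naive meta-analysis of the published literature would, returns an estimate well above the assumed true effect, which is the overestimation described above.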

