Abstract

Background: Systematic reviews underpin clinical practice and policies that guide healthcare decisions. A core component of many systematic reviews is meta-analysis, a statistical synthesis of results across studies. Errors in the conduct and interpretation of meta-analysis can lead to incorrect conclusions regarding the benefits and harms of interventions, and studies have shown that these errors are common. Enabling peer reviewers to better detect errors in meta-analysis through the use of a checklist provides an opportunity for these errors to be rectified before publication. To our knowledge, no such checklist exists. Objective: To develop and evaluate a checklist to detect errors in pairwise meta-analyses in systematic reviews of interventions. Methods: We will undertake a four-step process to develop the checklist. First, we will undertake a systematic review of studies that have evaluated errors in the conduct and interpretation of meta-analysis to generate a bank of items to consider for the checklist. Second, we will survey systematic review methodologists and statisticians to seek their views on which items from the bank generated in step 1 are most important to include in the checklist. Third, we will hold a virtual meeting to agree on which items to include in the checklist. Fourth, before finalising the checklist, we will pilot it with journal editors and peer reviewers. Conclusion: The developed checklist is intended to help journal editors and peer reviewers identify errors in the application and interpretation of meta-analyses in systematic reviews. Fewer errors in the conduct of meta-analyses, and improved interpretation of their results, will lead to more accurate review findings and conclusions to inform clinical practice.

Highlights

  • Systematic reviews (SRs) frequently underpin clinical practice guidelines and policies that guide healthcare decisions

  • When meta-analysing continuous outcomes, calculations may be incorrect if standard errors are confused with standard deviations

  • When data are included from multi-arm trials, there is a risk that participants are counted more than once when multiple comparisons from these trials are eligible for inclusion in the same meta-analysis (see the illustrative sketch after these highlights)
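
As a minimal illustration of the two errors noted in the highlights above (our own sketch, not part of the protocol; the function names and numbers are hypothetical), the example below shows how a reported standard error can be converted back to a standard deviation before pooling, and how a shared control arm from a multi-arm trial can be split so its participants are not counted twice in one meta-analysis:

```python
# Minimal sketch (ours, not from the protocol): two data-extraction checks
# relevant to the errors listed in the highlights above.
import math

def sd_from_se(se: float, n: int) -> float:
    """Recover a standard deviation from a reported standard error: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

# Treating an SE of 1.2 (n = 100) as if it were the SD understates variability
# ten-fold and gives the study far too much weight under inverse-variance pooling.
print(sd_from_se(1.2, 100))  # 12.0

def split_shared_control(n_control: int, n_comparisons: int) -> int:
    """Divide a shared control arm across the comparisons from one multi-arm trial
    so its participants are not counted more than once in the same meta-analysis."""
    return n_control // n_comparisons

print(split_shared_control(120, 2))  # 60 control participants per comparison
```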



Introduction

Systematic reviews (SRs) frequently underpin clinical practice guidelines and policies that guide healthcare decisions. One common type of error arises when dealing with non-standard randomized trials – such as crossover trials, cluster-randomized trials, or split-body trials – where there is a risk that the variances of the effect estimates in the meta-analysis do not appropriately account for the correlation in observations induced by these designs.[2,3,4,5] Such errors can lead to studies receiving incorrect weight in the meta-analysis, with potential consequent impact on the combined estimate of intervention effect and its confidence interval, and on other statistics such as the estimated heterogeneity variance and measures of inconsistency. In some circumstances, these errors will lead to a different interpretation of the findings and review conclusions.[6] Our focus will be on errors where it can reasonably be expected that a trained meta-analyst should have, or could have, known better, recognising that there is subjectivity in making this determination.[21]
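
As one hedged illustration of the kind of adjustment involved for a cluster-randomized trial (an assumed example, not prescribed by this protocol; the numbers and names are ours), a standard error that ignored clustering can be inflated by the design effect 1 + (m − 1) × ICC before the trial is included in the meta-analysis:

```python
# Illustrative sketch only (assumed example, not prescribed by the protocol):
# inflating a standard error from a cluster-randomised trial by the design effect
# 1 + (m - 1) * ICC so that clustering is reflected in the meta-analysis weights.
import math

def design_effect_adjusted_se(se_ignoring_clustering: float,
                              mean_cluster_size: float,
                              icc: float) -> float:
    """Multiply a naive standard error by the square root of the design effect."""
    design_effect = 1.0 + (mean_cluster_size - 1.0) * icc
    return se_ignoring_clustering * math.sqrt(design_effect)

# Example: naive SE = 0.10, average cluster size 20, assumed ICC = 0.05
# gives a design effect of 1.95 and an adjusted SE of about 0.14.
print(round(design_effect_adjusted_se(0.10, 20, 0.05), 2))  # 0.14
```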

Methods
Eligibility criteria
Search methods
Selection of studies
Data collection
Conclusion
