Abstract
Objective: To simulate possible changes in systematic review results if rapid review methods were used.

Study Design and Setting: We recalculated meta-analyses for binary primary outcomes in Cochrane systematic reviews, simulating rapid review methods. We simulated searching only PubMed; excluding older articles (5, 7, 10, 15, and 20 years before the search date); excluding smaller trials (<50, <100, and <200 participants); and using the largest trial only. We examined percentage changes in pooled odds ratios (ORs) (classed as no important change [<5%], small [<20%], moderate [<30%], or large [≥30%]), changes in statistical significance, and bias introduced by the rapid methods.

Results: In total, 2,512 systematic reviews (16,088 studies) were included. Rapid methods resulted in the loss of all data in 3.7–44.7% of meta-analyses. Searching only PubMed carried the smallest risk of changed ORs (19% [477/2,512] were small changes or greater; 10% [260/2,512] were moderate or greater). Changes in ORs varied substantially with each rapid review method: 8.4–21.3% were small, 1.9–8.8% were moderate, and 4.7–34.1% were large. Changes in statistical significance occurred in 6.5–38.6% of meta-analyses; changes from significant to nonsignificant were the most common (2.1–13.7% of meta-analyses). We found no evidence of bias with any rapid review method.

Conclusion: Searching PubMed only might be considered where a ∼10% risk of the primary outcome OR changing by >20% can be tolerated, for example in scoping reviews, where resources are limited, or where a synthesis is needed urgently. Other situations, such as clinical guidelines and regulatory decisions, favor more comprehensive systematic review methods.
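To make the recalculation concrete, the sketch below (Python; not code from the study, which analyzed Cochrane data) pools a fixed-effect inverse-variance odds ratio before and after applying one simulated shortcut, excluding trials with <100 participants, and bands the resulting percentage change using the thresholds above. The trial counts and function names are hypothetical illustrations only.

```python
import math

def pooled_log_or(studies):
    """Fixed-effect (inverse-variance) pooled log odds ratio.

    studies: list of (a, b, c, d) 2x2 cell counts
    (events / non-events in the intervention and control arms).
    """
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))  # per-study log OR
        var = 1/a + 1/b + 1/c + 1/d           # variance of the log OR
        weight = 1 / var                      # inverse-variance weight
        num += weight * log_or
        den += weight
    return num / den

def classify_or_change(full_or, rapid_or):
    """Band the percentage change in the pooled OR using the paper's
    thresholds: <5% no important change, <20% small, <30% moderate,
    >=30% large."""
    pct_change = abs(rapid_or - full_or) / full_or * 100
    if pct_change < 5:
        return "no important change"
    if pct_change < 20:
        return "small"
    if pct_change < 30:
        return "moderate"
    return "large"

# Hypothetical data: three trials; the "rapid" analysis drops trials
# with fewer than 100 participants, mimicking one simulated shortcut.
full = [(12, 88, 20, 80), (5, 40, 9, 36), (30, 170, 45, 155)]
rapid = [trial for trial in full if sum(trial) >= 100]

full_or = math.exp(pooled_log_or(full))
rapid_or = math.exp(pooled_log_or(rapid))
print(f"full OR={full_or:.2f}, rapid OR={rapid_or:.2f}, "
      f"change: {classify_or_change(full_or, rapid_or)}")
```

A fixed-effect pool is used here purely for brevity; the same comparison logic applies under a random-effects model.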
Highlights
Systematic reviews are regarded as the gold standard method for evidence synthesis but are time-consuming and laborious to produce
What is the implication and what should change now? PubMed-only searching might be considered in situations where a 10% risk of a >20% change in odds ratio for the primary outcome is tolerable. This might be the case for scoping reviews, where there is resource limitation, or where a synthesis is needed urgently
Systematic reviews are quickly outdated after publication [4] and resource limitation is an important reason why they are not kept up-to-date [5]. "Rapid" syntheses take methodological shortcuts and have become popular where syntheses are needed to tight deadlines, or where a conventional systematic review would be prohibitively expensive
Summary
Systematic reviews are regarded as the gold standard method for evidence synthesis but are time-consuming and laborious to produce. Common rapid methods include limiting searches to a single database, limiting to English-language publications, and limiting by publication date, among many others. By design, these methods risk missing some studies but aim to produce results similar enough to those of more exhaustive systematic reviews to be useful. Wagner et al. conducted an online questionnaire of 556 guideline developers and policy makers, aiming to find what risk of error would be acceptable in rapid reviews [14]. They found that participants demanded very high levels of accuracy and would tolerate a median risk of a wrong answer of 10% (interquartile range [IQR] 5–15%). This was similar across public health, pharmaceutical treatment, and prevention topics