Abstract
Objectives
We developed a free, online tool (CrowdCARE: crowdcare.unimelb.edu.au) to crowdsource research critical appraisal. The aim was to examine the validity of this approach for assessing the methodological quality of systematic reviews.

Study Design and Setting
In this prospective, cross-sectional study, a sample of systematic reviews (N = 71), of heterogeneous quality, was critically appraised in CrowdCARE using the Assessing the Methodological Quality of Systematic Reviews (AMSTAR) tool by five trained novice and two expert raters. After performing independent appraisals, the experts resolved any disagreements by consensus to produce an "expert consensus" rating, as the gold-standard approach.

Results
The expert consensus rating was within ±1 (on an 11-point scale) of the individual expert ratings for 82% of studies and within ±1 of the mean novice rating for 79% of studies. There was a strong correlation (r² = 0.89, P < 0.0001) and very good concordance (κ = 0.67, 95% CI: 0.61–0.73) between the expert consensus rating and the mean novice rating.

Conclusion
Crowdsourcing can be used to assess the quality of systematic reviews. Novices can be trained to appraise systematic reviews and, on average, achieve a high degree of accuracy relative to experts. These proof-of-concept data demonstrate the merit of crowdsourcing, compared with current gold standards of appraisal, and the potential for this approach to transform evidence-based practice worldwide by sharing the appraisal load.
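The agreement statistics reported above (Pearson correlation, weighted kappa, and the proportion of reviews within ±1 point) can be reproduced in outline with standard libraries. The sketch below is illustrative only, not the authors' analysis code: the example data are hypothetical, and the choice of linear kappa weighting and of rounding the mean novice rating to the 11-point AMSTAR scale are assumptions, as the abstract does not specify these details.

```python
# Illustrative sketch (not the authors' analysis code): comparing expert
# consensus AMSTAR scores (0-10) with the mean of five novice scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical example data for three reviews (the real study used N = 71).
expert_consensus = np.array([8, 3, 6])        # one consensus score per review
novice_scores = np.array([[9, 8, 7, 8, 8],    # five novice scores per review
                          [2, 4, 3, 3, 2],
                          [6, 5, 7, 6, 6]])

novice_mean = novice_scores.mean(axis=1)

# Strength of linear association between consensus and mean novice rating.
r, p = pearsonr(expert_consensus, novice_mean)
print(f"r^2 = {r**2:.2f}, p = {p:.4f}")

# Chance-corrected agreement; novice means rounded back to the 11-point scale.
# Linear weighting is an assumption.
kappa = cohen_kappa_score(expert_consensus,
                          np.rint(novice_mean).astype(int),
                          labels=list(range(11)),
                          weights="linear")
print(f"weighted kappa = {kappa:.2f}")

# Proportion of reviews where the mean novice rating falls within ±1 point
# of the expert consensus rating.
within_one = np.mean(np.abs(novice_mean - expert_consensus) <= 1)
print(f"within ±1: {within_one:.0%}")
```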