Abstract

Background: Utilisation of crowdsourcing within evidence synthesis has increased over the last decade. The crowdsourcing platform Cochrane Crowd has engaged a global community of 22,000 people from 170 countries. The COVID‐19 pandemic presented an opportunity to engage this community and keep up with the exponential output of COVID‐19 research.

Aims: To test whether a crowd could accurately assess study eligibility for reviews under time constraints. Outcome measures: time taken to complete each task, time to produce the required training modules, crowd sensitivity, crowd specificity and crowd consensus.

Methods: We created four crowd tasks, corresponding to four Cochrane COVID‐19 Rapid Reviews. The search results of each review were uploaded, and an interactive training module was developed for each task. Contributors who had participated in another COVID‐19 task were invited to take part. Each task was live for 48 hours. The final inclusion and exclusion decisions made by the core author team were used as the reference standard.

Results: Across all four reviews, 14,299 records were screened by 101 crowd contributors. The crowd completed the screening task within 48 hours for three reviews and in 52 hours for the fourth. Sensitivity ranged from 94% to 100%. Four studies, out of a total of 109, were incorrectly rejected by the crowd; however, their absence would ultimately not have altered the conclusions of the reviews. Crowd consensus ranged from 71% to 92% across the four reviews.

Conclusion: Crowdsourcing can play a valuable role in study identification and offers willing contributors the opportunity to help identify COVID‐19 research for rapid evidence syntheses.
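As an illustration (the abstract reports only per-review ranges, so this pooled calculation is an assumption): taking the standard definition of sensitivity and aggregating the reported counts across all four reviews, with 4 of 109 eligible studies incorrectly rejected, gives a rough overall figure consistent with the stated 94% to 100% range:

\[
\text{Sensitivity} = \frac{TP}{TP + FN} = \frac{109 - 4}{109} = \frac{105}{109} \approx 96.3\%
\]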

