Abstract

Background
Standard practice for conducting systematic reviews (SRs) is time consuming and involves the study team screening hundreds or thousands of citations. As the volume of medical literature grows, citation set sizes and the corresponding screening effort increase. While larger team sizes and alternate screening methods have the potential to reduce workload and decrease SR completion times, it is unknown whether investigators adapt team size or methods in response to citation set size. Using a cross-sectional design, we sought to understand how citation set size impacts (1) the total number of authors or individuals contributing to screening and (2) screening methods.

Methods
MEDLINE was searched in April 2019 for SRs on any health topic. A total of 1,880 unique publications were identified and sorted into five citation set size categories (after deduplication): < 1,000; 1,001–2,500; 2,501–5,000; 5,001–10,000; and > 10,000. A random sample of 259 SRs (~ 50 per category) was selected for data extraction and analysis.

Results
With the exception of the pairwise t test comparing the < 1,000 and > 10,000 categories (median 5 vs. 6 authors, p = 0.049), no statistically significant relationship was evident between author number and citation set size. While visual inspection was suggestive, statistical testing did not consistently identify a relationship between citation set size and the number of screeners (title-abstract, full text) or data extractors. However, logistic regression indicated that investigators were significantly more likely to deviate from gold-standard screening methods (i.e., independent duplicate screening) with larger citation sets. For every doubling of citation set size, the odds of using gold-standard screening decreased by 15% at title-abstract review and by 20% at full-text review. Finally, few SRs reported using crowdsourcing (n = 2) or computer-assisted screening (n = 1).

Conclusions
Large citation set sizes present a challenge to SR teams, especially when faced with time-sensitive health policy questions. Our study suggests that with increasing citation set size, authors are less likely to adhere to gold-standard screening methods. Adjunct screening methods, such as crowdsourcing (large teams) and computer-assisted technologies, may provide a viable way for authors to complete their SRs in a timely manner.
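The reported effect is easiest to read on a per-doubling scale: an odds ratio of roughly 0.85 per doubling at title-abstract review means that moving from 1,000 to 8,000 citations (three doublings) multiplies the odds of gold-standard screening by about 0.85³ ≈ 0.61. The sketch below illustrates how such a model can be fit, using synthetic data and illustrative variable names (this is not the authors' analysis code); the key idea is that with a log2-transformed predictor, exponentiating the coefficient yields the odds ratio per doubling directly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: citation set size and whether gold-standard
# (independent duplicate) screening was used at title-abstract review.
rng = np.random.default_rng(0)
n = 259
citations = rng.integers(200, 20_000, size=n)
log2_citations = np.log2(citations)

# Simulate outcomes so the odds fall ~15% per doubling (beta = ln(0.85)).
logit = 3.0 + np.log(0.85) * log2_citations
p = 1 / (1 + np.exp(-logit))
gold_standard = rng.binomial(1, p)

df = pd.DataFrame({"gold_standard": gold_standard,
                   "log2_citations": log2_citations})

# Logistic regression: the coefficient on log2_citations is the change in
# log-odds per doubling of citation set size; exponentiate to get the OR.
model = smf.logit("gold_standard ~ log2_citations", data=df).fit(disp=0)
odds_ratio_per_doubling = np.exp(model.params["log2_citations"])
print(f"OR per doubling: {odds_ratio_per_doubling:.2f}")  # approximately 0.85
```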

Highlights

  • Systematic reviews (SRs) are often placed at the top of the evidence pyramid due to their systematic methods and consideration of the entire body of evidence on a topic [1]

  • Few SRs reported using nonstandard screening methods such as crowdsourcing (n = 2) or computer-assisted screening (n = 1)

  • Upon more detailed review during data extraction, nine of the 250 randomly selected SRs were reclassified into different citation set size categories


Introduction

Systematic reviews (SRs) are often placed at the top of the evidence pyramid due to their systematic methods and consideration of the entire body of evidence on a topic [1]. The standard practice of relying on small teams of individuals to perform time-consuming tasks, such as screening thousands of abstracts, retrieving and reviewing hundreds of full-text articles, and extracting data, often leads to considerable delays between study initiation and completion [2]. This issue is further compounded by the recent exponential growth in scientific literature [3], which results in a larger number of citations at each stage of the SR process and a higher workload for each team member. Approaches intended to increase efficiency include computer-assisted screening (natural language processing or machine learning) [10,11,12,13], screening by a single reviewer [9], and screening of the title without the abstract [8]. While these methods can reduce the workload per reviewer or decrease the time to SR completion, there are concerns that they may compromise SR quality [9, 15,16,17]. Using a cross-sectional design, we sought to understand how citation set size impacts (1) the total number of authors or individuals contributing to screening and (2) screening methods.
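As context for the computer-assisted approaches cited above, the following is a minimal, generic sketch of machine-learning-assisted screening (synthetic abstracts, scikit-learn, illustrative variable names), not the specific tools evaluated in the cited studies. A classifier trained on a seed set of human-screened abstracts ranks the remaining citations so reviewers can screen the most likely includes first.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: abstracts already screened by humans (1 = include, 0 = exclude).
labeled_abstracts = [
    "RCT of drug X for condition Y ...",
    "Editorial on health policy ...",
    "Cohort study of treatment X outcomes ...",
    "Letter to the editor ...",
]
labels = [1, 0, 1, 0]

# Unscreened citations to prioritize.
unscreened = [
    "Randomized trial of X versus placebo ...",
    "Opinion piece on screening guidelines ...",
]

# Represent abstracts as TF-IDF vectors and fit a simple classifier.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_train = vectorizer.fit_transform(labeled_abstracts)
clf = LogisticRegression(max_iter=1000).fit(X_train, labels)

# Rank unscreened citations by predicted probability of inclusion;
# reviewers work down this list, seeing likely includes first.
scores = clf.predict_proba(vectorizer.transform(unscreened))[:, 1]
for score, text in sorted(zip(scores, unscreened), reverse=True):
    print(f"{score:.2f}  {text}")
```

In practice such tools are typically used to reorder or triage the screening queue rather than to exclude citations automatically, which is one reason their impact on SR quality remains debated in the cited literature.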

