Abstract
Background: Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics, and reviewer training on error rates and results.
Methods: We performed a systematic review of methodological literature in PubMed and the Cochrane methodological registry, supplemented by manual searches (December 2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second.
Results: The analysis included six studies: four on the frequency of extraction errors, one comparing different reviewer extraction methods, and two comparing different reviewer characteristics. We did not find a study on reviewer training. The rate of extraction errors was high (up to 50%), and errors often influenced effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates.
Conclusion: Despite the high prevalence of extraction errors, the evidence base for established standards of data extraction seems weak. More comparative studies are needed to gain deeper insight into the influence of different extraction methods.
Highlights
Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews
We evaluated the effect of different extraction methods, reviewer characteristics, and reviewer training on error rates and results
There is a high prevalence of extraction errors [8, 13, 15, 16]
Summary
Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. We evaluated the effect of different extraction methods, reviewer characteristics, and reviewer training on error rates and results. Systematic reviews (SRs) have become the cornerstone of evidence-based healthcare. An SR should use explicit methods to minimize bias with the aim of providing more reliable findings [1]. Bias can occur in the identification of studies, in the selection of studies (e.g., unclear inclusion criteria), in the data collection process, and in the validity assessment of included studies [2]. Many efforts have been made to further develop methods for SRs. However, the evidence base for most recommendations in established guidelines that aim to minimize bias in the preparation process of a systematic review remains weak.