Abstract

Systematic reviews are indispensable tools both for reliably informing decision-makers about the state of a field and for identifying areas that need further study. Their value, however, depends on their transparency and reproducibility. Readers should be able to determine what was searched for and when, where the authors searched, and whether the search was predetermined or evolved based on what was found. In this article, we measured the transparency and reproducibility of systematic reviews in forensic science, a field where courts, policymakers, and legislators count on systematic reviews to make informed decisions. In a sample of 100 systematic reviews published between 2018 and 2021, we found that completeness of reporting varied markedly. For instance, 50% of reviews claimed to follow a reporting guideline, and such statements were only modestly related to actual compliance with that guideline. As to specific reporting items, 82% reported all of the databases searched, 22% reported the review's full Boolean search logic, and just 7% reported that the review was registered. Among meta-analyses (n = 23), only one stated that its data were available, and none stated that its analytic code was available. We end with recommendations for improving the regulation of reporting practices, especially by journals. Our results may serve as a useful benchmark as the field evolves.
