Abstract

Background and context

Systematic reviews aim to provide high-quality, evidence-based syntheses of efficacy under real-world conditions and to help readers understand the correlations between exposures and outcomes. They are increasingly popular and serve several stakeholders (e.g., healthcare providers, researchers, educators, students, journal editors, policy makers, managers), helping them make informed recommendations for practice or policy.

Problem

Systematic reviews often exhibit low methodological and reporting quality. To tackle this, reporting guidelines have been developed to support the reporting and assessment of systematic reviews. Following such guidelines is crucial to ensuring that a review is transparent, complete, trustworthy, reproducible, and unbiased. However, systematic reviewers often fail to adhere to existing reporting guidelines, which may significantly decrease the quality of their reviews and may result in reviews that lack methodological rigor, yield findings of low credibility, and mislead decision-makers.

Methods

To ensure that a review complies with reporting guidelines, we rely on assurance cases, an emerging way of arguing comprehensively about the requirements of safety-critical systems and of checking the compliance of such systems with standards to support their certification. Because the nature of assurance cases makes them applicable to various domains and requirements/properties, we propose a new type of assurance case called the systematicity case. A systematicity case focuses on the systematicity property and allows arguing that a review is systematic, i.e., that it sufficiently complies with the targeted reporting guideline; the most widespread such guideline is PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses). We measure the confidence in a systematicity case representing a review as a means to quantify the systematicity of that review, i.e., the extent to which that review is systematic. We rely on rule-based Artificial Intelligence to create a knowledge-based system that automates the inference mechanism a given systematicity case embodies and that supports deciding on the systematicity of a given review.

Results

An empirical evaluation performed on 25 reviews (self-identifying as systematic) showed that these reviews exhibit suboptimal systematicity. More specifically, the systematicity of the analyzed reviews varies between 32.96% and 66.49%, with an average of 54.42%. More effort is therefore needed to report systematic reviews of higher quality, and further experiments are needed to explore the factors that hinder or assure the systematicity of reviews.

Audience

The main beneficiaries of our work are journal editors, peer reviewers, managers, policymakers, researchers, students, insurers, evidence users, and organizations developing reporting guidelines.
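To give a concrete sense of the kind of aggregation such a knowledge-based system could perform, the following is a minimal sketch, not the paper's actual implementation: it assumes each PRISMA item in a systematicity case receives a compliance score in [0, 1] and a weight, computes the overall systematicity as a weighted average expressed as a percentage, and applies a simple decision rule. The item names, scores, weights, and threshold are all hypothetical.

```python
# Minimal sketch (illustrative, not the paper's system): aggregate
# per-item compliance scores from a systematicity case into an overall
# systematicity percentage. Items, scores, and weights are hypothetical.

PRISMA_ITEMS = {
    # item: (compliance score in [0, 1], weight)
    "title": (1.0, 1.0),
    "abstract": (0.8, 1.0),
    "eligibility_criteria": (0.5, 2.0),
    "search_strategy": (0.4, 2.0),
    "risk_of_bias": (0.0, 2.0),
}

def systematicity(items: dict[str, tuple[float, float]]) -> float:
    """Weighted-average compliance, expressed as a percentage."""
    total_weight = sum(weight for _, weight in items.values())
    weighted_sum = sum(score * weight for score, weight in items.values())
    return 100.0 * weighted_sum / total_weight

def is_systematic(items: dict[str, tuple[float, float]],
                  threshold: float = 80.0) -> bool:
    """Decision rule: the review counts as systematic if its
    systematicity meets a chosen threshold (assumed here)."""
    return systematicity(items) >= threshold

if __name__ == "__main__":
    print(f"Systematicity: {systematicity(PRISMA_ITEMS):.2f}%")  # 45.00%
    print(f"Systematic enough: {is_systematic(PRISMA_ITEMS)}")   # False
```

In practice, the paper's rule-based approach reasons over the argument structure of the whole systematicity case rather than a flat checklist; the weighted average above merely illustrates how per-item compliance could roll up into percentages such as the 32.96% to 66.49% range reported in the evaluation.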
