Abstract
Objectives
To review the findings of studies that have evaluated the design and/or usability of key risk of bias (RoB) tools for the assessment of RoB in primary studies, as categorized by the Library of Assessment Tools and InsTruments Used to assess Data validity in Evidence Synthesis (LATITUDES) Network, a searchable library of RoB tools for evidence synthesis: Prediction model Risk Of Bias ASsessment Tool (PROBAST), Risk of Bias-2 (RoB2), Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I), Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2), Quality Assessment of Diagnostic Accuracy Studies-Comparative (QUADAS-C), Quality Assessment of Prognostic Accuracy Studies (QUAPAS), Risk Of Bias in Non-randomised Studies of Exposures (ROBINS-E), and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) RoB checklist.

Study Design and Setting
Systematic review of methodological studies. We conducted a forward citation search from the primary report of each tool to identify primary studies that aimed to evaluate the design and/or usability of that tool. Two reviewers assessed studies for inclusion. We extracted tool features into Microsoft Word and used NVivo for document analysis, combining deductive and inductive approaches. We summarized findings within each tool and explored common findings across tools.

Results
We identified 13 tool evaluations meeting our inclusion criteria: PROBAST (3), RoB2 (3), ROBINS-I (4), and QUADAS-2 (3). We identified no evaluations for the other tools. Evaluations varied in clinical topic area, methodology, approach to bias assessment, and tool user background; some had limitations affecting generalizability. We identified common findings across tools for 6 of 14 themes: (1) challenging items (eg, the RoB2/ROBINS-I "deviations from intended interventions" domain); (2) overall RoB judgment (concerns with the overall risk calculation in PROBAST/ROBINS-I); (3) tool usability (concerns about complexity); (4) time to complete the tool (varying demands on time, eg, depending on the number of outcomes assessed); (5) user agreement (varied across tools); and (6) recommendations for future use (eg, piloting) and development (add an intermediate domain answer to QUADAS-2/PROBAST; provide clearer guidance for all tools). Of the remaining eight themes, seven had findings only for the QUADAS-2 tool, limiting comparison across tools, and one ("reorganization of questions") had no findings.

Conclusion
Evaluations of key RoB tools have identified common challenges and recommendations for tool use and development. These findings may be helpful to people who use or develop RoB tools. Guidance is necessary to support the design and implementation of future RoB tool evaluations.