Abstract

Objective: To address the limitations of existing methods and tools for evaluating clinical practice guidelines, we aimed to develop a comprehensive instrument focusing on the three main dimensions of guideline development: scientificity, transparency, and applicability (abbreviated STAR). The instrument is intended to rank guidelines according to their scores, and its reliability, validity, and usability were also tested.

Methods: A multidisciplinary expert working group was set up, including methodologists, statisticians, journal editors, medical professionals, and others. A scoping review, Delphi methods, and hierarchical analysis were used to determine the final STAR checklist.

Results: The new instrument contained 11 domains and 39 items. The internal consistency of each domain was indicated by Cronbach's α coefficient, with an average value of 0.646. The Cohen's kappa coefficients for methodological evaluators and clinical evaluators were 0.783 and 0.618, respectively. The overall content validity index was 0.905, and the R² for the criterion validity analysis was 0.76. The average usability score across items was 4.6, and the mean time spent evaluating each guideline was 20 minutes.

Conclusion: The instrument has good reliability, validity, and evaluation efficiency, and can be used to evaluate and rank guidelines more comprehensively.
