Abstract

Objective: To assess the reproducibility of Naranjo Adverse Drug Reaction Probability Scale (APS) scores in published case reports.

Design: Reliability analysis.

Methods: Randomly selected case reports that used the APS were identified from the Web of Science database. The published APS scores were blinded, and scores were then independently calculated by four raters using the APS. The percentage of exact agreement between the raters' scores and the published APS scores was calculated for all case reports. Categorical scores were compared using a weighted κ statistic. For numerical scores, descriptive statistics were computed from raw and absolute difference scores.

Results: Twenty-four case reports were independently scored by four raters. Exact agreement between all raters' scores and the published APS scores was found in five (21%) of the 24 reports. Agreement between individual raters' scores and the published categorical scores ranged from 42% to 79%. Weighted κ ranged from 0.12 to 0.61, corresponding to strengths of agreement from poor to good. Differences in scoring by the raters resulted in 18% and 27% of case reports being reclassified into higher and lower APS categories, respectively, than those published.

Conclusions: Exact agreement between raters' scores and the published APS scores was infrequent. We recommend that authors of case reports include all pertinent details of the case and that journals ensure the robustness of the causality assessment during peer review.
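
The following is a minimal sketch, not taken from the paper, of how the agreement statistics described in the abstract (percentage of exact agreement, linearly weighted κ on causality categories, and raw/absolute difference scores) might be computed. The rater and published scores shown are hypothetical, and the helper aps_category is an assumed mapping based on the standard Naranjo cutoffs (≥9 definite, 5–8 probable, 1–4 possible, ≤0 doubtful); the weighted κ is computed with scikit-learn's cohen_kappa_score.

import numpy as np
from sklearn.metrics import cohen_kappa_score

def aps_category(score: int) -> str:
    """Map a numeric Naranjo APS score to its causality category."""
    if score >= 9:
        return "definite"
    if score >= 5:
        return "probable"
    if score >= 1:
        return "possible"
    return "doubtful"

# Hypothetical data: published APS scores and one rater's scores for the same reports.
published = np.array([6, 7, 5, 8, 4, 9, 3, 6])
rater = np.array([6, 5, 5, 7, 5, 9, 2, 4])

# Percentage of exact agreement on the numeric score.
exact_agreement = float(np.mean(rater == published))

# Weighted kappa on the ordered causality categories.
order = ["doubtful", "possible", "probable", "definite"]
pub_cat = [order.index(aps_category(s)) for s in published]
rat_cat = [order.index(aps_category(s)) for s in rater]
kappa = cohen_kappa_score(pub_cat, rat_cat, weights="linear")

# Descriptive statistics on raw and absolute difference scores.
diff = rater - published
print(f"exact agreement: {exact_agreement:.2f}")
print(f"weighted kappa:  {kappa:.2f}")
print(f"mean raw diff:   {diff.mean():.2f}, mean |diff|: {np.abs(diff).mean():.2f}")

In practice this comparison would be repeated for each of the four raters against the published scores, which is consistent with the per-rater ranges (42%–79% agreement, κ 0.12–0.61) reported above.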
