Abstract

Agreement studies, where several observers may be rating the same subject for some characteristic measured on an ordinal scale, provide important information. The weighted Kappa coefficient is a popular measure of agreement for ordinal ratings. However, in some studies, the raters use scales with different numbers of categories. For example, a patient quality of life questionnaire may ask 'How do you feel today?' with possible answers ranging from 1 (worst) to 7 (best). At the same visit, the doctor reports his impression of the patient's health status as very poor, poor, fair, good, or very good. The weighted Kappa coefficient is not applicable here because the two scales have a different number of categories. In this paper, we discuss Kappa coefficients to measure agreement between such ratings. In particular, with R categories of one rating, and C categories of another, by dichotomizing the two ratings at all possible cutpoints, there are (R−1)(C−1) possible 2×2 tables. For each of these 2×2 tables, we estimate the Kappa coefficient for dichotomous ratings. The largest estimated Kappa coefficients suggest the cutpoints for the two ratings where agreement is the highest and where categories can be combined for further analysis.

Keywords and phrases: measure of agreement, Kappa coefficient, ordinal data
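The cutpoint search described above can be sketched concretely. The following is a minimal illustration, assuming the joint ratings are available as an R×C contingency table (rows for one rater, columns for the other); the function names and the simple unweighted 2×2 Kappa formula used here are assumptions for illustration, not the authors' software.

```python
import numpy as np

def kappa_2x2(t):
    """Cohen's kappa for a 2x2 agreement table t = [[a, b], [c, d]]."""
    n = t.sum()
    po = (t[0, 0] + t[1, 1]) / n                       # observed agreement
    pe = (t[0].sum() * t[:, 0].sum() +                 # chance-expected agreement
          t[1].sum() * t[:, 1].sum()) / n ** 2
    return (po - pe) / (1 - pe)

def best_cutpoints(table):
    """Scan all (R-1)(C-1) dichotomizations of an R x C ratings table and
    return (kappa, row cutpoint, column cutpoint) with the largest kappa."""
    table = np.asarray(table, dtype=float)
    R, C = table.shape
    best = None
    for r in range(1, R):           # dichotomize the first rating below/at category r
        for c in range(1, C):       # dichotomize the second rating below/at category c
            collapsed = np.array([
                [table[:r, :c].sum(), table[:r, c:].sum()],
                [table[r:, :c].sum(), table[r:, c:].sum()],
            ])
            k = kappa_2x2(collapsed)
            if best is None or k > best[0]:
                best = (k, r, c)
    return best
```

For the quality-of-life example, `table` would be the 7×5 cross-classification of the patient's 1-7 responses against the doctor's five health-status categories, and the returned cutpoints indicate where the two scales agree most strongly and where adjacent categories could be combined.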

