Abstract

The need to assess agreement arises in many clinical studies where quantifying inter-rater reliability is of great importance. Use of unscaled agreement indices, such as the total deviation index and coverage probability (CP), is recommended for two main reasons: (i) they are intuitive in the sense that their interpretation is tied to the original measurement unit; (ii) practitioners can readily determine whether agreement is satisfactory by directly comparing the value of the index to a prespecified tolerable CP or absolute difference. However, these unscaled indices have been defined only for comparing two raters, or for multiple raters under the assumption of homogeneous variances across raters. In this paper, we introduce a set of overall indices based on the root mean square of pairwise differences that are unscaled and can be used to evaluate agreement among multiple raters, whose measurement processes are often heterogeneous in practice. Furthermore, we propose another overall agreement index, also based on the root mean square of pairwise differences, that is scaled and extends the concept of the recently proposed relative area under the CP curve to the setting of multiple raters. We present the definitions of the overall indices and propose inference procedures in which bootstrap methods are used to estimate standard errors. Through simulation studies, we assess the performance of the proposed approach and demonstrate its superiority over existing methods when raters exhibit heterogeneous measurement processes. Finally, we illustrate the application of our methods using a renal study.
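To make the central quantities concrete, the following sketch computes an overall index as the root mean square of pairwise rater differences, together with a nonparametric bootstrap standard error. This is an illustrative reading of the abstract only: the function names, the exact averaging over rater pairs, and the bootstrap scheme (resampling subjects with replacement) are assumptions, not the paper's precise definitions.

```python
import math
import random
from itertools import combinations

def pairwise_rmsd(ratings):
    """Root mean square of pairwise rater differences.

    `ratings` is a list of per-subject lists, one reading per rater.
    Differences are pooled over all rater pairs and all subjects.
    (Illustrative sketch; the paper's exact definition may differ.)
    """
    k = len(ratings[0])            # number of raters
    total, count = 0.0, 0
    for i, j in combinations(range(k), 2):
        for row in ratings:
            total += (row[i] - row[j]) ** 2
            count += 1
    return math.sqrt(total / count)

def bootstrap_se(ratings, stat, n_boot=500, seed=0):
    """Nonparametric bootstrap SE: resample subjects with replacement
    and take the sample SD of the replicated statistic."""
    rng = random.Random(seed)
    n = len(ratings)
    reps = [stat([ratings[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    mean = sum(reps) / n_boot
    return math.sqrt(sum((r - mean) ** 2 for r in reps) / (n_boot - 1))
```

Because the index stays in the original measurement unit, its value can be compared directly to a prespecified tolerable absolute difference, which is the practical appeal of unscaled indices noted above.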
