Abstract

This article reviews some well-known indices of agreement, the conceptual and statistical issues related to their estimation, and their interpretation for both categorical and interval-scale measurements. Particular measures of agreement discussed include kappa, a measure of category distinguishability, measures of agreement for nominal responses, monotonic agreement, and agreement measures for 2 × 2 tables. Agreement of multiple raters on the same subject is also discussed, together with the use of unanimity and majority rules. Software to carry out these calculations is also mentioned. Since the publication of the first review, several books have been published (Dunn, G. (2004); Broemeling, L. D. (2009). Bayesian Methods for Measures of Agreement, Chapman & Hall/CRC Biostatistics Series, Boca Raton, FL; Gwet, K. L. Handbook of Inter-rater Reliability, 3rd Edition, Advanced Analytics, LLC, Gaithersburg, MD; Shoukri, M. M. (2010). Measures of Interobserver Agreement and Reliability, 2nd Edition, Chapman & Hall/CRC Biostatistics Series, Boca Raton, FL; Von Eye, A. and Mun, E. Y. (2004). Analyzing Rater Agreement: Manifest Variable Methods), an indication that the topic is of interest to applied statisticians, clinicians, and other practitioners.
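
As a point of reference for the measures named above, Cohen's kappa for two raters is commonly computed as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the raters' marginal category proportions. The following is a minimal Python sketch of that standard formula, not the article's own software; the 2 × 2 table of counts is hypothetical and supplied only for illustration.

import numpy as np

def cohens_kappa(table):
    # kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for
    # the agreement expected by chance under independent raters.
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                 # observed agreement (diagonal mass)
    row_marg = table.sum(axis=1) / n          # rater 1 marginal proportions
    col_marg = table.sum(axis=0) / n          # rater 2 marginal proportions
    p_e = float(np.dot(row_marg, col_marg))   # chance-expected agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2 x 2 table: rows = rater 1, columns = rater 2.
ratings = [[40, 10],
           [5, 45]]
print(round(cohens_kappa(ratings), 3))        # 0.7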
