Abstract

A method is proposed for the analysis of nonagreements among multiple raters. The method is based on a notion of impartiality of nonagreeing assignments, and it aids in the detection of areas of possible confusion among different categories of a classification scale. A large-sample theory is derived for testing impartiality, and the methods are illustrated with published data on psychiatric classifications. Connections with kappa statistics for measuring rater agreement are also considered.
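The abstract does not spell out the impartiality test itself, so the paper's method is not reproduced here. As background for the stated connection with kappa statistics, the sketch below computes the standard Fleiss' kappa from a subjects-by-categories table of rating counts; the function name `fleiss_kappa` and the example data are hypothetical illustrations, not material from the paper.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories matrix of rating counts.

    counts[i, j] = number of raters assigning subject i to category j;
    every subject is assumed to be rated by the same number of raters.
    (Illustrative sketch only; not the paper's impartiality test.)
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()

    # Overall proportion of assignments falling in each category.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)

    # Observed agreement: proportion of agreeing rater pairs per subject.
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_obs = p_i.mean()

    # Chance agreement implied by the marginal category proportions.
    p_exp = (p_j ** 2).sum()

    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical example: 4 subjects, 3 categories, 5 raters per subject.
ratings = [
    [5, 0, 0],
    [2, 3, 0],
    [0, 1, 4],
    [1, 1, 3],
]
print(round(fleiss_kappa(ratings), 3))
```

Kappa summarizes agreement in a single index; the method proposed in the paper instead examines the nonagreeing assignments themselves to locate categories that raters tend to confuse.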
