Abstract

This paper is concerned with measuring agreement in test-retest studies of reliability. Discussion is confined principally to the 2 × 2 case. The commonly used index of agreement is calculated as the number of subjects identically classified by both test and retest divided by the total number of individuals classified. The inadequacies of this index (referred to as the index of 'crude agreement' and denoted by A) are discussed. In light of the deficiencies of A, an index of 'adjusted agreement', denoted by A₁, is proposed: A₁ = (1/4)[a/(a + b) + a/(a + c) + d/(c + d) + d/(b + d)], where a and d are the cells of agreement and b and c the cells of disagreement. By the nature of its construction, A₁ yields the very useful result that expected agreement (based on observed marginals) is always 1/2, or 50 per cent. The two indexes A and A₁ are compared by means of numerical examples and by application to published studies. The limitations of A₁ are discussed. A test of significance for the 2 × 2 case, and the extension of A₁ to the n × n case, are considered.
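The two indexes described above can be sketched in a few lines of code. The sketch below is illustrative only (the function names are not from the paper): it computes crude agreement A and adjusted agreement A₁ from the four cells of a 2 × 2 table, and checks the stated property that A₁ evaluated on the expected cell counts (computed from the observed marginals under independence) always equals 1/2.

```python
def crude_agreement(a, b, c, d):
    """Crude agreement A = (a + d) / n: proportion identically classified."""
    return (a + d) / (a + b + c + d)

def adjusted_agreement(a, b, c, d):
    """Adjusted agreement A1 = (1/4)[a/(a+b) + a/(a+c) + d/(c+d) + d/(b+d)]."""
    return (a / (a + b) + a / (a + c) + d / (c + d) + d / (b + d)) / 4

def expected_cells(a, b, c, d):
    """Expected cell counts from the observed marginals, assuming independence."""
    n = a + b + c + d
    return ((a + b) * (a + c) / n, (a + b) * (b + d) / n,
            (c + d) * (a + c) / n, (c + d) * (b + d) / n)

# A symmetric table: both indexes agree here.
print(crude_agreement(40, 10, 10, 40))     # 0.8
print(adjusted_agreement(40, 10, 10, 40))  # 0.8

# Expected agreement based on the marginals of an asymmetric table is 1/2.
print(adjusted_agreement(*expected_cells(30, 20, 5, 45)))  # 0.5
```

The last line illustrates why A₁ is convenient as a chance-corrected baseline: whatever the observed marginals, the table expected under independence scores exactly 1/2, so observed values of A₁ can be read against a fixed 50 per cent reference point.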
