Abstract

In many studies it is valuable to assess both intrarater and interrater agreement. Most measures of intrarater agreement do not adjust for unequal prevalence estimates between a given rater's separate rating occasions, and when raters make duplicate assessments, measures of interrater agreement typically ignore the data from the second set of assessments. When both measures are assessed, there are instances in which the interrater agreement exceeds at least one of the corresponding intrarater agreements, implying that a rater agrees more with another rater than with himself or herself. For the situation in which multiple raters make duplicate assessments on all subjects, the authors propose properties for an agreement measure based on the odds ratio for a dichotomous trait: (i) estimate a single prevalence across the two reading occasions for each rater; (ii) estimate pairwise interrater agreement from all available data; (iii) bound the pairwise interrater agreement above by the corresponding intrarater agreements. Odds ratios satisfying these properties are estimated by maximizing the multinomial likelihood under constraints, using generalized log-linear models together with a generalization of the Lemke-Dykstra iterative-incremental algorithm. An example from a mammography examination reliability study demonstrates the new method.
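To make the three properties concrete, the sketch below computes unconstrained sample odds ratios from hypothetical duplicate dichotomous ratings by two raters and checks whether property (iii) is violated. This is only an illustration of why constrained estimation is needed; it is not the authors' generalized log-linear model fit, and the data, function names, and continuity correction are assumptions.

```python
import numpy as np

def odds_ratio(x, y, smooth=0.5):
    """Sample odds ratio for two dichotomous rating vectors, with a small
    continuity correction to avoid division by zero (an assumed choice)."""
    x, y = np.asarray(x), np.asarray(y)
    n11 = np.sum((x == 1) & (y == 1)) + smooth
    n10 = np.sum((x == 1) & (y == 0)) + smooth
    n01 = np.sum((x == 0) & (y == 1)) + smooth
    n00 = np.sum((x == 0) & (y == 0)) + smooth
    return (n11 * n00) / (n10 * n01)

# Hypothetical duplicate readings: raters A and B each rate the same
# subjects on two occasions (1 = trait present, 0 = absent).
a1 = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
a2 = [1, 0, 0, 0, 1, 0, 1, 1, 1, 1]
b1 = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
b2 = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]

# Intrarater agreement: each rater's occasion 1 vs. occasion 2.
or_aa = odds_ratio(a1, a2)
or_bb = odds_ratio(b1, b2)

# Interrater agreement from all available data (property (ii)): pool the
# two occasions instead of discarding the second readings.
or_ab = odds_ratio(a1 + a2, b1 + b2)

print(f"intrarater A: {or_aa:.2f}, intrarater B: {or_bb:.2f}, "
      f"interrater A-B: {or_ab:.2f}")

# Property (iii): interrater agreement should not exceed either intrarater
# agreement. Unconstrained sample estimates can violate this ordering,
# which is what motivates the constrained maximum likelihood approach.
if or_ab > min(or_aa, or_bb):
    print("Unconstrained estimates violate the ordering; "
          "constrained estimation is needed.")
```

In the authors' approach, the ordering in property (iii) and the single-prevalence requirement in property (i) enter as constraints on the multinomial likelihood rather than as an after-the-fact check as above.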
