Abstract

Agreement is a broad term covering evaluations of both the accuracy and the precision of measurements. Assessment of observer agreement is based on the similarity between readings made on the same subject by different observers. Agreement on categorical observations is traditionally assessed with kappa or weighted kappa coefficients. However, kappa statistics have been criticized because they attain implausible values when the marginal distributions are skewed and/or unbalanced. New scaled indices, called coefficients of individual agreement (CIAs), have been developed to assess individual observer agreement by comparing the observed disagreement between two observers with the disagreement between replicated observations made by the same observer on the same subject. The underlying notion is that, when agreement is good, the disagreement between two observers is not expected to exceed the disagreement between replicated observations of the same observer; hence, satisfactory agreement is established when these two quantities are similar. This idea is extended here, and a new method based on the generalized linear mixed model is proposed to estimate the CIAs for binary data consisting of matched sets of repeated measurements made by the same observer under different conditions. The conditions may represent different time points, raters, laboratories, treatments, etc. The new approach allows both the values of the measured variable and the magnitude of agreement to vary across conditions. The reliability of the estimation method is examined via a simulation study. Data from a study aimed at determining the validity of breast cancer diagnosis based on mammography are used to illustrate the new concepts and methods.
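To make the CIA idea concrete, below is a minimal, moment-based sketch for two observers who each provide two replicated binary readings per subject: within-observer disagreement (replicate vs. replicate) is compared with between-observer disagreement, and a ratio near 1 suggests the observers disagree no more than each observer disagrees with itself. This is an illustration only; it does not implement the GLMM-based estimator proposed in the paper, and the function names and simulated data are hypothetical.

```python
import numpy as np

def disagreement(a, b):
    """Mean squared difference between two binary (0/1) reading vectors.

    For binary data this equals the proportion of discordant pairs.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.mean((a - b) ** 2)

def cia_no_reference(obs1_rep1, obs1_rep2, obs2_rep1, obs2_rep2):
    """Moment-based CIA-type estimate for two observers with replicates.

    Compares average within-observer disagreement with average
    between-observer disagreement; values close to 1 indicate that the
    between-observer disagreement is similar to the intra-observer one.
    """
    # Within-observer disagreement, averaged over the two observers
    within = 0.5 * (disagreement(obs1_rep1, obs1_rep2) +
                    disagreement(obs2_rep1, obs2_rep2))

    # Between-observer disagreement, averaged over all replicate pairings
    between = np.mean([
        disagreement(obs1_rep1, obs2_rep1),
        disagreement(obs1_rep1, obs2_rep2),
        disagreement(obs1_rep2, obs2_rep1),
        disagreement(obs1_rep2, obs2_rep2),
    ])

    return within / between

# Hypothetical example: 200 subjects, each observer reads every subject twice;
# readings flip the (simulated) true binary status with probability 0.15.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)
read = lambda: np.where(rng.random(truth.size) < 0.15, 1 - truth, truth)

print(cia_no_reference(read(), read(), read(), read()))
```

Because both observers here share the same error rate, the estimated coefficient should fall near 1; making one observer noisier than the other would push the between-observer disagreement above the within-observer disagreement and lower the coefficient.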
