Abstract

Decision-making processes often rely on subjective evaluations provided by human raters. In the absence of a gold standard against which to check the trueness of the evaluations, a rater's evaluative performance is generally measured through rater agreement coefficients. In this study, several parametric and non-parametric inferential benchmarking procedures for characterizing the extent of rater agreement, assessed via kappa-type agreement coefficients, are illustrated. A Monte Carlo simulation study was conducted to compare the performance of each procedure in terms of a weighted misclassification rate computed over all agreement categories. Moreover, to investigate whether the procedures overestimate or underestimate the level of agreement, misclassifications were also computed for each category separately. The practical application of the coefficients and inferential benchmarking procedures is illustrated with two real data sets representing different experimental conditions, so as to highlight performance differences due to sample size.
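The abstract does not specify which kappa-type coefficients or benchmark scales the study adopts; as a minimal illustrative sketch, the Python snippet below computes Cohen's kappa for two hypothetical raters and benchmarks the lower bound of a bootstrap confidence interval against the Landis-Koch scale, one simple form of inferential (rather than point) benchmarking. The data, coefficient, and benchmark scale here are assumptions for illustration only.

# Illustrative sketch, not the paper's exact procedure: Cohen's kappa for two
# raters, benchmarked via a bootstrap lower confidence bound on the
# Landis-Koch scale.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical ratings from two raters on a 3-category scale.
rater_a = rng.integers(0, 3, size=100)
rater_b = np.where(rng.random(100) < 0.8, rater_a, rng.integers(0, 3, size=100))

def benchmark(kappa):
    """Map a kappa value to a Landis-Koch agreement category."""
    if kappa < 0:
        return "poor"
    cutoffs = [(0.80, "almost perfect"), (0.60, "substantial"),
               (0.40, "moderate"), (0.20, "fair")]
    for lower, label in cutoffs:
        if kappa > lower:
            return label
    return "slight"

# Point estimate of agreement.
kappa_hat = cohen_kappa_score(rater_a, rater_b)

# Non-parametric bootstrap over subjects to obtain a one-sided 95% lower
# confidence bound, which is then benchmarked instead of the point estimate.
n = len(rater_a)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(cohen_kappa_score(rater_a[idx], rater_b[idx]))
lower_bound = np.percentile(boot, 5)

print(f"kappa = {kappa_hat:.3f}, point benchmark: '{benchmark(kappa_hat)}'")
print(f"95% lower bound = {lower_bound:.3f}, "
      f"inferential benchmark: '{benchmark(lower_bound)}'")

Benchmarking a confidence bound rather than the point estimate is one way inferential procedures account for sampling uncertainty, which matters most at small sample sizes, the kind of effect the abstract's comparison of data sets is intended to highlight.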
