Abstract

Identifying the relative idiosyncratic and shared contributions to judgments is a fundamental challenge in the study of human behavior, yet there is no established method for estimating these contributions. Using edge cases of stimuli varying in intrarater reliability and interrater agreement (faces, high on both; objects, high on the former but low on the latter; complex patterns, low on both), we showed that variance component analyses (VCAs) accurately captured the psychometric properties of the data (Study 1). Simulations showed that the VCA generalizes to any arbitrary continuous rating and that both sample and stimulus set size affect estimate precision (Study 2). Generally, a minimum of 60 raters and 30 stimuli provided reasonable estimates within our simulations. Furthermore, VCA estimates stabilized given more than two repeated measures, consistent with the finding that both intrarater reliability and interrater agreement increased nonlinearly with repeated measures (Study 3). The VCA provides a rigorous examination of where variance lies in data, can be implemented using mixed models with crossed random effects, and is general enough to be useful in any judgment domain in which agreement and disagreement are important to quantify and in which multiple raters independently rate multiple stimuli.
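To make the idea concrete, the decomposition described above can be sketched with a simple method-of-moments variance component analysis for a balanced, fully crossed raters × stimuli design with one rating per cell. The abstract's implementation uses mixed models with crossed random effects; this ANOVA-style estimator is an illustrative stand-in, and all numbers below (true variances, sample sizes) are hypothetical, with sample sizes chosen to match the abstract's suggested minimums of 60 raters and 30 stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)
n_raters, n_stimuli = 60, 30                     # minimums suggested in the abstract
var_rater, var_stim, var_resid = 1.0, 2.0, 0.5   # assumed true components (hypothetical)

# Simulate ratings: y[r, s] = grand mean + rater effect + stimulus effect + noise.
# Rater variance is "idiosyncratic"; stimulus variance is "shared".
ratings = (
    5.0
    + rng.normal(0.0, np.sqrt(var_rater), (n_raters, 1))
    + rng.normal(0.0, np.sqrt(var_stim), (1, n_stimuli))
    + rng.normal(0.0, np.sqrt(var_resid), (n_raters, n_stimuli))
)

grand = ratings.mean()
rater_means = ratings.mean(axis=1)
stim_means = ratings.mean(axis=0)

# Mean squares for the two-way crossed random-effects ANOVA
ms_rater = n_stimuli * np.sum((rater_means - grand) ** 2) / (n_raters - 1)
ms_stim = n_raters * np.sum((stim_means - grand) ** 2) / (n_stimuli - 1)
resid = ratings - rater_means[:, None] - stim_means[None, :] + grand
ms_resid = np.sum(resid ** 2) / ((n_raters - 1) * (n_stimuli - 1))

# Solve the expected-mean-square equations for the variance components:
# E[MS_rater] = sigma_e^2 + n_stimuli * sigma_rater^2, etc.
est_resid = ms_resid
est_rater = (ms_rater - ms_resid) / n_stimuli    # idiosyncratic (rater) component
est_stim = (ms_stim - ms_resid) / n_raters       # shared (stimulus) component

print(f"rater: {est_rater:.2f}, stimulus: {est_stim:.2f}, residual: {est_resid:.2f}")
```

The same decomposition can be fit as a mixed model with crossed random intercepts for raters and stimuli (e.g., `lme4::lmer(rating ~ 1 + (1|rater) + (1|stimulus))` in R), which also handles unbalanced designs and repeated measures.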
