Abstract

This study examines the fidelity of ranking and rating scales in the context of online peer review and assessment. Using Monte Carlo simulation, we demonstrate that rating scales outperform ranking scales in revealing the relative true latent quality of peer-assessed artifacts via the observed aggregate peer assessment scores. Our analysis focuses on a simple, single-round peer assessment process and takes into account peer assessment network topology, network size, the number of assessments per artifact, and the correlation statistics used. This methodology allows us to separate the effects of the structural components of peer assessment from cognitive effects.
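The kind of simulation the abstract describes can be sketched as follows. This is a minimal illustrative toy, not the authors' actual model: the noise level, the 1-5 rating scale, the bundle size (artifacts per assessor), and the Gaussian latent quality are all assumptions made here for the sketch. Each simulated assessor evaluates a small random bundle of artifacts either by rating each one or by ranking them within the bundle; fidelity is then measured as the Spearman correlation between true latent quality and the aggregate observed scores.

```python
import random
import statistics

def spearman(x, y):
    """Spearman correlation, computed as Pearson correlation on ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for pos, i in enumerate(order):
            r[i] = float(pos)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def simulate(scale, n_artifacts=40, n_assessors=40, bundle=4,
             noise=0.7, trials=100, seed=0):
    """Mean Spearman correlation between true latent quality and the
    aggregate peer scores, averaged over Monte Carlo trials.
    scale is "rating" (1-5 per artifact) or "ranking" (within-bundle)."""
    rng = random.Random(seed)
    corrs = []
    for _ in range(trials):
        # True latent quality of each artifact (assumed standard normal).
        quality = [rng.gauss(0.0, 1.0) for _ in range(n_artifacts)]
        totals = [0.0] * n_artifacts
        counts = [0] * n_artifacts
        for _ in range(n_assessors):
            # Each assessor sees a random bundle and perceives quality
            # with additive Gaussian noise (a cognitive-error stand-in).
            ids = rng.sample(range(n_artifacts), bundle)
            perceived = {i: quality[i] + rng.gauss(0.0, noise) for i in ids}
            if scale == "rating":
                for i, p in perceived.items():
                    totals[i] += min(5, max(1, round(3 + 1.5 * p)))
                    counts[i] += 1
            else:  # ranking within the bundle: 1 = worst ... bundle = best
                for rank, i in enumerate(sorted(ids, key=perceived.get), 1):
                    totals[i] += rank
                    counts[i] += 1
        agg = [totals[i] / counts[i] if counts[i] else 0.0
               for i in range(n_artifacts)]
        corrs.append(spearman(quality, agg))
    return statistics.mean(corrs)

print("rating fidelity :", round(simulate("rating"), 3))
print("ranking fidelity:", round(simulate("ranking"), 3))
```

Varying `n_artifacts`, `n_assessors`, `bundle`, and the assignment of bundles to assessors corresponds to the network-size, assessments-per-artifact, and topology factors the study manipulates.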
