Abstract

Juries are a high-stakes practice in higher education for assessing complex competencies. Despite their widespread use, research lags behind in detailing the psychometric qualities of juries, especially when rubrics or rating scales are used as the assessment tool. In this study, I analyze a case of jury assessment (N = 191) in product development, in which both internal teaching staff and external judges assess students and fill in an analytic rating scale. Using polytomous item response theory (IRT) analysis developed for heterogeneous juries (i.e., jury response theory, or JRT), this study provides insight into the validity and reliability of the assessment tool used. The results indicate that JRT helps detect unreliable response patterns pointing to an excellence bias, i.e., a tendency not to score in the highest response category. The article concludes with a discussion of how to counter such bias when rating scales or rubrics are used for summative assessment.
