Abstract

An important challenge in effectively implementing the peer assessment process is the validity of the grades that students assign to their peers. Validity is defined as the level of agreement between the grades given by the students and the reference grades given by the teacher. The literature offers conflicting perspectives on the validity of the peer assessment process, and very few works investigate the validity of peer grading in project-based learning (PBL) settings. Hence, in this paper we address this less explored direction by applying peer assessment in conjunction with PBL in a Human-Computer Interaction course; a dedicated platform called LearnEval is used to support the peer assessment process and 27 students participate in the study. Two main research questions are investigated: (1) How does the validity of the peer grades evolve throughout the semester, over several peer assessment sessions? (2) How does the grading mechanism provided by LearnEval compare to a baseline approach which relies on the mean of the peer grades? The preliminary findings are encouraging, but the study also reveals some limitations and areas for improvement.
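To make the baseline and the notion of validity concrete, the sketch below shows one plausible way to compute a mean-of-peer-grades baseline and to measure agreement with the teacher's reference grades. The paper does not specify the exact agreement metric or data layout, so the grade scale, the example data, and the choice of Pearson correlation and mean absolute error here are illustrative assumptions only, not the authors' method or the LearnEval mechanism.

```python
# Illustrative sketch (assumptions, not the paper's implementation): a
# mean-of-peer-grades baseline and two common agreement measures between
# aggregated peer grades and the teacher's reference grades.
from statistics import mean
from math import sqrt


def baseline_grade(peer_grades):
    """Baseline aggregation: simple mean of the grades assigned by peers."""
    return mean(peer_grades)


def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of grades."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def mean_absolute_error(xs, ys):
    """Average absolute gap between aggregated peer grades and reference grades."""
    return mean(abs(x - y) for x, y in zip(xs, ys))


# Hypothetical data: peer grades per submission and the teacher's reference grades.
peer_grades = {"s1": [8, 9, 7], "s2": [6, 5, 7], "s3": [9, 10, 9]}
teacher = {"s1": 8.5, "s2": 6.0, "s3": 9.5}

aggregated = [baseline_grade(peer_grades[s]) for s in teacher]
reference = list(teacher.values())
print("Pearson r:", round(pearson(aggregated, reference), 3))
print("MAE:", round(mean_absolute_error(aggregated, reference), 3))
```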
