Abstract

Peer assessment has become a primary solution to the challenge of evaluating large numbers of students in Massive Open Online Courses (MOOCs). In peer assessment, every student evaluates a subset of other students' assignments, and these peer grades are aggregated to predict a final score for each student. Unfortunately, owing to a lack of grading experience or heterogeneous grading abilities, students may introduce unintentional deviations into their evaluations. This paper proposes and implements a semi-supervised peer assessment method (SSPA) that incorporates a small number of the teacher's gradings as ground truth and uses them to externally calibrate the aggregation of peer grades. Specifically, each student's grading ability is measured directly (if the student and the teacher graded some assignments in common) or indirectly (if they did not) by the grading similarity between the student and the teacher. SSPA then infers each student's final score as a weighted aggregation of the peer grades. Experimental results on both a real dataset and synthetic datasets show that SSPA outperforms existing methods.
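To make the calibration-and-aggregation idea concrete, the sketch below shows one plausible realization in Python. The similarity measure (the inverse of one plus the mean absolute grade difference) and all function names are assumptions chosen for illustration, not the paper's exact formulation, and the indirect (transitive) similarity case is omitted.

import numpy as np

# Minimal sketch of SSPA-style calibration and weighted aggregation.
# The similarity measure below is an assumption for illustration, not
# necessarily the definition used in the paper.

def grading_similarity(student_grades, teacher_grades):
    """Direct grading ability: similarity between a student's and the
    teacher's grades on assignments both evaluated (here, the inverse
    of one plus the mean absolute grade difference)."""
    diffs = np.abs(np.asarray(student_grades, dtype=float)
                   - np.asarray(teacher_grades, dtype=float))
    return 1.0 / (1.0 + diffs.mean())

def aggregate_peer_grades(peer_grades, grader_similarities):
    """Final score: similarity-weighted average of the peer grades."""
    weights = np.asarray(grader_similarities, dtype=float)
    return float(np.dot(np.asarray(peer_grades, dtype=float), weights)
                 / weights.sum())

# Three peers graded the same assignment; each peer's grading ability is
# calibrated against the teacher on assignments they graded in common.
teacher = [82, 74]
sims = [grading_similarity([80, 75], teacher),  # close to the teacher
        grading_similarity([60, 95], teacher),  # noisy grader
        grading_similarity([78, 70], teacher)]
print(aggregate_peer_grades([85, 60, 80], sims))  # weighted final score

In this toy example the noisy grader's score of 60 receives a lower weight than the other two peer grades, so the aggregated score stays close to the grades of the peers who agree with the teacher.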
