Abstract

Among alternative assessment types, peer assessment helps students develop metacognition. Despite this advantage, reliability and validity issues remain the most significant problems in alternative assessment. This study investigated rater drift, one of the rater effects, in peer assessment. Eight oral group-work presentations in a Science and Technology course were scored by 7th-grade students (N=28) using a rubric developed by the researchers. The presentations were held over four days, with two presentations per day. The Many-Facet Rasch Measurement (MFRM) model was used to examine time-dependent drift in rater severity. Two indexes (the rater-by-time interaction term and standardized differences) were calculated with MFRM to identify raters exhibiting drift, both individually and as a group. The analysis compared scores on subsequent days against the first day's scores. The two methods yielded similar results: some individual raters tended to become more severe or more lenient over time, but the absence of significant rater drift at the group level indicated that the drifts followed no systematic pattern.
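As a rough illustration of the standardized-difference index mentioned above, the sketch below compares a rater's severity estimate on a later day against day one. The rater labels, severity values, and standard errors are entirely hypothetical (they are not the study's data), and the formula shown is the usual z-statistic for comparing two Rasch measures, not necessarily the exact variant the authors used.

```python
import math

# Hypothetical rater severity estimates (in logits) from a many-facet Rasch
# analysis, with their standard errors. Values are illustrative only.
severity = {
    "R01": {"day1": (-0.40, 0.21), "day4": (0.35, 0.23)},  # (estimate, SE)
    "R02": {"day1": (0.10, 0.20), "day4": (0.05, 0.22)},
    "R03": {"day1": (0.55, 0.24), "day4": (-0.48, 0.25)},
}

def standardized_difference(b1, se1, b2, se2):
    """z-statistic for the change in a rater's severity between two occasions."""
    return (b2 - b1) / math.sqrt(se1 ** 2 + se2 ** 2)

for rater, days in severity.items():
    (b1, se1), (b2, se2) = days["day1"], days["day4"]
    z = standardized_difference(b1, se1, b2, se2)
    # |z| > 2 is a common rule of thumb for flagging significant drift.
    flag = "possible drift" if abs(z) > 2.0 else "stable"
    print(f"{rater}: z = {z:+.2f} ({flag})")
```

A positive z here means the rater became more severe over time, a negative z more lenient; raters whose |z| stays below the cutoff would count toward the "no group-level drift" finding.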
