Abstract

Peer and self-assessment open opportunities to scale assessment in online classrooms. This article reports our experience of using AsPeer, a peer assessment system, across two iterations of a university online class. We observed that peer grades correlated highly with staff-assigned grades: on average, 21.0% of peer grades fell within the same range as the instructor's grade, and a further 49.0% fell within the next two ranges. We performed three experiments to improve the accuracy of peer grading. First, we observed grading bias and introduced a data-driven feedback mechanism to inform peers of it; students aided by this feedback graded more mindfully and with better accuracy. Second, we observed that the rubric was inefficient at conveying our intent to students; simplified guiding questions improved assessment accuracy for 89% of students. Third, we encouraged peers to provide personalized qualitative feedback along with their ratings, supplying them with feedback snippets that addressed common issues. 64% of students responded that the snippets helped them look critically at submissions before rating.
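One way a data-driven bias signal like the one described above could be computed is to compare each grader's scores against the consensus on the same submissions. The sketch below is purely illustrative and is not the paper's actual algorithm: it estimates a grader's bias as the mean signed deviation of their scores from the median peer score per submission, a quantity that could then be reported back to the grader.

```python
from statistics import median, mean

def grader_bias(peer_grades):
    """Estimate per-grader bias (illustrative sketch, not AsPeer's method).

    peer_grades: {submission_id: {grader_id: score}}
    Returns: {grader_id: mean signed deviation from the per-submission median}
    """
    deviations = {}
    for scores in peer_grades.values():
        consensus = median(scores.values())  # median peer score as consensus
        for grader, score in scores.items():
            deviations.setdefault(grader, []).append(score - consensus)
    return {g: mean(ds) for g, ds in deviations.items()}

# Hypothetical data: grader "a" consistently scores high, "b" low.
grades = {
    "s1": {"a": 8, "b": 6, "c": 7},
    "s2": {"a": 9, "b": 7, "c": 8},
}
print(grader_bias(grades))  # → {'a': 1.0, 'b': -1.0, 'c': 0.0}
```

A positive value suggests a grader tends to score above their peers, a negative value below; feedback built on such a statistic is one plausible reading of the "data-driven feedback mechanism" the abstract mentions.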
