Abstract

The peer assessment process is a complex learning task that allows students both to evaluate their peers' work and to receive feedback on their own. This method is used in massive open online courses (MOOCs) to incorporate open-ended assessments at scale, giving each learner feedback on their work without the high labor costs of instructor grading. Peer assessment can also help build a learning community in MOOCs, where learners interact with each other's work and learning experiences. Critics of this assessment type argue that peers are unqualified to grade or provide high-quality feedback to one another, particularly in MOOCs, where the barrier to entry is low and prerequisites are rarely enforced. Others argue that if the peer assessment process is well designed, the overall learning experience can be worthwhile. This study examines the impact of anonymity and transparency of feedback in peer assessment in MOOCs. It compares two course runs each of two independent MOOCs hosted on edX.org, which used an anonymous peer assessment tool in the first run and an open, transparent peer assessment tool in the second run. Specifically, the study compared the quality of student work, the correspondence of peer grades to instructor grades, and the quality of qualitative peer feedback. The study found that non-anonymous, transparent peer assessment produced higher-quality feedback than the anonymous tool, though it did not affect the quality of work or the consistency of peer grades.
