Abstract

Peer grading offers a scalable and sustainable way of providing assessment and feedback to a massive student population, and it has been used in massive open online courses (MOOCs) on the Coursera platform. Currently, however, there is little empirical evidence to support the credentials of peer grading as a learning assessment method in the MOOC context. To address this need, this study examined 1825 peer grading assignments collected from a Coursera MOOC in order to investigate the reliability and validity of peer grading as well as its perceived effects on students’ MOOC learning experience. The empirical findings showed that aggregating the ratings of multiple student graders yields peer grading scores that were fairly consistent and highly similar to the instructor’s grading scores. Student responses to a survey also showed that the peer grading activity was well received: the majority of MOOC students believed it was fair, useful, and beneficial, and would recommend including it in future MOOC offerings. Based on these empirical results, the study concludes with a set of principles for designing and implementing peer grading activities in the MOOC context.

Highlights

  • The recent development of Massive Open Online Courses (MOOCs) has given instructors exciting opportunities to teach a massive and diverse student population through learning platforms such as Coursera, edX, and Udacity.

  • It is not surprising that the main source of grading error is individual student graders rather than the grading criteria: MOOC students come from very different backgrounds, vary greatly in the knowledge and skills needed to evaluate peers’ work accurately, and typically receive no training in grading.

  • These results suggest that, in general, the joint efforts of multiple student graders can produce fairly consistent grading results through Coursera’s peer review system (a toy illustration follows this list).

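The consistency claim above can be made concrete with a small numerical illustration. The sketch below is not the authors’ analysis; all scores are invented, and it simply shows how the median of several peer ratings per assignment can be correlated with an instructor’s scores as a rough validity check.

```python
# A minimal sketch (not the authors' analysis) of aggregating peer
# ratings and checking them against instructor scores. All data here
# are invented for illustration.
from math import sqrt
from statistics import mean, median

# Hypothetical data: several peer ratings per assignment, plus one
# instructor score on the same scale.
peer_scores = {
    "a1": [8, 9, 7, 8],
    "a2": [5, 6, 6, 4],
    "a3": [9, 10, 9, 9],
}
instructor_scores = {"a1": 8, "a2": 5, "a3": 9}

# Aggregate each assignment's peer ratings with the median, which damps
# the effect of a single overly lenient or harsh grader.
aggregated = {a: median(s) for a, s in peer_scores.items()}

# Pearson correlation between aggregated peer scores and instructor
# scores, as a rough validity check.
xs = [aggregated[a] for a in instructor_scores]
ys = list(instructor_scores.values())
mx, my = mean(xs), mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
var_x = sum((x - mx) ** 2 for x in xs)
var_y = sum((y - my) ** 2 for y in ys)
print(f"Pearson r: {cov / sqrt(var_x * var_y):.2f}")
```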

Introduction

The recent development of Massive Open Online Courses (MOOCs) has given instructors exciting opportunities to teach a massive and diverse student population through learning platforms such as Coursera, edX, and Udacity. One major challenge, however, is providing MOOC students with timely, accurate, and meaningful assessment of their course assignments, since enrollment in a MOOC can reach hundreds of thousands of students (Pappano, 2012; Piech et al., 2013), far exceeding the grading capacity of a single instructor or teaching assistant. In an attempt to solve this assessment problem, Coursera has incorporated a peer review system into its learning platform that guides students in using grading rubrics to evaluate and provide feedback on each other’s work. Findings from this study provide empirical evidence on the reliability, validity, and perceived effects of MOOC-scale peer grading.
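To make the rubric-driven workflow concrete, here is a minimal, hypothetical sketch of how per-criterion peer ratings might be aggregated into a final score. The paper excerpt does not state Coursera’s actual aggregation rule, so the median-per-criterion scheme, the rubric names, and the scores below are all assumptions for illustration.

```python
# A hedged sketch of rubric-based peer grade aggregation. The paper
# excerpt does not state Coursera's actual aggregation rule; the
# median-per-criterion scheme and the rubric below are assumptions.
from statistics import median

# Hypothetical rubric: each grader rates each criterion on a 0-3 scale.
CRITERIA = ["thesis", "evidence", "organization"]
grader_ratings = [
    {"thesis": 3, "evidence": 2, "organization": 3},
    {"thesis": 2, "evidence": 2, "organization": 3},
    {"thesis": 3, "evidence": 1, "organization": 2},
]

def aggregate(ratings, criteria):
    """Take the median rating per criterion, then sum for a final score."""
    per_criterion = {c: median(r[c] for r in ratings) for c in criteria}
    return per_criterion, sum(per_criterion.values())

per_criterion, total = aggregate(grader_ratings, CRITERIA)
print(per_criterion)  # {'thesis': 3, 'evidence': 2, 'organization': 3}
print(total)          # 8
```

A per-criterion median keeps one grader’s misreading of a single rubric item from dominating the final score, which is consistent with the paper’s observation that individual graders, not the criteria, are the main source of error.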
