Abstract

As an innovative evaluation tool, peer assessment is essential in Massive Open Online Courses (MOOCs). In both formative and summative peer assessment in MOOCs, reliable feedback is crucial for enhancing learning outcomes. A wealth of studies has highlighted peer assessment as a reliable tool in traditional classrooms and small-scale online learning contexts, and factors influencing its results have been identified. However, such exploration in MOOCs is very limited. Through an examination of 5,722 assignment submissions and 56,794 review scores collected from 18 peer assessments in three courses on the Chinese University MOOC platform, this study investigated the inter-rater reliability of peer assessment in MOOCs using Krippendorff's alpha and ICC[1], and identified factors affecting that reliability. The results show that students tended to give extreme scores and that peer assessment in these MOOCs appeared unreliable. Moreover, reliability was inversely correlated with the number of reviewers per assignment and the number of reviews completed per reviewer. Assignment type also mattered: peer assessment of e-portfolios was found to be more reliable than peer assessment of papers and proposals. Suggestions for improving the reliability of peer assessment in MOOCs are also provided.

Practitioner notes

What is already known about this topic
- In both formative and summative peer assessment modes, reliable feedback is needed and is of great value for improving the quality of online learning.
- The reliability of peer assessment, and the factors influencing it, have been highlighted by a wealth of studies in traditional classrooms and small-scale online learning contexts.
- Explorations in MOOCs are still limited.

What this paper adds
- Students tended to give extreme scores in peer assessment in MOOCs, and peer assessment in MOOCs may not be particularly reliable.
- The type of assignment being rated affected reliability: peer assessment of e-portfolios was found to be more reliable than peer assessment of papers and proposals.
- The reliability of peer assessment in MOOCs was negatively related to the number of reviewers per assignment and the number of reviews completed per reviewer.

Implications for practice and/or policy
- Avoid using peer assessment as a summative assessment method, or assign it a relatively low weight in final grades.
- When papers and proposals are used as assignments, the proportion of peer assessment in the final grade can be reduced accordingly.
- Training on how to identify medium-quality work should be provided.
- Instructional strategies should be designed to encourage learners to participate actively in peer assessment and to assess thoroughly.
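The inter-rater reliability measure ICC[1] mentioned above is the one-way random-effects intraclass correlation: each assignment is rated by a set of reviewers, and reliability is computed from the between-assignment and within-assignment mean squares. As a minimal sketch (not the authors' code, and assuming a simplified design in which every assignment has the same number of reviewers), it can be computed as follows:

```python
import numpy as np

def icc1(ratings):
    """One-way random-effects ICC(1).

    ratings: 2-D array-like, rows = targets (e.g., assignment
    submissions), columns = ratings by peer reviewers. Assumes a
    balanced design: every target has the same number of ratings.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape  # n targets, k ratings per target
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-target mean square: variability of per-target means
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    # Within-target mean square: disagreement among raters on a target
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfect agreement between two reviewers yields ICC(1) = 1
print(icc1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values near 1 indicate that reviewers largely agree on each submission; values near 0 (or negative) indicate that disagreement among reviewers swamps the differences between submissions, which is the pattern the study reports for extreme-scoring MOOC reviewers.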
