Highlights

  • A decade ago, a large meta-analysis revealed that peer marks tend to agree well with teacher marks, particularly when a global judgment was made and when it was based on well-understood assessment criteria [3]

  • For example, it has been reported that even when a scoring rubric is co-created with students, teachers cannot expect students to know how to apply it independently

  • For example, it has been remarked, regarding the use of scoring rubrics in peer assessment [2], that students are not always good at peer- and self-assessment at first, even with a rubric in hand



Introduction

A decade ago, a large meta-analysis revealed that peer marks tend to agree well with teacher marks, particularly when a global judgment was made and when it was based on well-understood assessment criteria [3]. Others have pointed out that discrepancies between student and teacher understanding of the assessment criteria give rise to inconsistencies when their marks are compared, and that a lack of sufficiently broad background knowledge (expertise) on the topic being assessed, for example not having read the same references as the peers being assessed, can be a perceived challenge for peer markers [4].
