Abstract
Implementing backward evaluation as part of the peer assessment process enables students to react to the feedback they receive on their work within a single peer assessment activity cycle. The emergence of online peer assessment platforms has brought new opportunities to study the peer assessment process, including backward evaluation, through the digital data that the use of these systems generates. This scoping review provides an overview of peer assessment studies that use backward evaluation data in their analyses, identifies different types of backward evaluation and describes how backward evaluation data have been used to increase understanding of peer assessment processes. The review contributes to a mapping of backward evaluation terminology and shows the potential of backward evaluation data to give new insights into students’ perceptions of what constitutes useful feedback, their reactions to the feedback received and the consequences for feedback implementation.
Highlights
Backward evaluation (BE) is defined as ‘the feedback that an author provides to a reviewer about the quality of the review’ (Luxton-Reilly, 2009, p. 226)
This paper offers a scoping review of backward evaluation (BE) in peer assessment (PA) research, focussing on study characteristics, BE characteristics and the use of BE data
We found relatively few empirical studies on PA that use BE, but those that do offer new insights into different aspects of the PA process
Summary
Backward evaluation (BE) (also called back-review or back-evaluation) is defined as ‘the feedback that an author provides to a reviewer about the quality of the review’ (Luxton-Reilly, 2009, p. 226). Common PA practice involves a student (author) developing an artefact that is then reviewed by a peer (reviewer), who gives feedback to the author. This feedback should be reflected on and can be used to improve the original artefact. Li, Xiong, Zang, Kornhaber, Lyu, Chung, and Suen (2016) found a moderately strong correlation between peer and teacher grades, whereas Falchikov and Goldfinch (2000) related higher validity of PA to the design of the PA activity: aspects such as clear criteria and more guidance led to higher agreement between teacher and student grading. In a series of three experiments, Hicks, Pandey, Fraser, and Klemmer (2016) showed how different kinds of questions in rubrics, the structure of a task or the way artefacts are presented led to different results in terms of feedback quality and the focus of the reviewer.