Abstract

The accuracy of peer review remains a concern for instructors implementing computer-supported peer review in their instructional practices. A large body of literature has descriptively documented overall levels of reliability and validity in peer review, as well as the factors across different peer review implementations that affect them (e.g., use of rubrics, education level, training). However, few studies have examined which factors within a peer review implementation contribute to the accuracy of individual reviews, and knowledge of these factors could shape new interventions to avoid or remediate errors in particular reviews. In the current study, we tested a three-level framework (reviewer, essay, and reviewing process) for predicting the location of peer review errors. We further examined which factors at each level predict two different types of review error: severity and leniency. Leveraging a large dataset from an Advanced Placement English Language and Composition course implementing a common assignment with web-based peer review across 10 high schools, we found support for all levels of the framework and for the importance of separating severity and leniency errors. Review comment length predicted both severe and lenient errors, but in opposite directions: longer comments were more likely to be associated with severe errors and less likely to be associated with lenient errors. Review disagreement, reviewer ability, and average sentence length of comments predicted severe errors, while essay quality predicted lenient errors. Implications for the development of new web-based tools for supporting peer review are discussed.
