Abstract

Papers in a large second-year science class were assessed using an anonymous peer review system modelled on that used for professional journals. Three students and one paid marker (outside reviewer) reviewed each paper, so each student received four reviews. A paid marker served as 'editor' and determined marks based on the four reviews, with reference to the paper as necessary. Students were asked to rank the four reviews for helpfulness and for completeness and accuracy. Consistency of reviews was analysed. On average, peer reviewers gave higher marks than paid markers, and on average students found peer reviewers more helpful but marginally less complete and accurate than paid markers. The differences among paid markers, however, were larger than the difference between the average peer reviewer and the average paid marker. The consistency among the four sets of marks was not impressive. Students responded to the range of reviews they received. It can be shown statistically that the expected range of four reviews is much greater than that of two reviews; the multiplicity of reviews therefore exacerbated a widespread perception that marks were arbitrary. The net outcome was a moral dilemma: giving the same paper to multiple assessors reveals the extent to which assessment rests on arbitrary factors. This may be good preparation for the real world, but it is not an exercise to be taken lightly, nor one recommended without prior preparation of the context.
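The claim about expected ranges follows from the order statistics of independent marks: for normally distributed marks with standard deviation sigma, the expected spread between the highest and lowest of four draws is about 2.06 sigma, versus about 1.13 sigma for two draws, so a student seeing four reviews sees nearly twice the spread on average. The following sketch is illustrative only and not part of the study; it assumes normally distributed marks and estimates both values by simulation.

    import random

    def expected_range(n_reviews, trials=100_000, sigma=1.0):
        """Average spread (max - min) of n_reviews marks drawn from N(0, sigma^2)."""
        total = 0.0
        for _ in range(trials):
            marks = [random.gauss(0.0, sigma) for _ in range(n_reviews)]
            total += max(marks) - min(marks)
        return total / trials

    print(expected_range(2))  # roughly 1.13 * sigma
    print(expected_range(4))  # roughly 2.06 * sigma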
