Abstract

Peer review is the generally accepted mode of quality assessment in scholarly communities; however, it is rarely used for evaluation at the college level. Over a period of 5 years, we conducted a peer review simulation in a senior-level molecular genetics course at the University of Guelph and accumulated 393 student peer reviews. We used these to generate a summary of the metrics of this exercise. Our calculations show that student peer marks are highly variable and not suitable for numerical performance evaluation at the university level. On the other hand, student peer reviews can clearly recognize substandard performance. Hence, peer review can be used for the assessment of "pass/fail" type assignments. Interestingly, student peers struggle to distinguish between good and excellent performance. These findings provide provocative insight into the process of peer review in general. We comment on the implications of this in-class simulation for research communities and on potential pitfalls of peer review.
