Abstract

Peer evaluations are commonly used in design courses for both developmental and evaluative purposes. However, peer ratings are often higher than instructor ratings, which can raise concerns about their reliability. This study examined whether the agreement between peer and instructor ratings could be increased by raising the frequency of peer assessments. The premise was that peers might provide less lenient assessments if the impact of any single evaluation on the final grade were reduced by increasing the number of evaluations. Increasing the number of peer evaluations in our senior design course from two to six per year did not improve the accuracy of the peer ratings, but it provided other benefits, such as earlier identification of dysfunctional teams, elimination of free riding, and more frequent developmental feedback. Peer and instructor ratings can, however, be normalized to yield similar indicators of the relative performance of teammates. The frequency and timing of peer evaluations are critical to obtaining meaningful results and maximizing the impact on team dynamics.

