Abstract

Inter-rater agreement in a peer performance evaluation system was analyzed using a sample of 44 individuals who rated focal persons across seven teams. Objective information about individual performance on multiple-choice tests, together with information gleaned from individual contributions to team testing and graded team exercises, yielded high inter-rater reliabilities (assessed via intraclass correlation coefficients, ICCs) and strong criterion-related validity for the performance evaluation instrument. The discussion centers on the effect of providing objective job performance information to evaluation participants.
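The abstract does not specify which ICC form was used. As a minimal illustration only, the sketch below computes two common two-way random-effects forms, ICC(2,1) and ICC(2,k) in the Shrout and Fleiss (1979) notation, assuming a fully crossed design in which every rater rates every focal person; the function name and the toy data are hypothetical, and the study's seven-team rating structure may call for a different model.

```python
import numpy as np

def icc_two_way(ratings: np.ndarray) -> tuple[float, float]:
    """ICC(2,1) and ICC(2,k) (Shrout & Fleiss, 1979) for an
    n_targets x k_raters matrix from a fully crossed design.
    (Hypothetical helper; assumes no missing ratings.)"""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target (focal person) means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA decomposition of the rating matrix
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between targets
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    # Reliability of a single rater's scores
    icc_2_1 = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    # Reliability of the mean across all k raters
    icc_2_k = (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)
    return icc_2_1, icc_2_k

# Hypothetical data: 6 focal persons each rated by 4 raters
rng = np.random.default_rng(0)
true_perf = rng.normal(size=(6, 1))
ratings = true_perf + rng.normal(scale=0.3, size=(6, 4))
print(icc_two_way(ratings))
```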
