Abstract
Peer and self-evaluations were conducted on interdependent, multidisciplinary, self-directed work teams. Teams worked on two projects during the semester, with membership changing between the two projects. Generalizability theory was used to partition the peer ratings into three variance components: the rater effect, the ratee effect, and the variance unaccounted for by the rater and ratee effects (the rater-by-ratee interaction). The correspondence between the ratee effect and criterion measures of effort applied to the task and technical knowledge applied to the task served as a measure of peer-evaluation validity. Self-evaluations were compared to the same criterion measures. The Fisher's average correlation coefficient between the criterion measure and the ratee effect from the peer evaluations was 0.69. In contrast, the Fisher's average correlation coefficient between the criterion measure and the self-evaluations was 0.25. Peer evaluations had greater validity than self-evaluations in the context of interdependent teamwork.
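The "Fisher's average correlation coefficient" reported above is the standard Fisher z-transform average: each correlation r is converted to z = arctanh(r), the z values are averaged, and the mean is back-transformed with tanh. A minimal sketch (the input correlations below are hypothetical illustrations, not values from the study):

```python
import math

def fisher_average(correlations):
    """Average correlation coefficients via Fisher's z-transform.

    Each r is mapped to z = arctanh(r), the z values are averaged,
    and the mean z is back-transformed with tanh.
    """
    zs = [math.atanh(r) for r in correlations]
    return math.tanh(sum(zs) / len(zs))

# Hypothetical per-project validity correlations
avg_r = fisher_average([0.65, 0.73])
print(round(avg_r, 2))
```

Averaging in z-space rather than averaging the raw r values is preferred because the z-transform approximately normalizes the sampling distribution of r and removes the bound-induced skew near ±1.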