Abstract

The extent of leniency error, the extent of halo effect, and the accuracy of ratings on behavioral expectation scales were compared across three groups of student raters (N = 72). One group of raters (RET) underwent a training program on rating error, involving definitions, graphic illustrations, and numerical examples of leniency and halo. A second group (RAT) heard lectures on the multidimensionality of teacher performance, generated and defined dimensions of performance, discussed behavioral examples of each dimension, and attempted to develop stereotypes of effective and ineffective performance. The third group received no training. Following the training, two hypothetical ratees, described in written vignettes, were each rated on 13 dimensions of performance. Ratings from the RET group had significantly less leniency and halo error than ratings from the RAT group and the control group. However, when ratings were compared to previously developed true scores on the ratees, significantly less accuracy was found for the RET group than for the other two groups. No significant differences were found between the RAT group and the control group ratings. Results are discussed in terms of the facilitation of a response set in raters trained by defining and illustrating errors through rating distributions. The response set results in lower mean ratings (less leniency) and lower scale intercorrelations (less halo) while lowering the accuracy of ratings. Suggestions for the improvement of training programs are made.
