Abstract

Principles of skill acquisition dictate that raters should receive frequent feedback about their ratings. However, in current operational practice, raters rarely receive immediate feedback about their scores, owing to the prohibitive effort required to generate it. An approach for generating and administering feedback responses to raters is proposed. It consists of automatically designating some responses as feedback responses, sourcing scores and elaborations for these responses from a group of raters as part of regular scoring, and, finally, administering the same responses to all other raters with immediate feedback based on a summary of the available scores and elaborations. This approach allows raters to receive frequent immediate feedback in a sustainable way. In two experimental studies, the effect of frequent immediate feedback (on approximately 25% of responses) on the rating accuracy of newly trained raters was investigated. A control condition of no feedback was compared with two types of feedback with elaboration: text explanations of the correct score and a structured form identifying the strengths and weaknesses of the response. Results indicate that feedback improved rater accuracy and that structured feedback was at least as beneficial as text explanations.
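The proposed workflow can be sketched in simplified form: designate a fraction of responses as feedback responses, collect scores for each from a seed group of raters, and summarize those scores so they can be shown back to all other raters. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; the 25% fraction comes from the abstract, while the function names, the use of the modal score as the summary, and the sample data are hypothetical.

```python
import random
from collections import Counter

def designate_feedback_responses(response_ids, fraction=0.25, seed=0):
    """Randomly mark a fraction of responses as feedback responses.

    Assumption: random designation; the paper says only that designation
    is automatic, not how it is done.
    """
    rng = random.Random(seed)
    k = max(1, round(len(response_ids) * fraction))
    return set(rng.sample(response_ids, k))

def summarize_scores(scores):
    """Summarize seed raters' scores as the modal (most frequent) score.

    Assumption: a mode-based summary; the paper describes only "a summary
    of the available scores and elaborations".
    """
    return Counter(scores).most_common(1)[0][0]

# Hypothetical pool of 20 responses; ~25% become feedback responses.
responses = list(range(1, 21))
feedback_ids = designate_feedback_responses(responses)

# Hypothetical scores from five seed raters for one feedback response.
seed_scores = [3, 3, 4, 3, 2]
consensus = summarize_scores(seed_scores)  # modal score shown as feedback
```

In use, whenever any other rater scores a response whose ID is in `feedback_ids`, the system would immediately display `consensus` (and the associated elaborations) alongside the rater's own score.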

