Abstract

This paper addresses the practical use of accuracy indices based on the squared difference between participant scores and true scores (the D² index). It clarifies ambiguity in the literature regarding the use of these indices to evaluate the scoring accuracy of human raters (evaluators). The paper critically investigates the effect of frame‐of‐reference (FOR) training on improving the accuracy of third‐party evaluators’ scores for organisations, such as those going through the Malcolm Baldrige National Quality Award (MBNQA) self‐assessment exercise. It discusses a case study in which 90 participants took part. Participants’ scores were recorded both before any training and after they received FOR training. The study showed that FOR training improved the elevation accuracy index (p < 0.05) in five of the seven categories used in the exercise, and an observed leniency effect was also reduced. However, no improvement in differential accuracy (DA) was observed. Thus, evaluators’ ability to assign an accurate overall score improved, while their ability to discriminate between relative strengths and weaknesses did not. This implies that evaluator training, particularly for heterogeneous pools of volunteers such as those serving corporate and state and local quality awards, should include more content on the performance dimensions.
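To make the squared‐difference idea concrete, the sketch below shows one plausible way to compute a D²-style index and its elevation component for a single rater. The array layout (organisations × categories), the function name, and the toy data are illustrative assumptions and not the paper's actual procedure; the remaining components of a Cronbach-style decomposition (differential elevation, stereotype accuracy, differential accuracy) are omitted for brevity.

```python
import numpy as np

def d2_accuracy(ratings: np.ndarray, true_scores: np.ndarray) -> dict:
    """Sketch of squared-difference accuracy indices for one rater.

    ratings, true_scores: 2-D arrays of shape (targets, dimensions),
    e.g. organisations x MBNQA categories. The layout is an assumption
    made for illustration only.
    """
    diff = ratings - true_scores

    # Overall D2: mean squared deviation of the rater's scores
    # from the true scores across all targets and dimensions.
    d2 = np.mean(diff ** 2)

    # Elevation component: squared difference between the rater's
    # grand mean and the grand mean of the true scores, capturing
    # overall leniency or severity.
    elevation = (ratings.mean() - true_scores.mean()) ** 2

    return {"D2": float(d2), "elevation": float(elevation)}

# Toy usage: 3 organisations scored on 7 categories by a lenient, noisy rater.
rng = np.random.default_rng(0)
true_scores = rng.uniform(40, 90, size=(3, 7))
ratings = true_scores + rng.normal(5, 10, size=(3, 7))
print(d2_accuracy(ratings, true_scores))
```

Under this reading, a training intervention that reduces leniency would shrink the elevation term even if the rater's ranking of relative strengths and weaknesses (the discrimination captured by DA) is unchanged.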
