Abstract

Despite persistent concerns about the quality of performance information obtained from multisource performance ratings (MSPRs), little research has sought ways to improve the psychometric properties of MSPRs. Borrowing from past methodologies designed to improve performance ratings, we present a new method of presenting items in MSPRs, frame‐of‐reference scales (FORS), and test the efficacy of this method in a field study and a laboratory study. The field study used confirmatory factor analysis to compare FORS to traditional rating scales and revealed that FORS are associated with increased variance due to dimensions, decreased overlap among dimensions, and decreased error. The laboratory study compared the rating accuracy associated with FORS to that of frame‐of‐reference training (FORT) and a control group, and demonstrated that FORS yield higher levels of accuracy than the control group and levels of accuracy similar to FORT. Implications for the design and implementation of FORS are discussed.
