Abstract

Recent Monte Carlo research has shown that the traditional method for assessing the construct-related validity of assessment center (AC) post-exercise dimension ratings (PEDRs), an application of confirmatory factor analysis (CFA) to a multitrait–multimethod matrix, produces inconsistent results [Lance, C. E., Woehr, D. J., & Meade, A. W. (2007). Case study: A Monte Carlo investigation of assessment center construct validity models. Organizational Research Methods, 10, 430–448]. To avoid this shortcoming, a variance partitioning procedure was applied to the PEDRs of 193 individuals. Overall, results indicated that the person, dimension, and person by dimension interaction effects together accounted for approximately 32% of the total variance in AC ratings. However, despite no apparent exercise main effect, the person by exercise interaction accounted for approximately 28% of the total variance. Although these results are drawn from a single AC, they nevertheless provide general support for the overall functioning of ACs and encourage continued application of variance partitioning approaches to AC research. Implications for AC design and research are discussed.
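The variance partitioning idea can be illustrated with a small numerical sketch. The code below is not the study's actual analysis and uses simulated ratings with an assumed effect structure (a hypothetical fully crossed person × dimension × exercise design); it simply computes sums of squares for each source and expresses them as proportions of total variance, the kind of decomposition the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PEDR data: 193 persons rated on 4 dimensions in 5 exercises.
# Effect sizes below are illustrative assumptions, not the study's estimates.
n_p, n_d, n_e = 193, 4, 5
person = rng.normal(0, 1.0, (n_p, 1, 1))      # person main effect
pxe = rng.normal(0, 1.0, (n_p, 1, n_e))       # person x exercise interaction
noise = rng.normal(0, 1.0, (n_p, n_d, n_e))   # residual
y = person + pxe + noise

grand = y.mean()
total_ss = ((y - grand) ** 2).sum()

def main_effect_ss(axis_keep):
    # SS for one main effect: squared deviations of its marginal means
    # from the grand mean, scaled by the number of observations per level.
    axes = tuple(a for a in range(3) if a != axis_keep)
    marg = y.mean(axis=axes)
    n_cells = y.size // marg.size
    return n_cells * ((marg - grand) ** 2).sum()

ss_person = main_effect_ss(0)
ss_dim = main_effect_ss(1)
ss_ex = main_effect_ss(2)

# Person x exercise interaction SS: cell means minus both marginal
# means plus the grand mean, scaled by the number of dimensions.
m_pe = y.mean(axis=1)                  # (n_p, n_e) person-by-exercise means
m_p = y.mean(axis=(1, 2))[:, None]     # person marginal means
m_e = y.mean(axis=(0, 1))[None, :]     # exercise marginal means
ss_pxe = n_d * ((m_pe - m_p - m_e + grand) ** 2).sum()

for name, ss in [("person", ss_person), ("dimension", ss_dim),
                 ("exercise", ss_ex), ("person x exercise", ss_pxe)]:
    print(f"{name}: {ss / total_ss:.1%} of total variance")
```

In a real AC application, each proportion would be interpreted as in the abstract: a large person × exercise share alongside a negligible exercise main effect indicates that candidates' relative standing shifts across exercises even though exercises do not differ in overall rating level.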
