Abstract

Ongoing concern about the construct validity of assessment center dimensions has focused on postexercise dimension ratings (PEDRs), which are consistently found to reflect exercise variance to a greater degree than dimension variance. Here, we present a solution to this problem. Based on the argument that PEDRs are an intermediate step toward an overall dimension rating, and that the overall dimension rating should be the focus of inquiry, we demonstrate that correlated sources of dimension variance accumulate and increasingly displace uncorrelated sources of both systematic variance and error. Viewing overall dimension ratings as a composite of PEDRs, we show that dimension variance typically overtakes exercise-specific variance quickly as ratings from multiple exercises are combined. We embed our results in a new framework for categorizing different levels of construct variance dominance, and our results indicate that with as few as two exercises, dimension variance can reach our lowest level of construct variance dominance. However, the largest source of dimension variance is a general factor. We conclude that the construct validity problem in assessment centers never existed as historically framed, but the presence of a general factor may limit interpretation for developmental purposes.
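The variance-accumulation argument can be sketched numerically. Under the simplifying assumption that dimension variance is perfectly correlated across exercises while exercise-specific variance and error are uncorrelated, the variance of the mean of k PEDRs is d + (e + u)/k, so the dimension share of composite variance grows with k. The component values below are hypothetical illustrations, not the paper's estimates.

```python
# Illustrative sketch (hypothetical values, not the paper's estimates):
# proportion of composite variance attributable to the dimension when
# k postexercise dimension ratings (PEDRs) are averaged.
# Assumed variance components of a standardized PEDR:
#   d = dimension variance (perfectly correlated across exercises)
#   e = exercise-specific variance (uncorrelated across exercises)
#   u = error/uniqueness (uncorrelated across exercises)
# Correlated variance survives averaging intact; uncorrelated variance
# shrinks by 1/k, so the composite variance is d + (e + u) / k.

def dimension_share(d: float, e: float, u: float, k: int) -> float:
    """Dimension proportion of the variance of a k-exercise composite."""
    return d / (d + (e + u) / k)

def exercise_share(d: float, e: float, u: float, k: int) -> float:
    """Exercise-specific proportion of the composite variance."""
    return (e / k) / (d + (e + u) / k)

# Hypothetical components: exercise variance exceeds dimension variance
# at the single-PEDR level (k = 1), mirroring the classic finding.
d, e, u = 0.30, 0.45, 0.25
for k in (1, 2, 4, 6):
    print(k, round(dimension_share(d, e, u, k), 3),
          round(exercise_share(d, e, u, k), 3))
```

With these assumed values, dimension variance already exceeds exercise-specific variance at k = 2, consistent with the abstract's claim that dominance can emerge with as few as two exercises.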
