Abstract

Assessment Centers (ACs) are a diagnostic tool that serves as a basis for decisions in personnel selection and employee development. In view of the far-reaching consequences that AC ratings can have, it is important that these ratings are accurate. Therefore, we need to understand what AC ratings measure and how the measurement of dimensions, that is, construct-related validity, can be improved. The aims of this thesis are to contribute to the understanding of the construct-related validity of ACs and to provide practical guidance in this regard. Three studies were conducted, each offering a different perspective on rating accuracy and AC construct-related validity. The first study investigated whether increasing assessor team size can compensate for missing assessor expertise (i.e., assessor training and assessor background), and vice versa, to improve rating accuracy. On the basis of dimension ratings from a laboratory setting (N = 383), we simulated assessor teams of different sizes. Of the factors considered, assessor training was most effective in improving rating accuracy, and it could only partly be compensated for by increasing assessor team size. In contrast, increasing assessor team size could compensate for missing expertise related to assessor background. The second study examined the effects of exercise similarity on AC construct-related and criterion-related validity simultaneously. Data from a simulated graduate AC (N = 92) revealed that exercise similarity was beneficial for construct-related validity but did not affect criterion-related validity. These results indicate that improvements in one aspect of validity are not always paralleled by improvements in the other. The third study examined whether relating AC overall dimension ratings to external evaluations of the same dimensions can provide evidence for the construct-related validity of ACs. Confirmatory factor analyses of data from three independent samples (Ns = 428, 121, and 92) yielded source factors but no dimension factors in the latent factor structure of AC overall dimension ratings and external dimension ratings. This means that different sources provide different perspectives on candidates’ performance, and that neither AC overall dimension ratings nor external dimension ratings can be attributed to the purported dimensions. Taken as a whole, this thesis examined AC construct-related validity from different angles. The reported findings contribute to the understanding of rating accuracy and the construct-related validity of ACs.

