ABSTRACT

Although Assessment Center (AC) role-play assessments have received ample attention in past research, the extent to which they rely on actual behavioral information remains unclear. Uncovering the behavioral basis of AC role-play assessments is, however, a prerequisite for optimizing existing automated AC procedures and developing novel ones. This work provides a first data-driven benchmark for the behavioral prediction and explanation of AC performance judgments. We used machine learning models trained on behavioral cues (C = 36) to predict performance judgments in three interpersonal AC exercises from a real-life high-stakes AC (selection of medical students, N = 199). Three main findings emerged. First, behavioral prediction models showed substantial predictive performance and outperformed prediction models representing potential judgment biases; comparisons with in-sample results revealed overfitting of traditional approaches, highlighting the importance of out-of-sample evaluation. Second, we demonstrated that linear combinations of behavioral cues can be strong predictors of assessors' judgments. Third, we identified exercise-specific patterns of individual cues, as well as cross-exercise consistent patterns of behavioral dimensions and interpersonal strategies, that were especially predictive of assessors' judgments. We discuss implications for future research and practice.
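The abstract's central methodological point, that in-sample fit overstates predictive accuracy relative to out-of-sample evaluation, and that linear combinations of cues can predict judgments well, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the data are simulated, and the choice of ridge regression, fold count, and variable names are hypothetical; the abstract does not specify the authors' exact models or pipeline.

    # Minimal sketch (not the authors' code): contrast in-sample fit with
    # out-of-sample performance for a linear model predicting assessor
    # judgments from behavioral cues. Shapes mirror the abstract
    # (N = 199 assessees, C = 36 cues); the data themselves are simulated.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    N, C = 199, 36
    X = rng.normal(size=(N, C))                      # behavioral cue ratings
    true_w = rng.normal(size=C)                      # hypothetical cue weights
    y = X @ true_w + rng.normal(scale=4.0, size=N)   # noisy performance judgments

    model = Ridge(alpha=1.0)

    # In-sample R^2: fit and evaluate on the same data (optimistic).
    in_sample_r2 = model.fit(X, y).score(X, y)

    # Out-of-sample R^2: 10-fold cross-validation (honest estimate).
    oos_r2 = cross_val_score(model, X, y, cv=10, scoring="r2").mean()

    print(f"in-sample R^2:     {in_sample_r2:.2f}")
    print(f"out-of-sample R^2: {oos_r2:.2f}")  # typically lower: overfitting

The gap between the two R-squared values is the overfitting the abstract refers to; reporting only the in-sample figure, as traditional approaches often do, would overstate how well behavioral cues predict judgments.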