Abstract

Recently, multiple, speeded assessments (e.g., "speeded" or "flash" role-plays) have made rapid inroads into the selection domain. So far, however, the conceptual underpinnings of and empirical evidence for these short, fast-paced assessment approaches have been lacking. This raises the question of whether such speeded assessments can serve as reliable and valid indicators of future performance. This article uses the notions of stimulus and response domain sampling to conceptualize multiple, speeded behavioral job simulations as a hybrid of established simulation-based selection methods. Next, we draw upon the thin slices of behavior paradigm to theorize about the quality of ratings made in multiple, speeded behavioral simulations. In two studies, various assessor pools assessed a sample of 96 MBA students in 18 3-min role-plays designed to capture situations from the junior management domain. At the level of the individual speeded role-play, reliability and validity were not ensured. Yet, when aggregated across all assessors' ratings of all speeded role-plays, the validity of the overall score for predicting future performance was high (.54). Validities remained high when assessors evaluated only the first minute (vs. the full 3 min) or received only a control training (vs. traditional assessor training). Aggregating ratings of performance in multiple, heterogeneous situations that elicit a variety of domain-relevant behavior emerged as a key requirement to obtain adequate domain coverage, capture both ability and personality (extraversion and agreeableness), and achieve substantial validities. Overall, these results underscore the importance of the stimulus and response domain sampling logic and send a strong warning against using "single" speeded behavioral simulations in practice.
