Many scenario-based assessments (e.g., interviews, assessment center exercises, work samples, simulations, and situational judgment tests) use prompts (i.e., cues provided to respondents to increase the likelihood that the information obtained from them is clear, sufficient, and job-related). However, a dilemma for practitioners and researchers is how general or specific these prompts should be. We posit that such differences in prompt specificity (i.e., the extent to which prompts cue performance criteria) have important implications for the predictive validity of scenario-based assessment scores. Drawing on the interplay of situation construal and situational strength theory, we propose that prompt specificity leads to differential relationships between scenario-based scores and external constructs (personality traits vs. knowledge), which in turn affect the predictive validity of scenario-based assessments. We tested this general hypothesis using intercultural scenarios for predicting effectiveness in multicultural teams. Using a randomized predictive validation design, we contrasted scores on these scenarios with general (N = 157) versus specific (N = 158) prompts. Overall, prompt specificity mattered: Lower prompt specificity augmented the role of perspective taking and openness to experience in the intercultural scenario scores and their validity for predicting intercultural performance, whereas higher prompt specificity increased the role of knowledge in these scores and their validity for predicting in-role performance. This study's theoretical and practical implications extend beyond a specific assessment procedure to a broad array of assessment and training approaches that rely on scenarios.