Abstract

In‐baskets are high‐fidelity simulations often used to predict performance in a variety of jobs, including law enforcement, clerical, and managerial occupations. They measure constructs not typically assessed by other simulations (e.g., administrative and managerial skills, and procedural and declarative job knowledge). We compiled the largest known database (k = 31; N = 3,958) to address the criterion‐related validity of in‐baskets and possible moderators. Moderators included features of the in‐basket itself: content (generic vs. job specific) and scoring approach (objective vs. subjective); and features of the validity studies: design (concurrent vs. predictive) and source (published vs. unpublished). Sensitivity analyses assessed how robust the results were to the influence of various biases. Results showed that the operational criterion‐related validity of in‐baskets was sufficiently high to justify their use in high‐stakes settings. Moderator analyses provided useful guidance for developers and users regarding content and scoring.