Abstract

Practitioners have embraced the use of Artificial Intelligence (AI) systems for employee recruitment and selection. However, studies examining applicant reactions to AI have been exclusively vignette-based, with no perceived outcome associated with the decision, and have not considered demographic differences in perceptions of AI evaluators. We employed an experimental design in which the type of evaluator (AI vs. human) and the selection decision (acceptance vs. rejection) were manipulated, and participants were led to believe they would receive different outcomes depending on the selection decision. The results showed more negative interactional justice perceptions for AI evaluators. Further, interaction analyses revealed that being rejected by AI had a negative impact on certain procedural and general justice perceptions. We compared Black and White applicants on these perceptions, finding that the negative impact of being rejected by AI on general justice perceptions was particularly strong for Black applicants. Theoretical and practical implications are discussed.
