Abstract

Practitioners have embraced the use of Artificial Intelligence (AI) systems for employee recruitment and selection. However, studies examining applicant reactions to AI have been exclusively vignette-based, with no perceived outcome associated with the decision, and have not considered demographic differences in perceptions of AI evaluators. We employed an experimental design in which the type of evaluator (AI vs. human) and the selection decision (acceptance vs. rejection) were manipulated, and participants were led to believe they would receive different outcomes based on the selection decision. The results showed more negative interactional justice perceptions for AI evaluators. Further, interaction analyses revealed that being rejected by AI had a negative impact on certain procedural and general justice perceptions. We compared Black and White applicants on these perceptions, finding that the negative impact of being rejected by AI on general justice perceptions was particularly strong for Black applicants. Theoretical and practical implications are discussed.
