Abstract

This study employed an experiment to test participants' perceptions of an artificial intelligence (AI) recruiter. It used a 2 (specialist AI/generalist AI) × 2 (sexist/nonsexist) design to test the relationship between these labels and the perception of moral violations. The theoretical framework integrated the Computers Are Social Actors (CASA) and Elaboration Likelihood Model (ELM) approaches. Participants (n = 233) responded to an online questionnaire after reading one of four scenarios involving an AI recruiter's evaluation of job candidates. Results indicated that the concept of "mindlessness" in CASA is situational, depending on whether the issue is processed through the central route or the peripheral route. Moreover, this study shows that CASA can explain the evaluation of machines from a third-person point of view. Participants also distinguished between their perception of the AI itself and their perception of its decisions. Furthermore, participants were more sensitive to the AI agent's sexism, which was more anthropomorphic and emotionally engaging, than to the AI agent's status as a specialist.
