Abstract

This research examines whether consumers ascribe racial stereotypes to artificially intelligent (AI; nonhuman) agents and whether these stereotypes affect ratings of satisfaction, perceptions of competence and humanness, and outcomes of negotiated transactions. Drawing on the stereotype content model, expectation violation theory, and the humanness-value-loyalty framework, we investigate how consumers apply racial stereotype judgments in interactions with artificially intelligent agents in a controlled negotiation experiment. Results reveal that although Black people, in general, are more likely to be stereotyped as less competent than Asian or White people, the opposite is true for Black AI bots. Furthermore, perceptions of the competence and humanness of Black AI bots exceed those of Asian and White AI bots, leading to higher ratings of overall satisfaction and some evidence of more favorable negotiation behaviors. Implications for AI applications in marketing are discussed.
