Abstract

The use of Artificial Intelligence (AI) has grown rapidly in the service industry, and AI’s emotional capabilities have become an important feature of customer interactions. The current research examines personal disclosures that occur during consumer interactions with AI and humans in service settings. We find that consumers’ lay beliefs about AI (i.e., a perceived lack of social judgment capability) lead to enhanced disclosure of sensitive personal information to AI (vs. humans). We identify boundaries for this effect such that consumers prefer disclosure to humans over AI in (i) contexts where social support (rather than social judgment) is expected and (ii) contexts where sensitive information will be curated by the agent for social dissemination. In addition, we reveal the underlying psychological processes: the motivation to avoid negative social judgment favors disclosing to AI, whereas seeking emotional support favors disclosing to humans. Moreover, we show that adding humanlike features to AI can increase consumers’ fear of social judgment (reducing disclosure in contexts of social risk) while simultaneously increasing the AI’s perceived capacity for empathy (increasing disclosure in contexts of social support). Taken together, these findings provide theoretical and practical insights into the tradeoffs between utilizing AI versus human agents in service contexts.
