Abstract

Artificial Intelligence is increasingly integrated into many aspects of human life. One prominent form of AI is the conversational agent (CA), such as Siri, Alexa, and the chatbots used for customer service on websites and other information systems. It is widely accepted that humans treat systems as social actors. Leveraging this bias, companies sometimes attempt to pass a CA off as a human customer service representative. Beyond the ethical and legal questions surrounding this practice, the benefits and drawbacks of a CA pretending to be human remain unclear due to a lack of study. While more human-like interactions can improve outcomes, users who discover that the CA is not human may react negatively, causing reputational harm to the company. In this research we use Expectation Violation Theory to explain what happens when users hold high or low expectations of a conversation. We conducted an experiment with 175 participants in which some participants were told they were interacting with a CA while others were told they were interacting with a human. We further divided these groups so that some participants interacted with a CA of low conversational capability while others interacted with a CA of high conversational capability. The results show that the expectations users form before the interaction change how they evaluate the CA, beyond the CA's actual performance. These findings provide guidance not only to developers of conversational agents, but also for other technologies where users may be uncertain of a system's capabilities.
