Abstract

Chatbots have been widely adopted to support online customer service and to supplement human agents. However, transmitting data online may raise privacy issues and arouse users’ privacy concerns. To understand how users manage privacy when interacting with chatbots versus human agents, we drew on Communication Privacy Management (CPM) theory to design a cross-national comparative study and conducted online experiments in China and the United States. The results show that privacy concerns and boundary linkage played different mediating roles between agent identity and both the intention to disclose and the intention to use the service. Information sensitivity significantly moderated this mechanism. Our research contributes to a better understanding of personal boundary management in the context of human-machine interaction.
