Abstract

Advances in machine learning and natural language processing have driven the growing popularity of virtual conversational agents (VCAs). This anthropomorphic mode of communication depends on users sharing information and on real-time feedback from VCAs, which has raised privacy concerns while reshaping various social interactions and relationships. Prior research on reducing users' privacy concerns has focused mainly on user information mining, requests for sensitive user information, and privacy policies; little is known about the anthropomorphic roles of partner and servant at the level of the human-machine social hierarchy. Drawing on human-computer interaction (service) anthropomorphism at the social level, this study therefore develops a framework to examine how information sensitivity and VCAs' anthropomorphic roles (partner versus servant) affect users' privacy concerns, as well as the mediating effects of competence- and integrity-based trust. The results show that when highly sensitive information is requested, users' privacy concerns are greater for a partner VCA than for a servant VCA, whereas the reverse holds when low-sensitivity information is requested. Moreover, integrity-based trust mediates the relationship between servant VCAs and privacy concerns when a VCA requests highly sensitive information, whereas competence-based trust mediates this relationship when a VCA requests low-sensitivity information. These insights offer actionable implications for managers.
