Abstract
This research, grounded in privacy calculus theory, examines how the anthropomorphization of AI agents affects consumers’ perceptions of the privacy risks associated with personalized ads, and explores strategies to reduce potential negative impacts. In Study 1, participants expressed concern that highly anthropomorphized chatbots might possess human-like autonomous intentions to misuse personal data, a phenomenon referred to as the ‘uncanny valley of mind’. In contrast, participants felt more secure, more in control, and less concerned about privacy when interacting with a mechanized, less human-like chatbot. To address this backfiring effect, Study 2 examined the role of algorithmic disclosure, in which companies provide transparent information about the underlying algorithms, data-handling procedures, and personalization criteria. This strategy effectively mitigated privacy concerns, preventing the negative effects associated with highly anthropomorphized AI chatbots. These findings offer valuable insights for marketers who use AI chatbots to craft effective, personalized messages based on social media data.