Abstract

Artificial intelligence assistants (AIAs) such as Alexa are prevalent in consumers' homes. Owing to their powerful artificial intelligence, consumers may perceive that AIAs have a mind of their own; that is, they anthropomorphize them. Past marketing research points to beneficial effects of AIA anthropomorphism for consumers and companies, while potential harmful effects have not been empirically explored. In examining both beneficial and harmful effects, this paper adopts a relationship perspective. Indeed, consumers spend large amounts of time with their AIAs, potentially developing a relationship over time that builds on an exchange of benefits and (psychological) costs. A preliminary survey and user interviews, a field study, and a field experiment with AIA users show that AIA anthropomorphism may threaten users' identity, which disempowers them, creates data privacy concerns, and ultimately undermines their well-being. These harmful effects emerge particularly in close, long relationships. The field experiment uncovers three empowering interventions that attenuate the harmful effects of AIA anthropomorphism in relationships with consumers. With AI-powered technologies taking larger roles in our daily lives, our research highlights key future directions for investigating the ongoing nature of consumer–AI relationships.
