Abstract

Despite the growing interest in artificial intelligence (AI) technology in the retail and service industry, consumer research on AI, especially virtual agents (VAs), remains underexplored. To fill this void, this study investigates how consumers build relationships with VAs through the lens of trust. Due to their unique characteristics (e.g., disembodied representation, interactive capabilities), VAs differ from other technologies in how trust is developed. Drawing on the "computers as social actors" (CASA) paradigm and the extended Technology Acceptance Model (TAM), we proposed and empirically tested a consumer-VA trust model in which trust serves as a second-order construct with three first-order dimensions (i.e., competence, integrity, and self-efficacy). In addition, the relationships among consumer-VA trust, consumer perceptions, and behavioral intention were examined. Using a survey with 192 usable responses, our research indicated that consumer-VA trust positively affects perceived usefulness and perceived enjoyment, which in turn increase consumers' intention to continue using VAs. This research offers theoretical implications for consumer adoption of VAs and practical implications for marketing strategies for this new technology.
