Abstract

The emotional complexity that users present to Artificial Intelligence (AI) virtual assistants is manifested mainly in user motivation and social emotion, but current research lacks an effective conversion path from emotion to acceptance. This paper approaches the problem from the perspective of trust, establishes an AI virtual assistant acceptance model, and conducts an empirical study based on survey data from 240 questionnaires, analyzing the data with multilevel regression analysis and the bootstrap method. The results show that functionality and social emotions have a significant effect on trust, that perceived humanity has an inverted U-shaped relationship with trust, and that trust mediates the relationships between both functionality and social emotions and acceptance. The findings explain users' emotional complexity toward AI virtual assistants and extend the transformation path of technology acceptance from the trust perspective, with implications for the development and design of AI applications.

Highlights

  • With the advancement of AI technology, there are increasing numbers of Artificial Intelligence (AI) applications, such as service robots, chatbots, and AI virtual assistants (Gummerus et al., 2019)

  • This paper develops an AI virtual assistant acceptance model based on the technology acceptance model and the service robot acceptance model

  • The AI virtual assistant (AVA) acceptance model extends the potential acceptance path of AVAs and advances the study of the acceptance transformation mechanism from the trust perspective


INTRODUCTION

With the advancement of AI technology, there are increasing numbers of Artificial Intelligence (AI) applications, such as service robots, chatbots, and AI virtual assistants (Gummerus et al., 2019). The paper makes the following hypotheses:

H2b: Perceived ease of use is positively correlated with users' trust in AI virtual assistants.

H3a: Perceived humanity has an inverted U-shaped relationship with user trust in AI virtual assistants.

H3b: Perceived social interactivity is positively correlated with user trust in AI virtual assistants.

The regression coefficient for perceived ease of use is 0.219 and significant (t = 2.772, p = 0.006 < 0.01), implying that perceived ease of use has a significant positive relationship with trust. The regression coefficient for perceived social presence is 0.206 and significant (t = 4.174, p < 0.01), implying that perceived social presence has a significant positive influence on trust.

(The regression tables report control variables, explanatory variables, and model explanatory degree, including F value and R²; *p < 0.05, **p < 0.01.)
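The mediation effects reported in the abstract are tested with the bootstrap method. As an illustration only, the sketch below shows how a percentile-bootstrap test of an indirect effect (predictor → trust → acceptance) is typically computed; the variable names, synthetic data, and coefficient values are hypothetical, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240  # sample size matching the paper's questionnaire count

# Synthetic illustrative data: ease_of_use -> trust -> acceptance
ease_of_use = rng.normal(size=n)
trust = 0.22 * ease_of_use + rng.normal(scale=0.9, size=n)
acceptance = 0.50 * trust + 0.10 * ease_of_use + rng.normal(scale=0.9, size=n)

def ols_slope(x, y):
    """Slope from an OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    """a*b indirect effect: x -> m (path a), m -> y controlling for x (path b)."""
    a = ols_slope(x, m)
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate a*b
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(ease_of_use[idx], trust[idx], acceptance[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# Mediation is supported when the 95% CI excludes zero.
```

The percentile bootstrap is preferred over the Sobel test for indirect effects because the sampling distribution of a*b is generally non-normal, which resampling accommodates directly.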