Abstract

Artificial intelligence (AI) virtual assistants are spreading rapidly, permeating people's daily lives and work. However, concerns about trust and risk hinder users' acceptance and use of AI virtual assistants. Understanding the roles of trust and perceived risk in user acceptance of AI virtual assistants is therefore crucial. This study develops a comprehensive research model based on the unified theory of acceptance and use of technology (UTAUT) to explain user acceptance of AI virtual assistants, extending UTAUT with users' perceptions of trust and risk. The research model and hypotheses are validated through structural equation modeling with a sample of 926 AI virtual assistant users. Results show that gender is significantly related to behavioral intention to use, education is positively related to trust and to behavioral intention to use, and usage experience is positively related to attitude toward using. The UTAUT variables, namely performance expectancy, effort expectancy, social influence, and facilitating conditions, are positively related to behavioral intention to use AI virtual assistants. Trust and perceived risk have positive and negative effects, respectively, on attitude toward using and behavioral intention to use AI virtual assistants, and the two play equally important roles in explaining user acceptance. Theoretical and practical implications of the proposed AI virtual assistant acceptance model and directions for future research are discussed.
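As an illustration of the estimation step, the sketch below shows how a structural model of this kind could be specified and fitted with the Python package semopy, which uses lavaan-style syntax. The construct abbreviations, indicator names, moderator variables, and data file are hypothetical placeholders assumed for illustration; they are not taken from the paper.

```python
# Minimal sketch (not the authors' code): an extended-UTAUT structural model
# with trust and perceived risk, estimated via structural equation modeling.
import pandas as pd
from semopy import Model

MODEL_DESC = """
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
SI =~ si1 + si2 + si3
FC =~ fc1 + fc2 + fc3
TRUST =~ tr1 + tr2 + tr3
RISK =~ rk1 + rk2 + rk3
ATT =~ at1 + at2 + at3
BI =~ bi1 + bi2 + bi3

TRUST ~ education
ATT ~ TRUST + RISK + experience
BI ~ PE + EE + SI + FC + TRUST + RISK + gender + education
"""

# Hypothetical survey data, one row per respondent (the study reports n = 926).
df = pd.read_csv("survey_responses.csv")

model = Model(MODEL_DESC)
model.fit(df)           # maximum-likelihood estimation by default
print(model.inspect())  # path coefficients, standard errors, p-values
```

The regressions mirror the relationships reported in the abstract: the four UTAUT predictors plus trust and perceived risk predict behavioral intention, trust and risk also predict attitude, and gender, education, and usage experience enter as observed demographic predictors.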
