Abstract

The objective of this paper is to develop and empirically validate a conceptual model that explains individuals' behavioral intention to accept AI-based recommendations as a function of attitude toward AI, trust, perceived accuracy, and uncertainty level. The conceptual model was tested through a between-participants experiment using a simulated AI-enabled investment recommendation system. A total of 368 participants were randomly and evenly assigned to one of two experimental conditions: one depicted a low-uncertainty investment recommendation involving blue-chip stocks, while the other depicted a high-uncertainty recommendation involving penny stocks. Results show that attitude toward AI was positively associated with behavioral intention to accept AI-based recommendations, trust in AI, and perceived accuracy of AI. Furthermore, uncertainty level moderated the relationships of attitude, trust, and perceived accuracy with behavioral intention to accept AI-based recommendations. When uncertainty was low, a favorable attitude toward AI seemed sufficient to promote reliance on automation. When uncertainty was high, however, a favorable attitude toward AI was a necessary but no longer sufficient condition for AI acceptance. Thus, the paper contributes to the human-AI interaction literature by shedding light on the underlying psychological mechanism through which users decide to accept AI-enabled advice, and by adding to the scholarly understanding of AI recommendation systems in tasks that call for intuition in high-involvement services.
