Abstract

Artificial intelligence (AI) recommendations are becoming increasingly prevalent, but consumers are often reluctant to trust them, in part because of the "black-box" nature of algorithm-facilitated recommendation agents. Although interpretability is widely acknowledged as vital to consumer trust in AI recommendations, it remains unclear how to effectively increase interpretability perceptions and thereby enhance positive consumer responses. The current research addresses this issue by investigating how the presence and type of post hoc explanations boost positive consumer responses to AI recommendations across decision-making domains. Across four studies, the authors demonstrate that the presence of post hoc explanations increases interpretability perceptions, which in turn fosters positive consumer responses (e.g., trust, purchase intention, and click-through) to AI recommendations. Moreover, they show that the facilitating effect of post hoc explanations is stronger in the utilitarian (vs. hedonic) decision-making domain. Further, explanation type moderates the effectiveness of post hoc explanations: attribute-based explanations are more effective in enhancing trust in the utilitarian decision-making domain, whereas user-based explanations are more effective in the hedonic decision-making domain.
