Abstract

Due to the “black-box” nature of artificial intelligence (AI) recommendations, interpretability is critical to the consumer experience of human-AI interaction. Unfortunately, improving the interpretability of AI recommendations is technically challenging and costly. There is therefore an urgent need for industry to identify when the interpretability of AI recommendations is most likely to be needed. This study defines the construct of Need for Interpretability (NFI) of AI recommendations and empirically tests consumers’ need for interpretability of AI recommendations across different decision-making domains. Across two experimental studies, we demonstrate that consumers do indeed have a need for interpretability toward AI recommendations, and that this need is higher in utilitarian domains than in hedonic domains. These findings can help companies identify the varying need for interpretability of AI recommendations across different application scenarios.
