Abstract

Artificial intelligence (AI) has become part of our everyday lives, and its presence and influence are expected to grow exponentially. Despite its expanding impact, the perplexing algorithms and processes that drive AI's decisions and outputs can decrease trust and thus impede the adoption of future AI services. Explainable AI (XAI) in recommender systems has surfaced as a solution that can help users understand how and why an AI recommended a specific product or service. However, there is no standardized explanation method that satisfies users' preferences and needs. Therefore, the main objective of this study is to explore a unified explanation method that centers on the human perspective. This study examines preferences for AI interfaces by investigating the components of user-centered explainability, including scope (global and local) and format (text and visualization). A mixed logit model is used to analyze data collected through a conjoint survey. Results show that local explanations and visualizations are preferred, and that users dislike lengthy textual interfaces. Our findings also include the monetary value extracted for each attribute.
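The estimation and monetary-value step described above can be illustrated with a minimal sketch, not the paper's code: a mixed logit fitted by simulated maximum likelihood on hypothetical conjoint choice data, with the monetary value (willingness-to-pay) of each attribute computed as the ratio of its coefficient to the negative price coefficient. The attribute names, data, and parameter values here are assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): mixed logit on hypothetical conjoint
# choice data, with the monetary value (WTP) of each attribute derived at the end.
# Attribute names ("local_scope", "visualization", "price") are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical conjoint data: N choice tasks, J alternatives, 3 attributes.
N, J, R = 500, 3, 200                          # tasks, alternatives, simulation draws
X = rng.standard_normal((N, J, 3))             # [local_scope, visualization, price]
true_beta = np.array([0.8, 0.6, -1.0])         # data-generating coefficients (assumed)
util = X @ true_beta + rng.gumbel(size=(N, J))
y = util.argmax(axis=1)                        # chosen alternative per task

draws = rng.standard_normal((R, 2))            # draws for the 2 random coefficients

def neg_sim_loglik(theta):
    """Simulated log-likelihood: normal random coefficients on the two
    explanation attributes, fixed price coefficient (so WTP is identified)."""
    mean, sd, b_price = theta[:2], np.abs(theta[2:4]), theta[4]
    betas = mean + draws * sd                              # (R, 2) coefficient draws
    v = X[:, :, :2] @ betas.T + X[:, :, 2:3] * b_price     # (N, J, R) utilities
    p = np.exp(v - v.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                      # choice probabilities
    chosen = p[np.arange(N), y, :]                         # (N, R) prob. of chosen option
    return -np.log(chosen.mean(axis=1) + 1e-300).sum()     # average over draws

x0 = np.array([0.0, 0.0, 0.1, 0.1, -0.5])                  # arbitrary starting values
res = minimize(neg_sim_loglik, x0=x0, method="BFGS")
mean_hat, b_price_hat = res.x[:2], res.x[4]

# Monetary value (WTP) of each non-price attribute: -beta_attr / beta_price.
for name, b in zip(["local_scope", "visualization"], mean_hat):
    print(f"WTP for {name}: {-b / b_price_hat:.2f}")
```

In this sketch the price coefficient is held fixed so that the willingness-to-pay ratio is well defined; the paper's exact model specification (random versus fixed coefficients, distributions, and attribute coding) is not given in the abstract, so those choices here are assumptions.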
