Abstract
The rise of Explainable Artificial Intelligence (XAI) has been transformative for the growth of Artificial Intelligence (AI) powered systems. By providing human-understandable explanations, it addresses the most significant issue that AI faces: the black-box problem arising from the complex hidden layers of the machine learning and deep learning models that power it. Fundamentally, XAI allows users to learn how the AI operates and arrives at its decisions, thus enabling cognitive calibration of trust and subsequent reliance on the system. This conclusion has been supported by studies in various contexts and has motivated the development of newer XAI techniques. However, as human-computer interaction and social science research suggests, these findings may be limited because the emotional component that also arises from the interaction was not considered. Emotions have long been known to play a dominant role in decision-making, as they can rapidly and unconsciously be infused into judgments. This suggests that XAI might facilitate trust calibration not solely because of the cognitive information it provides but also because of the emotions its explanations evoke. As this idea has not been explored, this study examines the effects of the emotions associated with interacting with XAI on trust, reliance, and explanation satisfaction. One hundred twenty-three participants took part in an online experiment anchored in an image classification testbed. The premise was that they had been hired to classify different species of animals and plants, with an XAI-equipped image classification system available to give them recommendations. At the end of each trial, they rated the emotions they felt while interacting with the XAI, their trust in the system, and their satisfaction with the explanations. Reliance was measured by whether they accepted the AI's recommendations. Results show that users who felt surprisingly happy and trusting emotions reported high trust, reliance, and satisfaction, whereas fearfully dismayed and anxiously suspicious emotions had a significant negative relationship with satisfaction. Essentially, as supported by the post-experiment interviews, the study surfaced three critical findings on the affective functionality of XAI. First, the emotions users develop are mainly attributable to the design and overall composition of the explanation rather than the information it carries. Second, trust and reliance develop only from positive emotions: users may not trust or rely on an AI system, even one with meaningful explanations, if it evokes negative emotions in them. Third, explanation satisfaction can be triggered by both positive and negative emotions; the former stems mainly from the presentation of the XAI, while the latter comes from understanding the limitations of the AI.