Abstract

Despite recent advances in explainable artificial intelligence (XAI) systems, no concrete quantitative measure exists for evaluating their usability. For an explanatory interface to interact successfully with users, it must sustain a cyclic, symbiotic relationship between human and artificial intelligence. We therefore propose explanatory efficacy, a novel metric for evaluating the strength of the cyclic relationship an interface exhibits. Furthermore, in a user study, we evaluated participants' perceived affect and workload and recorded their EEG signals as they interacted with our custom-built, iterative explanatory interface to build personalized recommendation systems. We found that, in perceptually driven iterative tasks, systems with greater explanatory efficacy are characterized by statistically significant hemispheric differences in neural signals, classifiable with 62.4% accuracy, indicating the feasibility of neural correlates as a measure of explanatory efficacy. These findings benefit researchers who aim to study the circular ecosystem of the human-AI partnership.
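To make the neural measure concrete, the sketch below shows one common way hemispheric differences in EEG can be quantified: frontal alpha-power asymmetry. This is a minimal illustration, not the authors' published pipeline; the channel names (F3/F4), sampling rate, and band limits are assumptions chosen for the example.

```python
# Minimal sketch: quantifying hemispheric differences in EEG via
# frontal alpha-power asymmetry. All parameters below (channels F3/F4,
# 256 Hz sampling rate, 8-13 Hz alpha band) are illustrative
# assumptions, not the paper's actual analysis pipeline.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(signal, fs, band):
    """Average power of `signal` within the frequency `band` (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def alpha_asymmetry(left_ch, right_ch, fs=FS, alpha=(8.0, 13.0)):
    """Log-ratio of right- vs. left-hemisphere alpha power.

    Positive values indicate relatively greater right-hemisphere
    alpha power, a common asymmetry index in affect research.
    """
    return np.log(band_power(right_ch, fs, alpha)) - \
           np.log(band_power(left_ch, fs, alpha))

# Example with synthetic noise standing in for two frontal electrodes
rng = np.random.default_rng(0)
f3 = rng.standard_normal(FS * 60)  # e.g., channel F3 (left frontal)
f4 = rng.standard_normal(FS * 60)  # e.g., channel F4 (right frontal)
print(f"asymmetry index: {alpha_asymmetry(f3, f4):.4f}")
```

Features such as this asymmetry index could then be fed to a standard classifier, which is one plausible route to the kind of accuracy figure reported in the abstract.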

Highlights

  • Recent advances in artificial intelligence (AI) and machine learning algorithms have resulted in models that achieve high predictive performance and provide explanatory features to support their decisions, increasing model interpretability and transparency in real-world environments [1]. However, merely providing explanations is insufficient.

  • Our findings indicate that the explanatory efficacy of an interface can be evaluated with EEG signals associated with human affect and workload.

  • Potentiality and limitation: through answering the three research questions, we found that the EEG signals correlated with explanatory efficacy differ between users who could improve their understanding by providing explanatory, interactive feedback and users who received only one-way explanations.


Summary

INTRODUCTION

Recent advances in artificial intelligence (AI) and machine learning algorithms have resulted in models that achieve high predictive performance and provide explanatory features to support their decisions, increasing model interpretability and transparency in real-world environments [1]. With explanatory efficacy in mind, and unlike previous works that have studied the physiological effects of users' behaviors on interface technologies, we explore the potential of neural correlates in EEG signals as a measure of explanatory efficacy. Toward this end, we investigated three research questions, the first concerning feasibility (Q1): can the explanatory efficacy of an interactive XAI system's recommendation be improved by feedback, that is, by a user correcting what they perceive to be the system's flawed reasoning? We observed that the physiological characteristics of EEG signals correlate with human affect and workload in perceptually driven iterative tasks and that increased explanatory efficacy can lead to improvements in the model's ability to predict personalized results. A minimal sketch of this feedback cycle follows.
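The following sketch simulates the explanation-feedback loop just described: the system recommends, explains its reasoning, and the user corrects what they perceive as flawed weights. The `Recommender` class, its feature names, and the weight-update rule are hypothetical stand-ins for the paper's custom-built interface, shown only to make the cyclic interaction concrete.

```python
# Hedged sketch of an iterative explanation-feedback cycle. The
# Recommender class and its feature weights are hypothetical; the real
# system described in the paper is a custom-built XAI interface.
from dataclasses import dataclass, field

@dataclass
class Recommender:
    # feature -> weight; the explanation exposes these weights to the user
    weights: dict = field(default_factory=lambda: {"genre": 0.5, "tempo": 0.5})

    def recommend(self):
        """Recommend based on the currently highest-weighted feature."""
        return max(self.weights, key=self.weights.get)

    def explain(self):
        """One-way explanation: reveal the reasoning behind the choice."""
        return f"Recommended because of high weight on '{self.recommend()}'"

    def apply_feedback(self, feature, delta):
        """User corrects perceived flawed reasoning by adjusting a weight."""
        self.weights[feature] = max(0.0, self.weights[feature] + delta)

model = Recommender()
for _ in range(3):                      # cyclic human-AI interaction
    print(model.explain())
    # simulated user feedback: down-weight 'genre' as flawed reasoning
    model.apply_feedback("genre", -0.2)
```

In this toy loop, each iteration closes the cycle the abstract calls symbiotic: the explanation informs the user, and the user's correction updates the model before the next recommendation.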

PROBLEM STATEMENT
EEG FOR AFFECTIVE-COGNITIVE EVALUATION
XAI-BASED RECOMMENDATION SYSTEM
USER STUDY
MEASURE OF EXPLANATORY EFFICACY
PARTICIPANT SELF-ASSESSMENT
FEASIBILITY OF EXPLANATORY EFFICACY
Findings
CONCLUSION