Abstract

Explainable Artificial Intelligence (XAI) methods have recently gained momentum thanks to their ability to shed light on the decision function of opaque machine learning models. Two paradigms dominate XAI: feature attribution and counterfactual explanation methods. While the first family of methods explains why the model made a decision, counterfactual methods answer the what-if question: how would the classification decision change if the input were slightly different? Most research efforts on the time series data modality have focused on answering the why question. In this paper, we address the what-if question by finding a good balance among a set of desirable counterfactual explanation properties. We propose Shapelet-Guided Counterfactual Explanation (SG-CF), a novel optimization-based model that generates interpretable, intuitive post-hoc counterfactual explanations of time series classification models, balancing validity, proximity, sparsity, and contiguity. Our experimental results on nine real-world time series datasets show that the proposed method generates counterfactual explanations that balance all of these desirable properties in comparison with competing baselines.
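To make the four properties concrete, the sketch below shows one illustrative way to score a candidate counterfactual for a univariate time series. This is not SG-CF's actual objective (the paper's formulation is not reproduced here); the classifier, the penalty weights, and the run-counting heuristic for contiguity are all assumptions for illustration.

```python
import numpy as np

def counterfactual_loss(x_cf, x_orig, predict_proba, target_class,
                        lam_prox=1.0, lam_sparse=0.1, lam_contig=0.1):
    """Toy objective balancing validity, proximity, sparsity, and contiguity.

    All weights and the classifier interface are illustrative assumptions,
    not the SG-CF formulation from the paper.
    """
    # Validity: how far the classifier still is from the desired target class.
    validity = 1.0 - predict_proba(x_cf)[target_class]
    # Proximity: stay close to the original series (L2 distance).
    proximity = np.linalg.norm(x_cf - x_orig)
    # Sparsity: change as few time steps as possible.
    changed = np.abs(x_cf - x_orig) > 1e-6
    sparsity = changed.sum()
    # Contiguity heuristic: count separate runs of changed steps;
    # fewer runs means the modification is more contiguous.
    runs = np.count_nonzero(np.diff(changed.astype(int)) == 1) + int(changed[0])
    return (validity + lam_prox * proximity
            + lam_sparse * sparsity + lam_contig * runs)
```

Under this toy score, a counterfactual that perturbs one contiguous segment is preferred over one that scatters the same number of changes across the series, all else being equal.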


