Abstract

Highlights

• PASTLE exploits an instance-space transformation to explain any model's predictions.
• PASTLE is useful for different kinds of audiences, enhancing their trust in the model.
• It outputs the changes to be made in order to increase prediction probabilities.
• The proposed space transformation has been evaluated on various real-world data sets.
• A user study reveals promising results in terms of effective explainability.

During the last decade, more and more Artificial Intelligence systems have been designed using complex and sophisticated architectures to reach unprecedented predictive performance. The side effect is increased opacity of their inner workings, which is unacceptable when such systems are applied in critical domains (healthcare, finance, and so on). The eXplainable AI (XAI) research field aims to overcome this limitation, helping humans understand black-box decisions. In this paper we propose a novel model-agnostic XAI technique, named Pivot-Aided Space Transformation for Local Explanations (PASTLE), which exploits an instance-space transformation to explain any model's predictions, aiming to enhance human trust in AI decisions. We experimentally evaluate the effects of the introduced space transformation on various real-world data sets, and our user study reveals promising results in terms of effective explainability.
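To make the core idea concrete, the sketch below illustrates one possible reading of a pivot-aided instance-space transformation: each instance is re-represented by its distances to a set of pivot instances, and an interpretable surrogate fitted in this transformed space approximates the black box around the instance being explained. This is not the authors' PASTLE implementation; the pivot selection strategy, distance metric, neighbourhood sampling, and surrogate model below are all assumptions made purely for illustration.

```python
# Minimal sketch of a pivot-based space transformation for local explanation.
# NOTE: not the PASTLE algorithm; all design choices here are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(0)

# A black-box model whose predictions we want to explain.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Choose pivot instances (here: random training points; the real pivot
# selection strategy is an assumption).
pivots = X[rng.choice(len(X), size=5, replace=False)]

def to_pivot_space(Z):
    """Re-represent each instance by its distances to the pivots."""
    return euclidean_distances(Z, pivots)

# Instance whose prediction we want to explain.
x = X[0:1]

# Sample a local neighbourhood around x in the original feature space
# and query the black box for its predicted probabilities.
neighbourhood = x + rng.normal(scale=0.3, size=(1000, X.shape[1]))
probs = black_box.predict_proba(neighbourhood)[:, 1]

# Fit an interpretable surrogate in the transformed (pivot) space.
surrogate = Ridge().fit(to_pivot_space(neighbourhood), probs)

# Each coefficient suggests how moving toward or away from a pivot
# changes the predicted probability for the positive class.
for i, coef in enumerate(surrogate.coef_):
    print(f"pivot {i}: {coef:+.3f}")
```

Under this reading, the signed coefficients play the role of actionable hints (e.g., "move closer to pivot 2 to raise the prediction probability"), echoing the highlight about outputting the changes needed to increase prediction probabilities.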
