Abstract

In this paper, the twin-systems approach is reviewed, implemented, and competitively tested as a post-hoc explanation-by-example solution to the eXplainable Artificial Intelligence (XAI) problem. In twin-systems, an opaque artificial neural network (ANN) is explained by “twinning” it with a more interpretable case-based reasoning (CBR) system, mapping the feature weights from the former to the latter. Extensive comparative tests are performed, over four experiments, to determine the optimal feature-weighting method for such twin-systems. Twin-systems for traditional multilayer perceptron (MLP) networks (MLP–CBR twins), convolutional neural networks (CNNs; CNN–CBR twins), and transformers for NLP (BERT–CBR twins) are examined. In addition, Feature Activation Maps (FAMs) are explored to enhance explainability by providing an additional layer of explanatory insight. The wider implications of this research for XAI are discussed, and a code library is provided to ease replication.
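To make the twinning idea concrete, the following is a minimal sketch (not the paper's exact method) of the two steps the abstract describes: deriving per-feature importance weights from an ANN's learned parameters, and using those weights in a feature-weighted nearest-neighbour retrieval, as a CBR system would, to surface explanatory cases. The connection-weight heuristic and function names here are illustrative assumptions; the paper compares several weighting methods.

```python
import numpy as np

def feature_weights_from_mlp(W_in, W_out):
    """Crude global feature importance for an MLP with one hidden layer.

    Sums |input->hidden| x |hidden->output| connection magnitudes per
    input feature. This is one simple connection-weight heuristic; the
    paper evaluates a range of such weighting methods.
    """
    # W_in: (n_features, n_hidden), W_out: (n_hidden, n_outputs)
    return np.abs(W_in) @ np.abs(W_out).sum(axis=1)  # -> (n_features,)

def retrieve_cases(query, case_base, weights, k=3):
    """Feature-weighted Euclidean retrieval of the k nearest cases.

    The returned cases serve as the CBR twin's example-based explanation
    of the ANN's prediction for `query`.
    """
    dists = np.sqrt((((case_base - query) ** 2) * weights).sum(axis=1))
    return np.argsort(dists)[:k]
```

In use, the weights emphasise features the network relies on, so the retrieved neighbours are "nearest" in the network's own terms rather than in raw input space.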
