Abstract

Although AI's development is remarkable, end users often cannot tell how an AI system has reached a specific conclusion because of the black-box nature of algorithms such as deep learning. This has given rise to the field of explainable AI (XAI), in which techniques are developed to explain AI algorithms. One such technique is Local Interpretable Model-Agnostic Explanations (LIME). LIME is popular because it is model-agnostic and works well with text, tabular and image data. Despite these strengths, the original LIME algorithm still leaves room for improvement, especially in its stability. In this work, the stability of LIME is reviewed and three approaches are investigated for their effectiveness in improving it: 1) using a high sample size to obtain a stable ordering; 2) using an averaging method to reduce region flipping; and 3) evaluating different super-pixel segmentation algorithms for generating stable LIME outcomes. The experimental results show a definite increase in the stability of the improved LIME compared with the baseline LIME, and thus in the reliability of using it in practice.
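To illustrate the flavour of the first two approaches, the sketch below combines a large perturbation sample size with averaging of per-superpixel weights over repeated LIME runs on an image. This is a minimal sketch assuming the `lime` and `scikit-image` packages and a user-supplied `classifier_fn` that returns class probabilities; the function name, the SLIC parameters and the number of runs are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm


def averaged_lime_weights(image, classifier_fn, n_runs=5, num_samples=5000):
    """Run LIME several times with a large sample size and average the
    per-superpixel weights, which damps run-to-run sign (region) flips.
    Illustrative sketch only; parameter choices are assumptions."""
    # Fix the segmentation so superpixel ids stay comparable across runs.
    segmenter = SegmentationAlgorithm('slic', n_segments=50,
                                      compactness=10, sigma=1)
    explainer = lime_image.LimeImageExplainer()
    sums = {}
    for _ in range(n_runs):
        exp = explainer.explain_instance(
            image, classifier_fn, top_labels=1,
            num_samples=num_samples, segmentation_fn=segmenter)
        label = exp.top_labels[0]
        # local_exp maps label -> list of (superpixel id, weight) pairs.
        for seg_id, weight in exp.local_exp[label]:
            sums[seg_id] = sums.get(seg_id, 0.0) + weight
    return {seg_id: total / n_runs for seg_id, total in sums.items()}
```

Swapping `'slic'` for `'quickshift'` or `'felzenszwalb'` in `SegmentationAlgorithm` is one way to compare segmentation algorithms, in the spirit of the third approach.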
