Abstract

A severe drawback of deep convolutional neural networks (deep CNNs) is their poor interpretability. To address this drawback, this paper proposes, for the first time, a genetic algorithm-based method that automatically evolves local interpretable explanations, helping users decide whether to trust the predictions of deep CNNs. Experimental results show that the evolved explanations can explain the predictions of deep CNNs on images by successfully capturing meaningful interpretable features.
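To make the idea concrete, below is a minimal sketch of how a genetic algorithm could evolve a local explanation for a single image: each individual is a binary mask over a grid of image patches, and fitness rewards masks that keep the model's prediction confident while occluding as much of the image as possible. The grid segmentation, fitness function, and genetic operators here are illustrative assumptions for a generic black-box `predict_proba` model wrapper, not the paper's actual design.

```python
import numpy as np

def evolve_explanation(image, predict_proba, target_class,
                       grid=8, pop_size=30, generations=50,
                       sparsity_weight=0.5, mutation_rate=0.05,
                       rng=None):
    """Evolve a binary patch mask that locally explains a CNN prediction.

    `predict_proba` is assumed to map a batch of images (N, H, W, C)
    to class probabilities (N, num_classes). All operator choices below
    are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    n_genes = grid * grid

    def apply_mask(mask):
        # Occlude every patch whose gene is 0; kept patches are the explanation.
        out = image.copy()
        for i in range(grid):
            for j in range(grid):
                if mask[i * grid + j] == 0:
                    out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = 0
        return out

    def fitness(pop):
        masked = np.stack([apply_mask(m) for m in pop])
        probs = predict_proba(masked)[:, target_class]
        # Reward keeping the prediction while showing as few patches as possible.
        return probs - sparsity_weight * pop.mean(axis=1)

    pop = rng.integers(0, 2, size=(pop_size, n_genes))
    for _ in range(generations):
        fit = fitness(pop)
        # Binary tournament selection of parents.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fit[a] > fit[b], a, b)]
        # One-point crossover between consecutive parent pairs.
        children = parents.copy()
        for k in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_genes)
            children[k, cut:], children[k + 1, cut:] = (
                parents[k + 1, cut:].copy(), parents[k, cut:].copy())
        # Bit-flip mutation (values are 0/1, so XOR with 1 flips a gene).
        flip = rng.random(children.shape) < mutation_rate
        children ^= flip.astype(children.dtype)
        # Elitism: carry over the best individual from the old population.
        children[0] = pop[np.argmax(fit)]
        pop = children

    best = pop[np.argmax(fitness(pop))]
    return best.reshape(grid, grid)
```

In practice, `predict_proba` would be a thin wrapper around the trained CNN (e.g. a softmax forward pass over a batch of images), and the returned grid mask can be upsampled and overlaid on the input image to visualise which regions the explanation deems important for the predicted class.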
