Abstract

In recent years, Convolutional Neural Networks (CNNs) have been widely applied across many domains owing to their powerful learning capability. However, their lack of explainability hinders their adoption in tasks that require high reliability, making interpretability techniques key to the application and deployment of CNN models. As a typical interpretability technique for CNNs, Class Activation Map (CAM), which combines gradient-based weights with activation maps, is widely used to provide visual interpretability for conventional CNN models. However, the activation maps adopted by CAM cannot faithfully quantify the relevance between input samples and activation values. In this paper, we therefore propose a new interpretability approach, Salience-CAM, which employs salience scores to accurately measure the relevance between input samples and activation values. To evaluate the effectiveness of Salience-CAM, we conduct comprehensive experiments on six selected time series datasets. Using an evaluation algorithm proposed in this paper, the experimental results show that Salience-CAM outperforms the baseline by discovering more discriminative features.
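
For context, a minimal sketch of the gradient-weighted CAM computation that the abstract refers to is shown below, applied to a toy 1D CNN for time series. The network (TinyTSNet), its layers, and all hyperparameters are hypothetical illustrations rather than the models or settings used in the paper; Salience-CAM itself replaces the gradient-based weights with salience scores, which are not reproduced here.

```python
# Illustrative Grad-CAM-style sketch for a 1D CNN on time series.
# The architecture below is a hypothetical example, not the paper's model.
import torch
import torch.nn as nn

class TinyTSNet(nn.Module):
    def __init__(self, n_channels=1, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

def grad_cam_1d(model, x, target_class):
    """Weight each activation map by the pooled gradient of the class score."""
    activations = {}

    def hook(_module, _inputs, output):
        activations["maps"] = output
        output.retain_grad()  # keep the gradient of this intermediate tensor

    handle = model.features.register_forward_hook(hook)
    scores = model(x)                                  # (batch, n_classes)
    handle.remove()
    scores[:, target_class].sum().backward()

    maps = activations["maps"]                         # (batch, channels, length)
    weights = maps.grad.mean(dim=2, keepdim=True)      # pooled gradients as channel weights
    cam = torch.relu((weights * maps).sum(dim=1))      # (batch, length) heatmap
    return cam / (cam.max(dim=1, keepdim=True).values + 1e-8)

model = TinyTSNet()
series = torch.randn(1, 1, 128)                        # one univariate series of length 128
heatmap = grad_cam_1d(model, series, target_class=0)
print(heatmap.shape)                                   # torch.Size([1, 128])
```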
