Abstract

In recent years, the explainability barrier of deep neural networks (DNNs) has been increasingly studied. Methods based on the class activation map (CAM), which interpret model decisions by mapping the output back to the input space, have gained notable momentum in this line of research. However, CAM-based methods cannot stably produce effective explanations on remote sensing images (RSIs), because the localization maps generated from high-level features are coarse, whereas RSIs contain abundant detailed spatial information and multi-scale objects. To address this issue, this article proposes the class activation map weighted with channel saliency and gradient (CSG-CAM), which enhances low-level features in the saliency map. To this end, we first borrow the idea of dynamic channel pruning and propose channel saliency to describe the importance of each channel in a given layer. The channel saliency, instead of the gradient, is then used as the neuron importance weight to compute the saliency map at a shallow layer. Furthermore, the channel saliency also contributes to the neuron importance weights of the final layer, jointly with a gradient-weighted combination of the positive partial derivatives. Finally, the saliency map of the proposed CSG-CAM is obtained by fusing the explanation heat maps from the shallow and final layers of the network. Extensive experiments on two publicly available RSI scene classification datasets and three widely used networks demonstrate the effectiveness of CSG-CAM in terms of both faithfulness and explainability.
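To make the described pipeline concrete, the following is a minimal PyTorch sketch of this style of CAM computation: a per-channel importance score weights the feature maps of a shallow and a final layer, and the two resulting heat maps are fused. The channel-saliency definition used here (mean absolute activation, a pruning-style importance proxy), the chosen layers (layer1 and layer4 of a ResNet-18), and the equal-weight fusion are illustrative assumptions, not the paper's exact CSG-CAM formulation.

# Sketch of a CAM pipeline that weights feature maps by a per-channel
# importance score and fuses shallow- and final-layer saliency maps.
# The channel-saliency proxy and fusion rule below are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()

# Capture activations and gradients from a shallow and the final conv stage.
acts, grads = {}, {}

def hook(name):
    def fwd(_module, _inputs, out):
        acts[name] = out
        out.register_hook(lambda g: grads.__setitem__(name, g))
    return fwd

model.layer1.register_forward_hook(hook("shallow"))
model.layer4.register_forward_hook(hook("final"))

x = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(x).max()   # score of the top predicted class
score.backward()

def cam(act, weight):
    # Weight channels, sum, ReLU, upsample, then normalize to [0, 1].
    m = F.relu((weight[:, :, None, None] * act).sum(dim=1, keepdim=True))
    m = F.interpolate(m, size=(224, 224), mode="bilinear", align_corners=False)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

# Stand-in "channel saliency": mean absolute activation per channel,
# used in place of gradients at the shallow layer.
w_shallow = acts["shallow"].abs().mean(dim=(2, 3))
# Final layer: combine the channel proxy with positive-gradient weights,
# echoing the "channel saliency + positive partial derivatives" idea.
w_final = (acts["final"].abs().mean(dim=(2, 3))
           * F.relu(grads["final"]).mean(dim=(2, 3)))

saliency = 0.5 * cam(acts["shallow"], w_shallow) + 0.5 * cam(acts["final"], w_final)

In this sketch the shallow-layer map contributes fine spatial detail and the final-layer map contributes class discriminability; how the two are actually weighted in CSG-CAM is defined in the full paper.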
