Abstract

After the successful application of deep learning to image classification and speech recognition in recent years, deep neural networks are now also widely used for the recognition of targets in Synthetic Aperture Radar (SAR) images. For the 10-class MSTAR SAR Automatic Target Recognition (ATR) problem, several papers have described Convolutional Neural Networks (CNNs) with a classification accuracy on a par with or higher than that of conventional target recognition techniques. However, these papers do not show which part of the SAR image the CNN uses to classify the target. This paper shows, through the visualization of a saliency map, that a CNN can achieve a high classification score by exploiting the similarity of the clutter in the SAR images of the training and test sets. The saliency map is computed from the trained CNN with the Gradient-weighted Class Activation Mapping (Grad-CAM) technique. This paper also shows that by first segmenting the SAR image into target, shadow, and clutter regions, and then providing only the target region to the CNN, the problem of clutter-influenced target classification can be mitigated at the expense of a small reduction in classification accuracy.
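The Grad-CAM weighting mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the convolutional feature maps of the chosen layer and the gradients of the class score with respect to those maps have already been extracted from the trained CNN (here as plain numpy arrays), and only shows the final weighted-sum step of Grad-CAM.

```python
import numpy as np

def grad_cam_map(feature_maps, gradients):
    """Combine conv feature maps and class-score gradients into a saliency map.

    feature_maps, gradients: arrays of shape (K, H, W) for K channels of the
    chosen convolutional layer (hypothetical inputs for illustration).
    """
    # alpha_k: global-average-pool the gradients over the spatial dimensions
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of the feature maps over the channel dimension
    cam = np.tensordot(weights, feature_maps, axes=1)
    # ReLU: keep only features with a positive influence on the class score
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for visualization as a heat map
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting low-resolution map is upsampled to the SAR image size and overlaid on it, which is how the abstract's observation about clutter regions driving the classification can be made visible.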
