Abstract

For predictive analysis and automatic classification, Deep Neural Networks (DNNs) are investigated and visualized. DNNs used for Automatic Target Recognition (ATR) have built-in feature extraction and classification abilities, but as the networks grow deeper and more complex, their inner workings become more opaque, rendering them black boxes. The main goal of this paper is to glimpse what the network perceives in order to classify Moving and Stationary Target Acquisition and Recognition (MSTAR) targets. Past work has suggested that classification of targets was performed solely on the basis of clutter within the MSTAR data. Here we show that a DNN trained on the MSTAR dataset classifies based only on target information, and that clutter plays no role. To demonstrate this, heatmaps are generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method to highlight the areas of attention in each input Synthetic Aperture Radar (SAR) image. To probe the interpretability of the classifiers further, reliable post hoc explanation techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are used to approximate the behaviour of the black box by extracting relationships between feature values and predictions.
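The Grad-CAM heatmaps referred to above weight each convolutional feature map by the spatially averaged gradient of the class score with respect to that map, then sum and apply a ReLU. The sketch below (not the paper's implementation) illustrates this computation for a toy architecture in which the classifier is a global-average-pool followed by a linear layer, so the gradient of the class score with respect to feature map k is simply that class's weight for channel k divided by the number of spatial positions; all array shapes and names are illustrative assumptions.

```python
import numpy as np

def grad_cam(feature_maps, class_weights, target_class):
    """Grad-CAM heatmap: L^c = ReLU( sum_k alpha_k^c * A^k ),
    where alpha_k^c = (1/Z) * sum_ij dY^c / dA^k_ij.

    feature_maps : (K, H, W) activations A^k of the last conv layer
    class_weights: (C, K) weights of a GAP + linear classifier
                   (so dY^c/dA^k_ij = class_weights[c, k] / Z everywhere)
    """
    K, H, W = feature_maps.shape
    Z = H * W
    alphas = class_weights[target_class] / Z           # (K,) channel importances
    cam = (alphas[:, None, None] * feature_maps).sum(axis=0)
    cam = np.maximum(cam, 0.0)                         # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1] for display
    return cam

# Toy usage: random activations standing in for a SAR chip's feature maps.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 32, 32))                  # 16 feature maps, 32x32
W_cls = rng.standard_normal((3, 16))                   # 3 target classes
heatmap = grad_cam(A, W_cls, target_class=0)           # (32, 32), values in [0, 1]
```

In practice the heatmap is upsampled to the input resolution and overlaid on the SAR image so one can see whether attention falls on the target or on surrounding clutter.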
