Abstract

Deep learning-based methods have demonstrated exceptional performance in synthetic aperture radar automatic target recognition (SAR ATR). However, obtaining a sufficient number of labeled SAR images remains a significant challenge that can degrade the performance of these methods, because most deep learning models take the entire target image as input. Research has shown that, with limited training data, a model may fail to capture the discriminative regions of the image and instead focus on regions that are useless or even harmful for recognition, leading to poor results. In this study, we propose a novel SAR ATR framework that addresses the limitation of limited training data. The proposed framework comprises a global-assisted branch, a locally enhanced branch, a feature capture module, and a feature discrimination module. During each training epoch, the global-assisted branch performs an initial recognition and provides a loss based on the entire SAR image, while the feature capture module automatically segments and captures the image regions crucial for the current recognition, which we refer to as the "golden key" of the image. The locally enhanced branch then performs a second recognition and provides another loss based on these image parts. Instead of updating the model with the two basic recognition losses to roughly search for and capture crucial image parts, the proposed feature discrimination module combines the global and local branches in a subtle manner, improving the separability or compactness of local features for sample pairs whose global features are similar across classes or dissimilar within a class. This adaptively forces the model to capture more crucial image parts and extract more effective features. Experimental results and comparisons on the MSTAR and OpenSARShip datasets indicate that the proposed method achieves superior recognition performance compared with existing methods.

The effectiveness of our method is further demonstrated through visualization of the golden key of the test images and through the recognition performance in ablation experiments. We will release our code and additional experimental results at https://github.com/cwwangSARATR/SARATR_FeaCapture_Discrimination.
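As a rough illustration of the training objective described above (not the paper's actual implementation; all function names, the margin, and the weighting factor below are assumptions), the overall loss can be sketched as two cross-entropy terms plus a discrimination term that mines hard pairs in the global feature space and applies a contrastive-style penalty to the corresponding local features:

```python
import numpy as np

def cross_entropy(logits, labels):
    # Softmax cross-entropy averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def discrimination_loss(global_feats, local_feats, labels, margin=0.5):
    # Hypothetical discrimination term: for pairs of different classes
    # whose GLOBAL features look similar (hard inter-class pairs), push
    # the LOCAL features apart; for pairs of the same class whose global
    # features look dissimilar (hard intra-class pairs), pull the local
    # features together.
    g_sim = cosine_sim(global_feats, global_feats)
    l_sim = cosine_sim(local_feats, local_feats)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    hard_neg = (~same) & (g_sim > margin) & off_diag  # similar inter-class
    hard_pos = same & (g_sim < margin) & off_diag     # dissimilar intra-class
    loss = 0.0
    if hard_neg.any():
        loss += np.maximum(l_sim[hard_neg], 0.0).mean()       # push apart
    if hard_pos.any():
        loss += np.maximum(1.0 - l_sim[hard_pos], 0.0).mean()  # pull together
    return loss

def total_loss(global_logits, local_logits,
               global_feats, local_feats, labels, lam=0.1):
    # Global-branch loss + local-branch loss + weighted discrimination term.
    return (cross_entropy(global_logits, labels)
            + cross_entropy(local_logits, labels)
            + lam * discrimination_loss(global_feats, local_feats, labels))
```

The point of the sketch is only the coupling: the pair mining happens in the global feature space, while the penalty is applied to the local features, so the local branch is driven toward regions that resolve exactly the pairs the global branch finds ambiguous.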
