Abstract

Deep learning-based synthetic aperture radar (SAR) automatic target recognition (ATR) algorithms have achieved outstanding performance in recent years when hundreds or thousands of training samples are available. Nevertheless, large quantities of target samples are rarely available in real SAR application scenarios. This article proposes a novel ATR method called transductive prototypical attention reasoning network (TPARN) to solve the problem of SAR target recognition with only a few training samples. To be specific, a region awareness-based feature extraction model is first developed, which can effectively focus on the target region of interest and suppress the background clutter by embedding direction-aware and position-sensitive information to extract more transferable knowledge. To enhance the discriminability of the sample features, a cross-feature spatial attention module is then proposed following the feature embedding model. Finally, a transductive prototype reasoning method is presented to realize the identity reasoning of the target, which continuously updates each class prototype using the training and test samples jointly, thereby improving the classification accuracy. In addition, a marginal adaptive hybrid loss is proposed to obtain a discriminative feature embedding space with intra-class compactness and inter-class divergence, aiming to facilitate subsequent target identity reasoning. Extensive experiments on the moving and stationary target acquisition and recognition (MSTAR) benchmark dataset reveal that the proposed method outperforms several state-of-the-art methods under different few-shot SAR ATR tasks.
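
To make the transductive prototype reasoning idea concrete, below is a minimal PyTorch sketch of prototype refinement in the spirit of the abstract's description: class prototypes are initialized from the labeled training (support) embeddings and then iteratively updated with soft assignments of the unlabeled test (query) embeddings. Function and parameter names such as `refine_prototypes` and `n_iters` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(support_feats, support_labels, query_feats, n_classes, n_iters=5):
    """Iteratively update class prototypes using both labeled support (training)
    features and unlabeled query (test) features.

    support_feats: (Ns, D) embedded support samples
    support_labels: (Ns,) integer class labels
    query_feats: (Nq, D) embedded query samples
    """
    # Initial prototypes: per-class mean of the support embeddings.
    protos = torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])  # (C, D)

    one_hot = F.one_hot(support_labels, n_classes).float()  # (Ns, C)

    for _ in range(n_iters):
        # Soft-assign query samples to prototypes via negative squared Euclidean distance.
        dists = torch.cdist(query_feats, protos) ** 2           # (Nq, C)
        q_weights = F.softmax(-dists, dim=1)                    # (Nq, C)

        # Recompute prototypes from hard support labels plus soft query assignments.
        weights = torch.cat([one_hot, q_weights], dim=0)        # (Ns+Nq, C)
        feats = torch.cat([support_feats, query_feats], dim=0)  # (Ns+Nq, D)
        protos = (weights.t() @ feats) / weights.sum(dim=0, keepdim=True).t()

    return protos

# Usage: classify each query sample by its nearest refined prototype.
# logits = -torch.cdist(query_feats, refined_protos) ** 2
# preds = logits.argmax(dim=1)
```

The key design point is that the query set influences the prototypes before any prediction is made, which is what distinguishes a transductive scheme from a standard prototypical network that uses support samples alone.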
