Abstract

The scattering signatures of a synthetic aperture radar (SAR) target image are highly sensitive to the azimuth angle (pose), which increases the demand for training samples in learning-based SAR automatic target recognition (ATR) algorithms and makes SAR ATR more challenging. This paper develops a novel rotation-awareness-based learning framework, termed RotANet, for SAR ATR with limited training samples. First, we propose an encoding scheme that characterizes the rotational pattern of pose variations among intra-class targets. Through permutation, these targets form several ordered sequences with different rotational patterns. By further exploiting the intrinsic relational constraints among these sequences as supervision, we develop a novel self-supervised task in which RotANet learns to predict the rotational pattern of a baseline sequence and then autonomously generalizes this ability to the other sequences without external supervision. This task therefore comprises a learning and self-validation process that achieves human-like rotation awareness, and it serves as a task-induced prior that, in conjunction with an individual target recognition task, regularizes the learned feature domain of RotANet and improves the generalization ability of the features. Extensive experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark database demonstrate the effectiveness of the proposed framework. Compared with other state-of-the-art SAR ATR algorithms, RotANet remarkably improves recognition accuracy, especially with very limited training samples and without any additional data augmentation strategy.
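
To make the self-supervised setup concrete, the sketch below illustrates one plausible reading of the abstract: azimuth-ordered target chips are grouped into short sequences, a sequence is permuted according to a candidate rotational pattern, and a shared feature extractor feeds both a target-recognition head and a rotational-pattern prediction head trained jointly. This is a hedged, minimal PyTorch sketch, not the authors' implementation; the sequence length, the use of all permutations as the pattern codebook, the placeholder backbone, and all names (RotANetSketch, permute_sequence) are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's released code): joint target recognition
# and self-supervised rotational-pattern prediction over permuted sequences.
import itertools
import torch
import torch.nn as nn

SEQ_LEN = 4                                                # assumed sequence length
PATTERNS = list(itertools.permutations(range(SEQ_LEN)))   # assumed pattern codebook

class RotANetSketch(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int = 128):
        super().__init__()
        # Shared feature extractor (placeholder CNN for single-channel SAR chips).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.cls_head = nn.Linear(feat_dim, num_classes)               # target recognition
        self.rot_head = nn.Linear(feat_dim * SEQ_LEN, len(PATTERNS))   # pattern prediction

    def forward(self, seq):                      # seq: (B, SEQ_LEN, 1, H, W)
        b, t = seq.shape[:2]
        feats = self.backbone(seq.flatten(0, 1))        # (B*T, feat_dim)
        logits_cls = self.cls_head(feats)               # per-chip class logits
        logits_rot = self.rot_head(feats.view(b, -1))   # per-sequence pattern logits
        return logits_cls, logits_rot

def permute_sequence(seq, pattern_idx):
    """Reorder an azimuth-sorted sequence according to one rotational pattern."""
    order = torch.tensor(PATTERNS[pattern_idx])
    return seq[:, order]

# Usage: the rotation task acts as a regularizer alongside the recognition loss.
model = RotANetSketch(num_classes=10)
seq = torch.randn(8, SEQ_LEN, 1, 64, 64)                 # dummy azimuth-ordered chips
cls_labels = torch.randint(0, 10, (8 * SEQ_LEN,))
pattern_idx = torch.randint(0, len(PATTERNS), (1,)).item()
logits_cls, logits_rot = model(permute_sequence(seq, pattern_idx))
loss = nn.functional.cross_entropy(logits_cls, cls_labels) \
     + nn.functional.cross_entropy(logits_rot, torch.full((8,), pattern_idx))
```

In this reading, the rotational-pattern loss supervises the network from the sequences themselves (no extra labels), which matches the abstract's description of a task-induced prior on the learned feature domain.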
