Abstract

Accurately detecting ground armored weapons is crucial for gaining the initiative in military operations. Satellite and airborne synthetic aperture radar (SAR) systems are generally limited by their revisit cycles and fixed flight trajectories, which yield only single-view imaging of targets and thereby hamper the recognition of small SAR ground targets. In contrast, MiniSAR can capture multiple views of a target by imaging it from different azimuth angles. In this research, our team uses a self-developed MiniSAR system to acquire multi-view SAR images of real ground armored targets and to recognize those targets. However, recognizing small targets in SAR images faces two significant difficulties. First, small targets in SAR images are prone to interference from background noise. Second, SAR target deformation arises from variations in depression angle and from the imaging process itself. To tackle these difficulties, this paper proposes a novel SAR ground deformation target recognition approach based on a joint multi-view transformer model. The method first preprocesses the SAR images with a denoising method based on a low-frequency prior. It then obtains joint multi-view information through a self-attention mechanism and feeds the joint features into a transformer structure. The outputs are jointly updated by a multi-way averaging adaptive loss function to improve the recognition accuracy of deformed targets. The experimental results demonstrate the superiority of the proposed method in SAR ground deformation target recognition, outperforming representative approaches such as information fusion of target and shadow (IFTS) and Vision Transformer (ViT). The proposed method achieves recognition accuracies of 98.37% and 93.86% on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset and on our own SAR image dataset, respectively. The source code and a sample dataset are available at https://github.com/Lvjiming/MJT.
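To make the pipeline described above concrete, the following is a minimal PyTorch sketch, not the authors' released implementation (see the linked repository for that). It pairs a low-pass FFT filter standing in for the low-frequency-prior denoising, a shared backbone with self-attention over the view dimension feeding a transformer encoder, and a loss that averages per-view cross-entropies as one plausible reading of the multi-way averaging adaptive loss. All module names, dimensions, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def lowpass_denoise(img, keep_ratio=0.1):
    # Hypothetical stand-in for the paper's low-frequency-prior denoising:
    # keep only the lowest spatial frequencies of each SAR image via the FFT.
    f = torch.fft.fftshift(torch.fft.fft2(img))
    h, w = img.shape[-2:]
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(h * keep_ratio)), max(1, int(w * keep_ratio))
    mask = torch.zeros_like(f)
    mask[..., cy - ry:cy + ry, cx - rx:cx + rx] = 1
    return torch.fft.ifft2(torch.fft.ifftshift(f * mask)).real

class MultiViewJointTransformer(nn.Module):
    def __init__(self, num_views=3, feat_dim=128, num_classes=10):
        super().__init__()
        # Shared CNN backbone embedding each single-view SAR image (assumed).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Self-attention across views produces the joint multi-view features.
        self.view_attention = nn.MultiheadAttention(feat_dim, num_heads=4,
                                                    batch_first=True)
        # Standard transformer encoder consumes the joint view tokens.
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One classification head shared across views (assumption).
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, views):
        # views: (batch, num_views, 1, H, W) multi-azimuth SAR images.
        b, v = views.shape[:2]
        tokens = self.backbone(views.flatten(0, 1)).view(b, v, -1)  # (b, v, d)
        joint, _ = self.view_attention(tokens, tokens, tokens)      # joint info
        encoded = self.encoder(joint)                               # (b, v, d)
        return self.head(encoded)                                   # (b, v, classes)

def multiway_average_loss(per_view_logits, labels):
    # One reading of the "multi-way averaging adaptive loss": average the
    # cross-entropy over all view branches so every view's output is updated jointly.
    ce = nn.CrossEntropyLoss()
    losses = [ce(per_view_logits[:, i], labels)
              for i in range(per_view_logits.shape[1])]
    return torch.stack(losses).mean()

# Usage on random data: 4 samples, 3 azimuth views each.
model = MultiViewJointTransformer()
x = lowpass_denoise(torch.randn(4, 3, 1, 64, 64))
y = torch.randint(0, 10, (4,))
loss = multiway_average_loss(model(x), y)
loss.backward()
```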
