Abstract

Deep-learning-based target recognition in synthetic aperture radar (SAR) images has been actively studied in recent years. However, collecting large numbers of labeled SAR images, especially measured SAR target images spanning many classes, is very costly, which makes it difficult to train high-performance classification networks. To alleviate this shortage of SAR data, electromagnetic computational tools are often used to synthesize SAR target images from target models. However, even with sophisticated SAR image modeling, a large domain gap remains between synthetic and measured SAR images, so networks trained on synthetic SAR images tend to show poor classification performance when tested on measured SAR target images. In this paper, we propose a novel transformer-based synthetic-to-measured SAR target image translation network, referred to as SAR-SMT Net, to bridge the gap between synthetic and measured SAR target images. SAR-SMT Net takes synthetic SAR target images as input and estimates the latent representational features of their corresponding measured SAR images, faithfully adjusting the global context and scattering characteristics of the input synthetic images toward the corresponding measured values. In addition, we propose five challenging experimental scenarios for validating SAR image translation performance. In these scenarios, the proposed SAR-SMT Net outperforms previous state-of-the-art methods, demonstrating strong generalization ability when translating synthetic SAR target images into their corresponding measured SAR target images with a high level of fidelity, even for unseen target classes at unseen azimuth angles.
