Abstract

Radar forward-looking imaging has gained increasing significance in diverse applications, including battlefield reconnaissance, target surveillance, and precision guidance. While Synthetic Aperture Radar techniques are widely employed to achieve high azimuth resolution, forward-looking imaging presents its own challenges. These arise from the minimal Doppler frequency variation in forward-looking areas and the Doppler history being symmetric about the flight path. These constraints degrade the radar's ability to accurately detect and image targets in forward-looking areas, so there has been growing interest in novel approaches to these problems. With the advance of deep learning, image super-resolution methods based on convolutional neural networks (CNNs) have received extensive attention and demonstrated good performance, offering fresh insights into improving the azimuth resolution of radar imaging. In this letter, we propose a novel convolutional neural network, called MSUS-Net (Multi-Scale U-Shaped Network), for radar forward-looking imaging. It combines asymmetric feature fusion with a unique connection structure that replaces traditional layers. Moreover, unlike conventional image restoration networks that use a content loss, we use the L1 distance in the frequency domain, which restores missing high-frequency details. Simulated and real radar data validate that this approach offers robust recovery and better imaging performance.
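The frequency-domain L1 loss mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the use of a 2-D FFT, and the normalization (a simple mean over real and imaginary parts) are all assumptions made for clarity.

```python
import numpy as np

def frequency_domain_l1(pred, target):
    """L1 distance between the 2-D FFT spectra of a predicted image
    and a reference image.

    A hedged sketch of a frequency-domain L1 loss; the paper's exact
    normalization and any spectral weighting are not specified here.
    """
    pred_f = np.fft.fft2(pred)
    target_f = np.fft.fft2(target)
    # Penalize the complex spectrum difference element-wise, so mismatches
    # in high-frequency components contribute directly to the loss.
    diff = pred_f - target_f
    return np.mean(np.abs(diff.real) + np.abs(diff.imag))
```

Because every spectral coefficient is penalized equally, errors in high-frequency components (fine image detail) are weighted the same as low-frequency ones, whereas a pixel-domain content loss tends to be dominated by smooth regions.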
