Abstract

Sparsity-driven techniques are widely used to solve the synthetic aperture radar (SAR) imaging problem; however, they are highly sensitive to motion errors. To address this problem, this article proposes a new deep neural network architecture, the sparse autoencoder network (SAE-Net), which performs SAR imaging and autofocus simultaneously. In SAE-Net, the encoder transforms the SAR echo into an imaging result, and the decoder regenerates the SAR echo from that result. The encoder is constructed by unfolding the alternating direction method of multipliers (ADMM), while the decoder is formulated as a linear mapping. A joint loss combining the reconstruction error and the image entropy guides the training of SAE-Net. Notably, the algorithm operates in a fully self-supervised manner and requires no external training dataset. The methodology was tested on both synthetic and real SAR data; the results show that the proposed architecture outperforms other state-of-the-art autofocus methods in sparsity-driven SAR imaging applications.
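The abstract does not give implementation details, but the pieces it names (an ADMM-based sparse encoder, a linear decoder, and a joint reconstruction-plus-entropy loss) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' network: the measurement matrix `A`, the iteration counts, and the weights `lam`, `rho`, and `beta` are all hypothetical, and classical ADMM iterations stand in for the learned, unfolded layers of SAE-Net.

```python
import numpy as np

def soft_threshold(z, tau):
    # Complex soft-thresholding: the proximal operator of the l1 norm,
    # applied to the magnitude while preserving the phase.
    mag = np.abs(z)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0)

def encoder(y, A, n_iters=50, rho=1.0, lam=0.1):
    # Stand-in for the unfolded-ADMM encoder: classical ADMM iterations
    # solving min_x 0.5*||y - A x||^2 + lam*||x||_1. In SAE-Net these
    # iterations would be unfolded into network layers with learned weights.
    n = A.shape[1]
    z = np.zeros(n, dtype=complex)
    u = np.zeros(n, dtype=complex)
    AtA = A.conj().T @ A + rho * np.eye(n)
    Aty = A.conj().T @ y
    for _ in range(n_iters):
        x = np.linalg.solve(AtA, Aty + rho * (z - u))   # x-update (least squares)
        z = soft_threshold(x + u, lam / rho)            # z-update (sparsity)
        u = u + x - z                                   # dual update
    return z

def decoder(x, A):
    # Linear mapping: regenerate the SAR echo from the imaging result.
    return A @ x

def image_entropy(x, eps=1e-12):
    # Image entropy, a standard sharpness measure in SAR autofocus:
    # lower entropy corresponds to a better-focused (sparser) image.
    p = np.abs(x) ** 2
    p = p / (p.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def sae_loss(y, x, A, beta=0.1):
    # Joint self-supervised loss: echo reconstruction error plus a
    # weighted entropy term (beta is a hypothetical trade-off weight).
    recon = float(np.linalg.norm(y - decoder(x, A)) ** 2)
    return recon + beta * image_entropy(x)
```

Because the loss compares the regenerated echo against the measured echo itself, no ground-truth image is needed, which is what makes the scheme self-supervised.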

