Abstract

The sparse imaging method based on compressed sensing (CS) is widely used in millimeter-wave (MMW) synthetic aperture radar (SAR) imaging. However, 3D sparse imaging is limited by difficult parameter tuning, a heavy computational load, and low processing efficiency. In addition, because of motion errors and model mismatch, it is difficult to obtain well-focused results without error-correction techniques. To address these issues, we propose a deep learning framework that integrates 3D sparse imaging and autofocusing, named the 3D Sparse Autofocusing Network (SAF-3DNet), for MMW SAR data processing. The network is built on an auto-encoder, so its parameters can be optimized without ground-truth labels. The encoder backbone is obtained by unrolling approximate message passing (AMP), and frequency-domain operators replace the traditional matrix-vector CS model, which avoids large-scale matrix multiplications and greatly improves computational efficiency. In addition, 2D phase-error estimation in the cross-range plane is embedded into the sparse imaging model, enabling simultaneous 3D imaging and autofocusing. The decoder is designed as a mapping from the autofocused result back to the echo data. Experimental results on both simulated and measured data demonstrate that the proposed SAF-3DNet achieves well-focused 3D reconstruction in a very short time, showing its potential for real-time, high-quality 3D MMW SAR imaging.
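To make the combination of AMP unrolling, FFT-based measurement operators, and embedded phase-error estimation more concrete, the following is a minimal NumPy sketch. It is not the paper's implementation: the frequency-domain transfer function `transfer`, the fixed threshold `lam`, and the per-sample phase-error estimate are illustrative assumptions, the scene is 2D rather than a 3D volume, and SAF-3DNet would unroll these iterations into trainable layers with learned parameters. AMP's theoretical guarantees also assume i.i.d. sensing matrices, so using a deterministic FFT operator here is purely for illustration.

import numpy as np

def soft_threshold(r, lam):
    """Complex soft-thresholding denoiser applied in each AMP iteration."""
    mag = np.abs(r)
    return r * np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)

def make_fft_operators(transfer):
    """Build forward/adjoint measurement operators from a frequency-domain
    transfer function, so A(x) and AH(y) cost O(N log N) per call instead
    of a large matrix-vector product."""
    def A(x):
        return np.fft.ifft2(transfer * np.fft.fft2(x))
    def AH(y):
        return np.fft.ifft2(np.conj(transfer) * np.fft.fft2(y))
    return A, AH

def amp_autofocus(y, transfer, n_iters=10, lam=0.1):
    """Plain AMP loop with a per-sample phase-error estimate folded in.
    Everything here is fixed for illustration; an unrolled network would
    make the thresholds (and possibly step sizes) learnable."""
    A, AH = make_fft_operators(transfer)
    m = y.size
    x = np.zeros_like(y)
    z = y.copy()
    phase = np.zeros(y.shape)                    # cross-range phase-error estimate
    for _ in range(n_iters):
        y_corr = y * np.exp(-1j * phase)         # compensate current phase estimate
        r = x + AH(z)                            # pseudo-data fed to the denoiser
        x = soft_threshold(r, lam)
        onsager = z * (np.count_nonzero(x) / m)  # Onsager correction term
        z = y_corr - A(x) + onsager              # residual update
        phase = np.angle(y * np.conj(A(x)))      # re-estimate phase error from data/model mismatch
    return x, phase

# Toy usage: a sparse 64x64 complex scene observed through a random
# frequency-domain transfer function, with a synthetic phase error.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 64
    scene = np.zeros((n, n), dtype=complex)
    idx = rng.integers(0, n, size=(15, 2))
    scene[idx[:, 0], idx[:, 1]] = rng.standard_normal(15) + 1j * rng.standard_normal(15)
    transfer = np.exp(1j * 2 * np.pi * rng.random((n, n)))
    A, _ = make_fft_operators(transfer)
    err = np.exp(1j * 0.3 * rng.standard_normal((n, n)))
    echo = A(scene) * err
    recon, est_phase = amp_autofocus(echo, transfer, n_iters=20, lam=0.05)
    print("nonzero pixels in reconstruction:", np.count_nonzero(np.abs(recon) > 1e-3))

The point the sketch is meant to make is architectural: because the forward and adjoint operators are realised with FFTs, no measurement matrix is ever stored or multiplied, which is what makes extending such an iteration to full 3D volumes computationally feasible.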
