Abstract

This study proposes a model-driven deep network based on the linear alternating direction method of multipliers (L-ADMM) to address the defocusing that occurs when inverse synthetic aperture radar (ISAR) images targets exhibiting micro-motion under sparse-aperture conditions. The method unfolds the iterative procedure of L-ADMM into a model-driven deep network whose parameters are optimized automatically through learning rather than tuned manually, yielding better-focused images. Imaging results obtained by L-ADMM-net on simulated and experimentally measured data were compared with those of the range-Doppler (R-D) algorithm, the chirplet algorithm, and L-ADMM. The images produced by L-ADMM-net had the lowest entropy and the highest contrast and resolution. Moreover, L-ADMM-net can generate high-resolution images of micro-motion targets with sparse aperture at a low signal-to-noise ratio (SNR), which verifies its robustness, and it updates and adjusts its parameters more stably than L-ADMM. Compared with traditional methods, the proposed approach significantly improves the resolution, robustness, and stability of images of micro-motion targets in a range of scenarios and can provide technical support for future target recognition.
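
The central idea, unfolding each L-ADMM iteration into a network layer whose step size, penalty weight, and threshold are learned instead of hand-tuned, can be illustrated with a minimal sketch. The code below is a hypothetical, real-valued PyTorch toy, not the authors' implementation: the class names, the generic l1-regularized least-squares formulation, and the random measurement matrix standing in for the sparse-aperture ISAR operator are all assumptions made for illustration.

import torch
import torch.nn as nn


def soft_threshold(v, tau):
    # Proximal operator of the l1 norm (sparsity prior).
    return torch.sign(v) * torch.clamp(v.abs() - tau, min=0.0)


class LADMMLayer(nn.Module):
    # One unrolled linearized-ADMM iteration with learnable scalars.
    def __init__(self):
        super().__init__()
        self.eta = nn.Parameter(torch.tensor(0.1))   # gradient step size
        self.rho = nn.Parameter(torch.tensor(1.0))   # augmented-Lagrangian weight
        self.tau = nn.Parameter(torch.tensor(0.05))  # regularization level

    def forward(self, x, z, u, A, y):
        # Linearized x-update: one gradient step on
        # 0.5*||Ax - y||^2 + <u, x - z> + rho/2*||x - z||^2.
        grad = A.t() @ (A @ x - y) + u + self.rho * (x - z)
        x = x - self.eta * grad
        # z-update: soft-threshold (proximal) step enforcing sparsity.
        z = soft_threshold(x + u / self.rho, self.tau / self.rho)
        # Dual ascent step.
        u = u + self.rho * (x - z)
        return x, z, u


class LADMMNet(nn.Module):
    # Stack of K unrolled layers; each layer learns its own parameters.
    def __init__(self, n_layers=10):
        super().__init__()
        self.layers = nn.ModuleList([LADMMLayer() for _ in range(n_layers)])

    def forward(self, A, y):
        n = A.shape[1]
        x = torch.zeros(n, 1)
        z = torch.zeros(n, 1)
        u = torch.zeros(n, 1)
        for layer in self.layers:
            x, z, u = layer(x, z, u, A, y)
        return z  # sparse scene estimate


# Toy usage: recover a sparse scene from undersampled measurements.
torch.manual_seed(0)
A = torch.randn(32, 64)        # stand-in for the sparse-aperture measurement matrix
x_true = torch.zeros(64, 1)
x_true[[5, 20, 41]] = 1.0      # a few dominant scatterers
y = A @ x_true
x_hat = LADMMNet(n_layers=10)(A, y)

Each layer keeps the three L-ADMM updates (linearized gradient step, soft-thresholding, dual ascent) but exposes the step size, penalty weight, and threshold as trainable parameters; training these end to end is what replaces the manual parameter tuning of conventional L-ADMM.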
