Abstract

To improve life-detection radar resolution under given hardware conditions, this letter proposes a deep mutual learning generative adversarial network model (Deep Mutual GAN). In the proposed model, the generator increases the angular resolution of an input low-resolution radar image fivefold, which meets the resolution requirements of life detection. Two generators with identical network structures are used within the GAN and are trained to learn from each other, so that each generator is guided not only by its adversarial game with the discriminator but also by the other generator; each generator thus acquires knowledge both from its own training and from its counterpart. This mutual learning makes GAN convergence more stable and improves the super-resolution result. The network structures of the generator and discriminator are also detailed, in which residual learning and a symmetric architecture are applied. Experimental results show that the proposed method achieves state-of-the-art imaging quality, which benefits subsequent target detection and recognition.
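The abstract only states the high-level idea: two identically structured generators that guide each other while both are trained adversarially against a discriminator. The sketch below illustrates one possible training step for such a scheme in PyTorch. The toy network definitions, the loss weights, and the use of an L1 mutual-consistency term are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one training step for a "deep mutual learning" GAN
# for super-resolution. All architectures and loss weights are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy 5x upsampling generator (stand-in for the residual, symmetric
    generator described in the letter)."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=5, mode="nearest"),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

class Discriminator(nn.Module):
    """Toy discriminator producing one real/fake logit per image."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.body(x)

def train_step(g1, g2, d, opt_g1, opt_g2, opt_d, lr_img, hr_img,
               lambda_adv=1e-3, lambda_mutual=1e-2):
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    sr1, sr2 = g1(lr_img), g2(lr_img)

    # Discriminator: distinguish real high-resolution images from both
    # generators' outputs.
    opt_d.zero_grad()
    real_logits = d(hr_img)
    fake1, fake2 = d(sr1.detach()), d(sr2.detach())
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake1, torch.zeros_like(fake1))
              + bce(fake2, torch.zeros_like(fake2)))
    d_loss.backward()
    opt_d.step()

    # Generator 1: content loss + adversarial loss + mutual term that pulls
    # its output toward generator 2's output (the "guidance" by the peer).
    opt_g1.zero_grad()
    adv1 = d(sr1)
    g1_loss = (l1(sr1, hr_img)
               + lambda_adv * bce(adv1, torch.ones_like(adv1))
               + lambda_mutual * l1(sr1, sr2.detach()))
    g1_loss.backward()
    opt_g1.step()

    # Generator 2: symmetric update, guided by generator 1's output.
    opt_g2.zero_grad()
    sr2 = g2(lr_img)  # recompute so its graph is independent of the G1 step
    adv2 = d(sr2)
    g2_loss = (l1(sr2, hr_img)
               + lambda_adv * bce(adv2, torch.ones_like(adv2))
               + lambda_mutual * l1(sr2, sr1.detach()))
    g2_loss.backward()
    opt_g2.step()

    return d_loss.item(), g1_loss.item(), g2_loss.item()
```

In this sketch the mutual term is a simple L1 distance between the two generators' outputs, with the peer output detached so each generator is only pulled toward, not through, its counterpart; the letter does not specify the exact form of the mutual-learning loss.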
