Abstract

This paper proposes a new infrared and visible image fusion method based on a densely connected disentangled representation generative adversarial network (DCDR-GAN), which separates the content and modal features of infrared and visible images through disentangled representation (DR) and fuses them separately. To handle the mutually exclusive features of infrared and visible images, the modal features are injected into the reconstruction of the content features through adaptive instance normalization (AdaIN), reducing mutual interference. To reduce feature loss and ensure that features at all levels are expressed in the fused image, DCDR-GAN employs densely connected content encoders and a fusion decoder, and builds multi-scale fusion structures between the encoders and the decoder through long skip connections. Meanwhile, content and modal reconstruction losses are proposed to preserve the information of the source images. Finally, the fused image is generated by the two-phase trained model. Subjective and objective evaluations on the TNO and INO datasets show that the proposed method achieves better visual quality and higher index values than other state-of-the-art methods.
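The AdaIN operation mentioned above has a standard, simple form: the content feature map is normalized per channel and then re-scaled and re-shifted with the channel-wise statistics of the modal (style) feature. The sketch below is a minimal NumPy illustration of that generic AdaIN formula, not the paper's actual DCDR-GAN implementation; the function name and the (C, H, W) layout are assumptions for illustration.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Generic adaptive instance normalization (AdaIN).

    Normalizes each channel of `content` to zero mean / unit std,
    then applies the per-channel mean and std of `style`.
    Both inputs are feature maps of shape (C, H, W).
    `eps` guards against division by zero.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content statistics, then inject the style statistics.
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

After this operation, each channel of the output carries the spatial structure of the content feature but the first- and second-order statistics of the modal feature, which is why AdaIN is a natural way to inject modality information into a content reconstruction.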
