Abstract

Due to the discrepancy between training and test data, autoencoder-based image fusion methods trained on natural images easily lose vital information from infrared and visible images. Generative adversarial network (GAN) based methods usually conduct adversarial learning in the image domain, which is difficult to optimize and ultimately degrades fusion performance. To address these problems, this paper proposes an autoencoder-based network, the Extraction-and-Reconstruction Network (ERNet), to fuse infrared and visible images. To mitigate the discrepancy problem, ERNet's encoder is trained on infrared and visible images. To train the encoder stably, we conduct adversarial learning in the feature domain of infrared and visible images, so that the well-trained encoder can efficiently extract vital features from them. ERNet's decoder is trained on natural images in a supervised manner and reconstructs the final fused image from the vital features extracted by the encoder. The encoder and decoder are trained alternately to further mitigate the discrepancy problem. Experimental results show that ERNet effectively extracts vital features of infrared and visible images and produces higher-quality fused images. Our fusion results are available at https://github.com/suweijian1996/ERNet.
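To make the training scheme summarized above concrete, the sketch below shows one way the two stages could alternate: an encoder trained adversarially in the feature domain against a feature discriminator, and a decoder trained with supervised reconstruction on natural images. The Encoder, Decoder, and FeatureDiscriminator modules, the averaged fusion rule, the grayscale inputs, and all hyperparameters are illustrative assumptions for this sketch, not the authors' released ERNet implementation.

```python
# Minimal sketch of feature-domain adversarial training (encoder) alternating with
# supervised reconstruction (decoder). All module definitions and the fusion rule
# are hypothetical placeholders, not the authors' code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, f):
        return self.net(f)

class FeatureDiscriminator(nn.Module):
    """Scores whether a feature map looks like a source-image feature."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, 1))
    def forward(self, f):
        return self.net(f)

encoder, decoder, disc = Encoder(), Decoder(), FeatureDiscriminator()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

def train_encoder_step(ir, vis):
    """Adversarial step in the feature domain: the discriminator separates
    source-image features from fused features; the encoder tries to fool it."""
    feat_ir, feat_vis = encoder(ir), encoder(vis)
    fused = 0.5 * (feat_ir + feat_vis)               # placeholder fusion rule
    real = torch.cat([feat_ir, feat_vis], dim=0)

    # Discriminator update on detached features.
    opt_disc.zero_grad()
    d_loss = bce(disc(real.detach()), torch.ones(real.size(0), 1)) + \
             bce(disc(fused.detach()), torch.zeros(fused.size(0), 1))
    d_loss.backward()
    opt_disc.step()

    # Encoder update: make fused features indistinguishable from source features.
    opt_enc.zero_grad()
    g_loss = bce(disc(fused), torch.ones(fused.size(0), 1))
    g_loss.backward()
    opt_enc.step()

def train_decoder_step(natural):
    """Supervised reconstruction on (grayscale) natural images; encoder frozen."""
    opt_dec.zero_grad()
    with torch.no_grad():
        feat = encoder(natural)
    loss = mse(decoder(feat), natural)
    loss.backward()
    opt_dec.step()
```

In this sketch, alternating calls to `train_encoder_step` and `train_decoder_step` mirror the alternate training of encoder and decoder described in the abstract.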
