Abstract

Infrared and visible images form a pair of multi-source, multi-sensor images. However, infrared images lack structural detail, and visible images are susceptible to the imaging environment. To fully exploit the complementary information of infrared and visible images, a practical fusion method, termed RCGAN, is proposed in this paper. In RCGAN, we introduce the coupled generative adversarial network to the field of image fusion for the first time. Moreover, a simple yet efficient relativistic discriminator is applied in our network, which makes the network converge faster. More importantly, unlike previous works in which the label for the generator is either the infrared image or the visible image, we propose a strategy that uses a pre-fused image as the label. This technical innovation means the fused image is no longer generated from scratch, but refined from "existence" to "excellence." Extensive experiments demonstrate that the proposed RCGAN produces faithful fused images that efficiently preserve the rich texture of visible images and the thermal radiation information of infrared images. Compared with traditional methods, it avoids complex manually designed fusion rules, and it also shows clear advantages over other deep learning-based fusion methods.
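The relativistic discriminator mentioned in the abstract scores whether real samples look more realistic *on average* than generated ones, rather than classifying each sample in isolation, which is one reason it tends to stabilize and speed up GAN training. The paper's exact loss formulation is not given here, so the following is a minimal NumPy sketch of the standard relativistic average (RaGAN) objectives; the function names and the use of raw critic scores are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(c_real, c_fake):
    """Relativistic average discriminator loss (illustrative sketch).

    c_real / c_fake: raw (pre-sigmoid) critic scores for real samples
    (e.g. pre-fused label images) and generator outputs.
    """
    # Probability that a real sample is more realistic than the average fake,
    # and that a fake is more realistic than the average real.
    real_vs_fake = sigmoid(c_real - c_fake.mean())
    fake_vs_real = sigmoid(c_fake - c_real.mean())
    return -(np.log(real_vs_fake).mean() + np.log(1.0 - fake_vs_real).mean())

def relativistic_g_loss(c_real, c_fake):
    """Generator loss: the same objective with the roles swapped."""
    fake_vs_real = sigmoid(c_fake - c_real.mean())
    real_vs_fake = sigmoid(c_real - c_fake.mean())
    return -(np.log(fake_vs_real).mean() + np.log(1.0 - real_vs_fake).mean())
```

When the critic cleanly separates real from fake scores, the discriminator loss is small and the generator loss is large, and vice versa; at indistinguishable scores both losses sit at 2·log 2 ≈ 1.386.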

