Abstract

Visible images contain clear texture information and high spatial resolution but are unreliable at night or under occlusion. Infrared images capture target thermal radiation under day, night, adverse weather, and occlusion conditions; however, they often lack sharp contours and texture information. Consequently, a growing number of researchers fuse visible and infrared images to combine their complementary information, which normally requires two precisely matched images. In practice, however, perfectly matched visible and infrared image pairs are difficult to obtain. To address this issue, we propose a new network model based on generative adversarial networks (GANs) to fuse unmatched infrared and visible images. Our method generates the corresponding infrared image from a visible image and then fuses the two images to obtain richer information. The effectiveness of the proposed method is verified qualitatively and quantitatively through experiments on public datasets. In addition, the fused images produced by the proposed method contain more abundant texture and thermal radiation information than those of other methods.

Highlights

  • Image fusion involves the use of mathematical methods to comprehensively process important information acquired by multiple sensors to produce a composite image that is easier to understand, thereby greatly improving the utilization rate of the image information and the reliability and automation degree of systems for target detection and recognition

  • Visible images obtained by spectral reflection offer high resolution, excellent image quality, and rich background details but cannot detect objects under hidden, low-light, or night conditions. The advantages of visible and infrared images can be combined by constructing fused images that retain richer feature information, making them suitable for subsequent processing tasks

  • Common image fusion methods based on the spatial domain include linear weighted image fusion, false color image fusion, image fusion based on modulation, image fusion based on statistics, and image fusion based on neural networks [1, 2]
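Of the spatial-domain methods listed above, linear weighted fusion is the simplest. The sketch below illustrates it for two registered single-channel images; the function name, the default weight `alpha=0.5`, and the 8-bit value range are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def linear_weighted_fusion(visible, infrared, alpha=0.5):
    """Fuse two registered single-channel images by a per-pixel
    weighted average: F = alpha * V + (1 - alpha) * I.

    `alpha` trades visible-image texture against infrared intensity;
    the value 0.5 is an arbitrary illustrative default."""
    visible = visible.astype(np.float64)
    infrared = infrared.astype(np.float64)
    fused = alpha * visible + (1.0 - alpha) * infrared
    # Clip back to the 8-bit display range.
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy example: constant 2x2 "images" to make the arithmetic visible.
vis = np.full((2, 2), 200, dtype=np.uint8)
ir = np.full((2, 2), 100, dtype=np.uint8)
fused = linear_weighted_fusion(vis, ir, alpha=0.5)
# each pixel: 0.5 * 200 + 0.5 * 100 = 150
```

The fixed global weight is also the method's weakness: it cannot favor thermal targets in one region and texture in another, which is one motivation for the learned fusion approaches discussed later.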


Summary

Introduction

Image fusion involves the use of mathematical methods to comprehensively process important information acquired by multiple sensors to produce a composite image that is easier to understand, thereby greatly improving the utilization rate of the image information and the reliability and automation degree of systems for target detection and recognition. Because the thermal radiation in infrared images and the texture information in visible images are represented in fundamentally different ways, multiscale decomposition methods are not well suited to the fusion of infrared and visible images. To overcome this problem, the developers of FusionGAN [16] proposed an infrared and visible image fusion method based on the novel perspective of generative adversarial networks (GANs) [17].
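The GAN-based approach can be illustrated by the generator's content loss, which in FusionGAN-style methods pushes the fused image toward infrared intensities while matching visible-image gradients. The sketch below is a simplified numpy version under stated assumptions: forward-difference gradients, mean-squared terms instead of Frobenius norms over a batch, and a hypothetical weight `xi`; it omits the adversarial term entirely.

```python
import numpy as np

def gradients(img):
    # Forward-difference gradients; a simple stand-in for the gradient
    # operator used in FusionGAN-style content losses.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def content_loss(fused, ir, vis, xi=1.0):
    """Sketch of a fusion content loss:
    intensity term -> keep thermal radiation from the infrared image;
    gradient term  -> keep texture detail from the visible image.
    `xi` is a hypothetical balancing weight."""
    intensity = np.mean((fused - ir) ** 2)
    fgx, fgy = gradients(fused)
    vgx, vgy = gradients(vis)
    texture = np.mean((fgx - vgx) ** 2 + (fgy - vgy) ** 2)
    return intensity + xi * texture

# A fused image equal to a flat infrared image, with a flat visible
# image, incurs zero loss; any deviation in intensity raises it.
ir = np.ones((4, 4))
vis = np.ones((4, 4))
zero_loss = content_loss(ir.copy(), ir, vis)
nonzero_loss = content_loss(np.zeros((4, 4)), ir, vis)
```

In the actual method this content loss is combined with an adversarial loss from a discriminator that tries to tell fused images apart from real visible images, which is what forces the generator to add texture beyond what the gradient term alone achieves.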

Related Studies
GAN and Its Derivatives
Experimental Validation on Fusion Performance
Conclusion
