Abstract

The fusion of infrared and visible images has important applications in many engineering fields. However, current fusion methods often produce images with unclear texture details and an unbalanced presentation of infrared targets and texture details, resulting in information loss. In this article, we propose an improved generative adversarial network (GAN) model for fusing infrared and visible images. In both the generator and the discriminator, we introduce densely connected blocks that link features across layers, improving network efficiency and strengthening the network's ability to extract information from the source images. We also construct a content loss function from four terms, an infrared-gradient loss, a visible-intensity loss, an infrared-intensity loss, and a visible-gradient loss, to balance infrared radiation information against visible texture details so that the fused image achieves the desired result. The effectiveness of the method is demonstrated through ablation experiments on the TNO dataset and through comparisons with four traditional fusion methods and three deep-learning fusion methods. The experimental results show that our method achieves the best score on five of the ten evaluation metrics, a significant improvement over the other methods.
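As a concrete illustration of the densely connected blocks described above, the following PyTorch-style sketch shows one way such a block could be built; the layer count, growth rate, and activation choice are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a densely connected block of the kind the
# abstract describes. Layer count, growth rate, and activation are
# illustrative assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth: int = 16, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            channels += growth  # each new layer sees all earlier feature maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # dense connectivity: concatenate every preceding output
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```

Likewise, the four-term content loss could take a form such as the following, where I_f, I_ir, and I_vis denote the fused, infrared, and visible images, the gradient operator extracts texture detail, and H x W is the image size; the squared Frobenius norms, the normalization, and the trade-off weights lambda_1 through lambda_4 are assumptions rather than the paper's exact formulation:

```latex
\mathcal{L}_{\mathrm{content}} = \frac{1}{HW}\Big(
      \lambda_{1}\,\lVert I_{f} - I_{ir}  \rVert_{F}^{2}                 % infrared intensity
    + \lambda_{2}\,\lVert \nabla I_{f} - \nabla I_{ir}  \rVert_{F}^{2}   % infrared gradient
    + \lambda_{3}\,\lVert I_{f} - I_{vis} \rVert_{F}^{2}                 % visible intensity
    + \lambda_{4}\,\lVert \nabla I_{f} - \nabla I_{vis} \rVert_{F}^{2}   % visible gradient
\Big)
```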
