Abstract

As various attention mechanisms have been proposed, attention has been applied to many different deep learning tasks. Most practices and experiments demonstrate that attention mechanisms help convolutional neural networks (CNNs) capture critical features and improve performance. In the image fusion field, however, few methods employ attention mechanisms to integrate images of different modalities. In this paper, we therefore incorporate three types of attention mechanisms, namely self-attention, dual attention, and multi-scale attention, into a basic image fusion network to improve the quality of the fused images. Since traditional convolutional neural networks and generative adversarial networks (GANs) have some shortcomings, a modified GAN serves as the basic network. In addition, dilated convolution is used in the basic network because it enlarges the convolutional map and the receptive field of the kernels while keeping the kernel size unchanged. Experimental comparison shows that multi-scale attention is the best choice for infrared and visible image fusion, and extensive experimental results show that our method enhances the contrast of the fused image and preserves more thermal and detail information.
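The receptive-field claim about dilated convolution can be illustrated with standard convolution arithmetic: a kernel of size k with dilation rate d spans d·(k−1)+1 input positions while keeping the same number of parameters. A minimal sketch (pure Python; the function name is illustrative, not from the paper):

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective span of a dilated convolution kernel along one axis:
    d * (k - 1) + 1. Standard convolution corresponds to d = 1."""
    return d * (k - 1) + 1

# A 3x3 kernel keeps 9 weights regardless of dilation,
# but its receptive field along each axis grows with the dilation rate:
for d in (1, 2, 4):
    print(f"dilation={d}: span={effective_kernel_size(3, d)}")
# dilation=1: span=3
# dilation=2: span=5
# dilation=4: span=9
```

This is why stacking dilated convolutions can cover a large receptive field cheaply, which is the property the fusion network exploits here.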
