Abstract
As various attention mechanisms have been proposed, they have been applied across a wide range of deep learning tasks. Most practices and experiments demonstrate that attention mechanisms help convolutional neural networks (CNNs) capture critical features and improve their performance. In the image fusion field, however, few methods employ attention mechanisms to integrate images of different modalities. In this paper, therefore, we incorporate three types of attention mechanism (self-attention, dual attention, and multi-scale attention) into a basic image fusion network to improve the quality of the fused images. Since traditional convolutional neural networks and generative adversarial networks (GANs) have some shortcomings, a modified GAN serves as the basic network. In addition, dilated convolution is used in the basic network because it enlarges the convolutional map and the receptive field of the kernels without increasing the kernel size. Experimental comparison shows that multi-scale attention is the best choice for infrared and visible image fusion, and extensive experimental results show that our method enhances the contrast of the fused image and preserves more thermal and detail information.
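The receptive-field gain from dilated convolution mentioned above can be illustrated with a minimal sketch; the helper function below is ours for illustration, not part of the paper's method:

```python
def effective_kernel_size(kernel_size: int, dilation: int) -> int:
    """Span of a 1-D dilated convolution kernel over the input.

    A dilated convolution inserts (dilation - 1) gaps between the
    kernel taps, so a kernel with k taps covers
    k + (k - 1) * (dilation - 1) input positions while still using
    only k learned weights.
    """
    return kernel_size + (kernel_size - 1) * (dilation - 1)


# A 3-tap kernel with dilation 2 spans as many input positions as a
# plain 5-tap kernel, with no extra parameters.
print(effective_kernel_size(3, 1))  # 3
print(effective_kernel_size(3, 2))  # 5
print(effective_kernel_size(3, 4))  # 9
```

Stacking such layers with growing dilation rates lets the network see a large context cheaply, which is why dilation is attractive for fusion networks that must aggregate both global thermal structure and fine visible detail.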