Abstract

The goal of image fusion is to combine the complementary features of two images to generate an information-rich fused image. However, preserving both detail and scene information is difficult for existing image fusion algorithms. We therefore propose an infrared and visible image fusion method that combines detail and scene information (DSFusion). Specifically, we first design a local attention module (LAM) that extracts features from the source image from multiple perspectives in order to better preserve fine detail information. Moreover, to distinguish and highlight the differences between the two modalities, we improve the channel attention module. Finally, we design a new loss function that effectively balances the detail and scene information of the fused image. Extensive testing on publicly available datasets demonstrates that DSFusion surpasses state-of-the-art methods in both qualitative and quantitative evaluation. Furthermore, directly applying the trained model to other datasets yields promising results in generalization experiments, indicating the model's excellent generalization capability. The code is available at https://github.com/LKZ1584905069/DSFusion.
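For concreteness, the sketch below illustrates one common way a detail/scene trade-off is encoded in infrared-visible fusion losses: an intensity term pulls the fused image toward the stronger source response (scene information), while a gradient term pulls it toward the stronger source gradients (detail information). This is a minimal, assumed example, not DSFusion's actual loss; the Sobel-based gradient helper, the max-based targets, and the alpha/beta weights are all hypothetical.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(x):
    # Approximate per-pixel gradient magnitude with Sobel filters (texture/detail cue).
    # Assumes single-channel input of shape (N, 1, H, W).
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel y kernel is the transpose of the x kernel
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(fused, ir, vis, alpha=1.0, beta=10.0):
    # Scene/intensity term: keep fused pixels close to the stronger source response.
    loss_intensity = F.l1_loss(fused, torch.max(ir, vis))
    # Detail term: keep fused gradients close to the stronger source gradients.
    loss_detail = F.l1_loss(sobel_gradient(fused),
                            torch.max(sobel_gradient(ir), sobel_gradient(vis)))
    # alpha and beta are assumed weights balancing scene vs. detail preservation.
    return alpha * loss_intensity + beta * loss_detail

# Usage: grayscale tensors of shape (N, 1, H, W)
# loss = fusion_loss(fused, ir, vis)
```

Here the relative weighting of the two terms controls how aggressively texture is preserved against overall scene brightness; the paper's loss presumably tunes this balance in its own way.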
