Abstract

Traditional deep-learning-based fusion algorithms usually take the original image as input for feature extraction, which easily leads to a lack of rich detail and background information in the fusion results. To address this issue, we propose a fusion algorithm based on mutually guided image filtering and cross-transmission, termed MGFCTFuse. First, an image decomposition method based on mutually guided image filtering is designed, which decomposes the original image into a base layer and a detail layer. Second, to preserve as much background and detail as possible during feature extraction, the base layer is concatenated with the corresponding original image to extract deeper features. Moreover, to enhance the texture details in the fusion results, the information in the visible and infrared detail layers is fused, and an enhancement module is constructed to strengthen texture-detail contrast. Finally, to improve the interaction between different features, a cross-transmission decoding network is designed for feature reconstruction, which further improves the quality of image fusion. To verify the advantages of the proposed algorithm, experiments are conducted on the TNO, MSRS, and RoadScene image fusion datasets; the results demonstrate that the algorithm outperforms nine comparative algorithms in both subjective and objective evaluations.
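For concreteness, the decomposition step can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it approximates mutually guided image filtering by applying a classic guided filter cross-modally, so that each modality is smoothed under the other's guidance and the residual forms the detail layer. The function names and hyperparameters (`r`, `eps`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def box_filter(x, r):
    # Local mean over a (2r+1) x (2r+1) window, implemented with average pooling.
    return F.avg_pool2d(x, 2 * r + 1, stride=1, padding=r, count_include_pad=False)

def guided_filter(guide, p, r=4, eps=1e-2):
    # Classic guided filter: edge-preserving smoothing of p using `guide` as guidance.
    mean_g, mean_p = box_filter(guide, r), box_filter(p, r)
    var_g = box_filter(guide * guide, r) - mean_g ** 2
    cov_gp = box_filter(guide * p, r) - mean_g * mean_p
    a = cov_gp / (var_g + eps)          # local linear coefficient
    b = mean_p - a * mean_g
    return box_filter(a, r) * guide + box_filter(b, r)

def mutual_decompose(vis, ir, r=4, eps=1e-2):
    # Cross-guided decomposition (illustrative stand-in for mutually guided
    # image filtering): each modality's base layer is obtained by filtering it
    # with the other modality as guidance, so mutually consistent structures
    # stay in the base layer and modality-specific texture moves to the detail layer.
    base_vis = guided_filter(ir, vis, r, eps)
    base_ir = guided_filter(vis, ir, r, eps)
    return (base_vis, vis - base_vis), (base_ir, ir - base_ir)

if __name__ == "__main__":
    vis = torch.rand(1, 1, 128, 128)  # visible image, values in [0, 1]
    ir = torch.rand(1, 1, 128, 128)   # infrared image
    (b_v, d_v), (b_i, d_i) = mutual_decompose(vis, ir)
    assert torch.allclose(b_v + d_v, vis)  # base + detail reconstructs the input by construction
```

In the full pipeline described above, each base layer would then be concatenated with its original image before feature extraction, while the two detail layers would be fused and passed through the enhancement module.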
