Abstract

Visible (VIS) and infrared (IR) image fusion (VIF) is a technique for synthesizing a fused image with high visual perception quality. Existing fusion methods typically work by discovering the commonalities underlying the two modalities and fusing them in a common space. However, these methods often ignore modality differences, such as fuzzy details in the IR image, and their elaborate architectures also lead to slow fusion. To address these issues, we propose a real-time end-to-end VIF model based on layer decomposition and re-parameterization (LDRepFM), composed of a Layer Decomposition Guidance Network (LDGNet) and a Re-parameterization Fusion Network (RepFNet). First, LDGNet alleviates visual quality degradation of the fused image by decomposing the IR image into a structural layer and a fuzzy layer. Second, to achieve a favorable trade-off between fusion speed and evaluation metrics, RepFNet decouples the training-time multi-branch architecture from the inference-time plain architecture. Third, the structural layer decomposed by LDGNet is used to construct a guidance fusion loss for optimizing RepFNet. Finally, experiments on the publicly available TNO, RoadScene, M3FD, and RegDB datasets demonstrate that the proposed method is comparable to the state of the art in both visual effect and quantitative metrics. The code is publicly available at https://github.com/luming1314/LDRepFM.
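The abstract does not specify RepFNet's exact branch design, but the general idea of structural re-parameterization (popularized by RepVGG) is that several parallel linear branches used at training time can be algebraically merged into a single convolution kernel for inference, since convolution is linear. The following is a minimal single-channel sketch under the assumption of a 3x3 conv branch, a 1x1 conv branch, and an identity branch; the function names and branch choice are illustrative, not taken from the paper.

```python
import numpy as np

def conv2d(x, k):
    """Valid-padding 2D cross-correlation, single channel (illustrative, unoptimized)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def merge_branches(k3, k1):
    """Merge 3x3 conv + 1x1 conv + identity branches into one 3x3 kernel.

    The 1x1 kernel and the identity map are both expressed as 3x3 kernels
    with a single centered tap, then summed with the 3x3 kernel.
    """
    k1_pad = np.zeros((3, 3))
    k1_pad[1, 1] = k1          # 1x1 conv as a centered 3x3 kernel
    ident = np.zeros((3, 3))
    ident[1, 1] = 1.0          # identity branch as a centered delta kernel
    return k3 + k1_pad + ident
```

By linearity, running the merged kernel on a zero-padded input reproduces the sum of the three branch outputs exactly, which is what lets a plain inference-time architecture match the training-time multi-branch one at no accuracy cost.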

