Abstract
Most existing deep learning-based multi-modal medical image fusion (MMIF) methods rely on single-branch feature extraction strategies to achieve good fusion performance. However, for MMIF tasks, such a structure severs the internal connections between the source images, resulting in information redundancy and degraded fusion performance. To this end, this paper proposes a novel unsupervised network, termed CEFusion. Unlike existing architectures, a cross-encoder is designed that exploits the complementary properties of the source images to refine source features through feature interaction and reuse. Furthermore, to force the network to learn complementary information between the source images and to generate fused images with high contrast and rich textures, a hybrid loss consisting of weighted fidelity and gradient losses is proposed. Specifically, the weighted fidelity loss not only forces the fusion result to approximate the source images but also effectively preserves the luminance information of the source images through weight estimation, while the gradient loss preserves their texture information. Experimental results demonstrate the superiority of the method over the state of the art in terms of subjective visual quality and quantitative metrics on various datasets.
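The hybrid loss is concrete enough to sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: the luminance-based weight estimation, the Sobel gradient operator, the max-gradient target, and the balance parameter `lam` are assumptions chosen for clarity.

```python
import torch
import torch.nn.functional as F


def sobel_gradient(img):
    """Approximate image gradients with Sobel kernels (N, 1, H, W input)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return torch.abs(F.conv2d(img, kx, padding=1)) + torch.abs(F.conv2d(img, ky, padding=1))


def hybrid_loss(fused, src_a, src_b, lam=10.0):
    """Hypothetical hybrid loss: weighted fidelity term + gradient term."""
    # Weight estimation from mean source luminance (illustrative choice):
    # the brighter source contributes more to the fidelity term.
    la, lb = src_a.mean(dim=(1, 2, 3)), src_b.mean(dim=(1, 2, 3))
    w_a = (la / (la + lb + 1e-8)).view(-1, 1, 1, 1)
    w_b = 1.0 - w_a

    # Weighted fidelity: pull the fused image toward both sources.
    fidelity = (w_a * (fused - src_a) ** 2 + w_b * (fused - src_b) ** 2).mean()

    # Gradient loss: preserve the stronger texture of the two sources.
    grad_target = torch.max(sobel_gradient(src_a), sobel_gradient(src_b))
    gradient = F.l1_loss(sobel_gradient(fused), grad_target)

    return fidelity + lam * gradient
```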