Abstract

Multi-modal medical image fusion aims to integrate distinct imaging modalities to yield more comprehensive and precise medical images, which can benefit subsequent image analysis tasks. However, prevailing state-of-the-art image fusion methods, despite substantial advances, do not explicitly address the efficient handling of complementary information between modalities. Moreover, most current multi-modal medical image fusion methods struggle to integrate with practical tasks because they lack the guidance of semantic information, which hinders the generation of high-quality images for accurate identification of lesion areas. To address these challenges, this paper introduces a novel semantic information-guided, modality-specific fusion network for multi-modal magnetic resonance (MR) images, named SIMFusion. Specifically, we propose a decomposition branch that captures common and specific features from MR images of different modalities and fuses them, reducing information redundancy through a correlated mutual information loss. We then obtain their semantic characteristics via a semantic branch built on a pre-trained segmentation network and, finally, achieve an adaptive balance between the two sets of features through a specialized fusion strategy. Extensive experiments demonstrate the superiority of SIMFusion over existing competing techniques on both the BraTS2019 and ISLES2022 datasets, with an 8.1% improvement in MI and a 38.7% improvement in VIFF on T2-T1ce image pairs, highlighting the potential of the proposed method as a promising solution for MR image fusion in practical applications. Our code will be released at https://github.com/Zhangxw-ustc/SIMFusion.
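
The abstract describes a dual-branch design: a decomposition branch that separates modality-common from modality-specific features, a semantic branch built on a frozen pre-trained segmentation network, and a fusion strategy that adaptively balances the two. The following is a minimal, hypothetical PyTorch sketch of that structure; all module names, channel sizes, and the gating-based fusion rule are assumptions for illustration, not the paper's actual architecture, and the correlated mutual information loss is omitted.

```python
# Hypothetical sketch of a semantic-guided, modality-specific fusion network.
# Module names, channel widths, and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + ReLU, used as a generic building block."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class DecompositionBranch(nn.Module):
    """Splits one MR modality into modality-common and modality-specific features."""
    def __init__(self, ch=32):
        super().__init__()
        self.shared = conv_block(1, ch)
        self.common_head = conv_block(ch, ch)    # features expected to correlate across modalities
        self.specific_head = conv_block(ch, ch)  # features expected to stay modality-specific

    def forward(self, x):
        h = self.shared(x)
        return self.common_head(h), self.specific_head(h)


class SemanticBranch(nn.Module):
    """Wraps a (frozen) pre-trained segmentation network and exposes its output as guidance."""
    def __init__(self, seg_net, ch=32):
        super().__init__()
        self.seg_net = seg_net.eval()            # assumed pre-trained, single-channel segmentation logits
        for p in self.seg_net.parameters():
            p.requires_grad = False
        self.project = conv_block(2, ch)         # project logits of both inputs into the fusion space

    def forward(self, x1, x2):
        with torch.no_grad():
            s = torch.cat([self.seg_net(x1), self.seg_net(x2)], dim=1)
        return self.project(s)


class SIMFusionSketch(nn.Module):
    """Fuses two MR modalities using decomposed features balanced by semantic guidance."""
    def __init__(self, seg_net, ch=32):
        super().__init__()
        self.dec1 = DecompositionBranch(ch)
        self.dec2 = DecompositionBranch(ch)
        self.semantic = SemanticBranch(seg_net, ch)
        self.gate = nn.Conv2d(ch, 1, 1)          # semantic-driven weight between the two common feature sets
        self.reconstruct = nn.Sequential(conv_block(3 * ch, ch), nn.Conv2d(ch, 1, 1))

    def forward(self, x1, x2):
        c1, s1 = self.dec1(x1)
        c2, s2 = self.dec2(x2)
        sem = self.semantic(x1, x2)
        w = torch.sigmoid(self.gate(sem))        # one possible realization of the adaptive balance
        common = w * c1 + (1 - w) * c2           # semantic guidance weights the common features
        fused = torch.cat([common, s1 + s2, sem], dim=1)
        return self.reconstruct(fused)


if __name__ == "__main__":
    seg_net = nn.Conv2d(1, 1, 3, padding=1)      # stand-in for a real pre-trained segmentation network
    model = SIMFusionSketch(seg_net)
    fused = model(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
    print(fused.shape)                           # torch.Size([1, 1, 128, 128])
```

In this sketch the semantic branch only gates how the common features of the two modalities are mixed; a training objective such as the correlated mutual information loss between the common and specific streams would be added on top of this forward pass.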
