Abstract
Magnetic Resonance Imaging (MRI) is a pivotal neuroimaging technique capable of generating images with various contrasts, known as multi-modal images. Integrating these diverse modalities is essential for improving model performance across a range of tasks. However, in real clinical scenarios, acquiring MR images for all modalities is frequently hindered by factors such as patient comfort and scanning costs. Effectively fusing the available modalities to synthesize missing ones has therefore become a research hotspot in smart healthcare, particularly in the context of the Internet of Medical Things (IoMT). In this study, we introduce a multi-modal coordinated fusion network (MCF-Net) with Patch Complementarity Pre-training. The network leverages the complementarity and correlation between modalities to fuse multi-modal MR images, addressing challenges in the IoMT. Specifically, we first employ a Patch Complementarity Mask Autoencoder (PC-MAE) for self-supervised pre-training. A complementarity learning mechanism is introduced to align masks and visual annotations between the two modalities. A dual-branch MAE architecture with a shared encoder–decoder is then adopted to facilitate cross-modal interactions among mask tokens. Furthermore, during the fine-tuning phase, we incorporate an Attention-Driven Fusion (ADF) module into the MCF-Net, which synthesizes images of missing modalities by fusing multi-modal features extracted by the pre-trained PC-MAE encoder. Additionally, we leverage the pre-trained encoder to extract high-level features from both synthetic and corresponding real images, enforcing consistency throughout training. Experimental results show that our fusion method notably improves performance across modalities and outperforms state-of-the-art techniques.
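To make the complementary-masking idea concrete, the following is a minimal PyTorch-style sketch under our own assumptions: patches visible in one modality are (mostly) masked in the other, and a single shared encoder processes the visible tokens of both modalities so they can interact. The names `complementary_masks` and `SharedMAEEncoder` are illustrative, not from the paper, and the real PC-MAE encoder–decoder and ADF module are more elaborate.

```python
# Hypothetical sketch of complementary patch masking for two MRI modalities
# (e.g., T1 and T2). All names are illustrative, not from the paper.
import torch
import torch.nn as nn


def complementary_masks(num_patches: int, mask_ratio: float, batch: int, device=None):
    """Sample a random patch mask for modality A and use its complement for B,
    so patches visible in one modality are masked in the other."""
    noise = torch.rand(batch, num_patches, device=device)
    num_masked = int(num_patches * mask_ratio)
    ids_sorted = noise.argsort(dim=1)                      # random permutation per sample
    mask_a = torch.zeros(batch, num_patches, dtype=torch.bool, device=device)
    mask_a.scatter_(1, ids_sorted[:, :num_masked], True)   # True = masked patch
    mask_b = ~mask_a                                        # complementary mask for modality B
    return mask_a, mask_b


class SharedMAEEncoder(nn.Module):
    """Toy shared encoder: embeds visible patches of both modalities and lets
    them interact through the same Transformer blocks (cross-modal interaction)."""
    def __init__(self, patch_dim=256, embed_dim=128, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.modality_embed = nn.Embedding(2, embed_dim)    # tags tokens as modality A or B
        layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, patches_a, patches_b, mask_a, mask_b):
        # Keep only the visible (unmasked) patches of each modality, then encode jointly.
        vis_a = patches_a[~mask_a].view(patches_a.size(0), -1, patches_a.size(-1))
        vis_b = patches_b[~mask_b].view(patches_b.size(0), -1, patches_b.size(-1))
        vis_a = self.embed(vis_a) + self.modality_embed(
            torch.zeros(1, dtype=torch.long, device=patches_a.device))
        vis_b = self.embed(vis_b) + self.modality_embed(
            torch.ones(1, dtype=torch.long, device=patches_b.device))
        return self.blocks(torch.cat([vis_a, vis_b], dim=1))


# Usage: 64 patches per image, 75% of modality A masked (so 75% of B stays visible).
a = torch.randn(2, 64, 256)
b = torch.randn(2, 64, 256)
m_a, m_b = complementary_masks(num_patches=64, mask_ratio=0.75, batch=2)
tokens = SharedMAEEncoder()(a, b, m_a, m_b)
print(tokens.shape)  # (2, 64, 128): 16 visible A tokens + 48 visible B tokens
```

In this sketch the MAE reconstruction heads, the ADF fusion module, and the feature-consistency term between synthetic and real images are omitted; the point is only how a complementary mask pair forces the shared encoder to recover each modality's masked content from the other's visible patches.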