Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique that has become an essential tool for diagnosing various diseases and visualizing internal structures and tissues in the human body. MRI reconstruction and super-resolution (SR) are two techniques that can enhance image quality and accelerate the imaging process. However, current methods perform these tasks independently and fail to exploit the correlations between them. Additionally, multi-contrast SR methods typically concatenate features from different contrasts without modeling their correlation. In this paper, we propose a novel Cross-contrast Mutual Fusion Network (CMF-Net) that performs joint MRI reconstruction and super-resolution by enabling mutual propagation of feature representations between the two tasks. The CMF-Net framework consists of two stages: the first stage fuses multi-contrast features, while the second stage learns task-specific information for joint MRI reconstruction and super-resolution. We propose a Multi-contrast Feature Aggregation (MFA) module to facilitate the integration of multi-contrast features. This module captures multi-scale information from the auxiliary contrast to enhance the representational capacity of the features. Furthermore, a Multi-task Mutual Fusion (MMF) module is presented to integrate task-specific features, exploiting the correlation between the two tasks to improve MR super-resolution performance. We evaluate the proposed CMF-Net on two public MR datasets. Quantitative and qualitative results demonstrate that our CMF-Net outperforms other state-of-the-art methods.
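The two-stage design described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names `mfa` and `mmf`, the pooling scales, and the fusion weight `alpha` are all illustrative assumptions, standing in for the learned MFA and MMF modules.

```python
import numpy as np

def mfa(target_feat, aux_feat, scales=(1, 2, 4)):
    """Hypothetical Multi-contrast Feature Aggregation: pool the auxiliary
    contrast at several scales and add the upsampled results to the target
    features, mimicking multi-scale cross-contrast fusion."""
    fused = target_feat.copy()
    h, w = aux_feat.shape
    for s in scales:
        # average-pool the auxiliary features over s x s blocks ...
        pooled = aux_feat[:h - h % s, :w - w % s].reshape(
            h // s, s, w // s, s).mean(axis=(1, 3))
        # ... then nearest-neighbor upsample back to full resolution
        up = np.kron(pooled, np.ones((s, s)))
        fused[:up.shape[0], :up.shape[1]] += up / len(scales)
    return fused

def mmf(recon_feat, sr_feat, alpha=0.5):
    """Hypothetical Multi-task Mutual Fusion: each task's features are
    refined with a weighted share of the other task's features."""
    new_recon = recon_feat + alpha * sr_feat
    new_sr = sr_feat + alpha * recon_feat
    return new_recon, new_sr

# Toy 8x8 feature maps standing in for the target and auxiliary contrasts.
rng = np.random.default_rng(0)
target = rng.standard_normal((8, 8))
aux = rng.standard_normal((8, 8))

stage1 = mfa(target, aux)        # stage 1: multi-contrast feature fusion
recon, sr = mmf(stage1, stage1)  # stage 2: mutual fusion of task features
```

In the actual network the additions would be learned convolutional blocks rather than fixed weighted sums; the sketch only shows the information flow between the two stages and the two task branches.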