Objective. The application of deep learning to magnetic resonance imaging (MRI) has significantly reduced data acquisition times. However, these techniques face substantial limitations in scenarios where acquiring fully sampled datasets is infeasible or costly.

Approach. To tackle this problem, we propose a fusion-enhanced contrastive self-supervised learning (FCSSL) method for parallel MRI reconstruction that eliminates the need for fully sampled k-space training data and coil sensitivity maps. First, we introduce a strategy based on two pairs of re-undersampling masks within a contrastive learning framework, aimed at enhancing the representational capacity to achieve higher-quality reconstruction. Subsequently, a novel adaptive fusion network, trained in a self-supervised manner, is designed to integrate the reconstruction results of the framework.

Results. Experimental results on knee datasets under different sampling masks demonstrate that the proposed FCSSL achieves superior reconstruction performance compared to other self-supervised learning methods. Moreover, the performance of FCSSL approaches that of supervised methods, especially under the 2DRU and RADU masks, without requiring fully sampled data. The proposed FCSSL, trained under the 3× 1DRU and 2DRU masks, generalizes effectively to unseen 1D and 2D undersampling masks, respectively. For target-domain data that differ significantly from the source domain, the proposed model, fine-tuned with just a few dozen instances of undersampled target-domain data, achieves reconstruction performance comparable to a model trained on the entire set of undersampled data.

Significance. The novel FCSSL model offers a viable solution for reconstructing high-quality MR images without fully sampled datasets, thereby overcoming a major hurdle in scenarios where acquiring fully sampled MR data is difficult.
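To make the re-undersampling idea concrete, the sketch below shows one common way such masks can be formed: each acquired k-space location is randomly partitioned into two disjoint subsets, yielding a mask pair; doing this twice gives two pairs. This is a minimal illustration of the general technique, assuming a simple random split; the function name `split_mask`, the keep fraction, and the exact splitting scheme are assumptions, not the paper's specification.

```python
import numpy as np

def split_mask(mask, keep_frac=0.6, seed=0):
    """Partition the acquired k-space locations of an undersampling mask
    into two disjoint re-undersampling masks (hypothetical scheme).

    Each acquired location goes to the first mask with probability
    keep_frac, otherwise to the second. m1 + m2 reconstructs the
    original mask, and m1, m2 never overlap.
    """
    rng = np.random.default_rng(seed)
    acquired = np.flatnonzero(mask)              # indices of sampled lines
    keep = rng.random(acquired.size) < keep_frac # random disjoint split
    m1 = np.zeros_like(mask)
    m2 = np.zeros_like(mask)
    m1[acquired[keep]] = 1
    m2[acquired[~keep]] = 1
    return m1, m2

# Toy 1D undersampling mask over 16 k-space lines (1 = acquired)
rng = np.random.default_rng(42)
mask = (rng.random(16) < 0.5).astype(int)

# Two pairs of re-undersampling masks, as used by a contrastive framework
pair_a = split_mask(mask, seed=1)
pair_b = split_mask(mask, seed=2)
```

In SSDU-style self-supervised training, one subset of each pair is fed to the reconstruction network while the other defines the loss; here the two independently drawn pairs would supply the two views needed for a contrastive objective.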