Abstract
Reconstruction methods based on deep learning have greatly shortened the data acquisition time of magnetic resonance imaging (MRI). However, these methods typically rely on large amounts of fully sampled data for supervised training, which restricts their application in certain clinical scenarios and degrades reconstruction quality when high-quality MR images are unavailable. Recently, self-supervised methods have been developed in which only undersampled MR images participate in network training. Nevertheless, lacking complete reference MR image data, self-supervised reconstruction is prone to producing incorrect structural content, such as unnatural texture details and over-smoothed tissue regions. To address this problem, we propose a self-supervised Deep Contrastive Siamese Network (DC-SiamNet) for fast MR imaging. First, DC-SiamNet performs reconstruction with a Siamese unrolled structure and obtains visual representations at different iterative phases. In particular, an attention-weighted average pooling module is employed at the bottleneck layer of the U-shaped regularization unit, which effectively aggregates valuable local information from the underlying feature map into the generated representation vector. Then, a novel hybrid loss function is designed to drive the self-supervised reconstruction and contrastive learning simultaneously by enforcing output consistency across the branches in the frequency domain, the image domain, and the latent space. The proposed method is extensively evaluated with different sampling patterns on the IXI brain dataset and the MRNet knee dataset. Experimental results show that DC-SiamNet achieves a structural similarity of 0.93 and a peak signal-to-noise ratio of 33.984 dB on the IXI brain dataset under 8x acceleration. It reconstructs more accurately than competing methods, and its performance approaches that of the corresponding fully supervised model, especially when the sampling rate is low.
In addition, generalization experiments verify that our method has strong cross-domain reconstruction ability for brain images with different contrasts.
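As an illustrative sketch only (not the paper's implementation), the core idea of attention-weighted average pooling — scoring each spatial position of the bottleneck feature map, normalizing the scores with a softmax, and taking the weighted average to form a representation vector — can be written in plain Python. The `query` vector here is a hypothetical stand-in for the module's learned attention parameters:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weighted_average_pool(features, query):
    """Aggregate per-position feature vectors into one representation.

    features: list of P feature vectors (each of length C), e.g. the
              flattened bottleneck map of a U-shaped regularization unit.
    query:    length-C vector standing in for learned attention weights
              (hypothetical; the real module learns its scoring function).
    Returns one length-C vector: the attention-weighted average over positions.
    """
    # One attention score per spatial position: dot(feature, query).
    scores = [sum(f * q for f, q in zip(vec, query)) for vec in features]
    weights = softmax(scores)
    channels = len(features[0])
    # Weighted sum over positions, channel by channel.
    return [sum(w * vec[c] for w, vec in zip(weights, features))
            for c in range(channels)]
```

With a zero query every position receives equal weight and the result reduces to a plain average; a non-zero query shifts the pooled vector toward positions whose features align with it, which is how such a module can emphasize informative local regions of the feature map.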