Abstract
Purpose: To develop a model-guided self-supervised deep learning MRI reconstruction framework, called reference-free latent map extraction (RELAX), for rapid quantitative MR parameter mapping.

Methods: Two physical models are incorporated for network training in RELAX: the inherent MR imaging model and a quantitative model used to fit parameters in quantitative MRI. By enforcing these physical model constraints, RELAX eliminates the need for the fully sampled reference data sets required in standard supervised learning and enables direct reconstruction of the corresponding MR parameter maps from undersampled k-space. Generic sparsity constraints used in conventional iterative reconstruction, such as the total variation constraint, can additionally be included in the RELAX framework to improve reconstruction quality. The performance of RELAX was tested for accelerated T1 and T2 mapping in both simulated and actually acquired MRI data sets and was compared with supervised learning and conventional constrained reconstruction for suppressing noise and/or undersampling-induced artifacts.

Results: In the simulated data sets, RELAX generated good T1/T2 maps in the presence of noise and/or undersampling artifacts, comparable to the artifact-free and noise-free ground truth; the inclusion of a spatial total variation constraint helped to further improve image quality. For the in vivo T1/T2 mapping data sets, RELAX achieved superior reconstruction quality compared with conventional iterative reconstruction and reconstruction performance similar to that of supervised deep learning reconstruction.

Conclusion: This work has demonstrated the initial feasibility of rapid quantitative MR parameter mapping based on self-supervised deep learning. The RELAX framework may also be further extended to other quantitative MRI applications by incorporating the corresponding quantitative imaging models.
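The self-supervised training idea summarized above can be illustrated with a short sketch. The PyTorch snippet below is a minimal illustration only, assuming a mono-exponential T2 signal model, a single-coil Cartesian acquisition, and hypothetical names (net, mask, echo_times); it is not the authors' implementation, and a practical version would add coil sensitivities and the chosen quantitative model.

```python
# Minimal sketch of a RELAX-style self-supervised loss for T2 mapping.
# All names and shapes here are illustrative assumptions, not the paper's code.
import torch

def t2_signal_model(param_maps, echo_times):
    """Synthesize multi-echo images from predicted parameter maps.

    Assumes a mono-exponential decay, S(TE) = M0 * exp(-TE / T2); this is one
    common quantitative model, and other models could be substituted.
    """
    m0, t2 = param_maps[:, 0:1], param_maps[:, 1:2].clamp(min=1e-3)
    # Output shape: (batch, num_echoes, H, W)
    return m0 * torch.exp(-echo_times.view(1, -1, 1, 1) / t2)

def relax_style_loss(net, zero_filled_images, undersampled_kspace, mask,
                     echo_times, tv_weight=1e-4):
    """Self-supervised loss: data consistency through the MR imaging model
    plus an optional spatial total-variation penalty; no fully sampled
    reference images are used."""
    param_maps = net(zero_filled_images)          # network predicts M0 and T2 maps
    synth_images = t2_signal_model(param_maps, echo_times)

    # MR imaging model: Fourier-encode the synthesized images and re-apply the
    # sampling mask (single-coil case for simplicity).
    synth_kspace = torch.fft.fft2(synth_images.to(torch.complex64), norm="ortho")
    dc_loss = torch.mean(torch.abs(mask * synth_kspace - undersampled_kspace) ** 2)

    # Spatial total variation on the parameter maps (anisotropic form).
    tv = (param_maps[..., 1:, :] - param_maps[..., :-1, :]).abs().mean() + \
         (param_maps[..., :, 1:] - param_maps[..., :, :-1]).abs().mean()

    return dc_loss + tv_weight * tv
```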