Abstract

We propose a method for the reconstruction of parameter-maps in Quantitative Magnetic Resonance Imaging (QMRI). Because different quantitative parameter-maps differ from each other in terms of local features, the employed dictionary learning (DL) and sparse coding (SC) algorithms automatically estimate the optimal dictionary size and sparsity level separately for each parameter-map. We evaluated the method on a T1-mapping QMRI problem in the brain using the BrainWeb data as well as in-vivo brain images acquired on an ultra-high-field 7 T scanner. We compared it to a model-based acceleration for parameter mapping (MAP) approach, to other sparsity-based methods using total variation (TV), Wavelets (Wl), and Shearlets (Sh), and to a method which uses DL and SC to reconstruct qualitative images, followed by a non-linear fit (DL+Fit). Our algorithm surpasses MAP, TV, Wl, and Sh in terms of RMSE and PSNR. It yields results that are better than or comparable to those of DL+Fit while accelerating the reconstruction by a factor of approximately seven. The proposed method outperforms the reported methods of comparison and yields accurate T1-maps. Although presented for T1-mapping in the brain, our method's structure is general and thus most probably also applicable to the reconstruction of other quantitative parameters in other organs. From a clinical perspective, the obtained T1-maps could be utilized to differentiate between healthy subjects and patients with Alzheimer's disease. From a technical perspective, the proposed unsupervised method could be employed to obtain ground-truth data for the development of data-driven methods based on supervised learning.
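To illustrate the building blocks the abstract refers to, the following is a minimal sketch of patch-based dictionary learning and sparse coding applied to a single parameter map, written with scikit-learn. It is not the proposed method: the dictionary size and sparsity level are fixed by hand here, whereas the proposed method estimates both automatically for each parameter-map, and the input map is a synthetic stand-in rather than MR data.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of
# patch-based dictionary learning and sparse coding for one parameter map.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d,
    reconstruct_from_patches_2d,
)

param_map = np.random.rand(64, 64)   # stand-in for a noisy parameter-map estimate
patch_size = (6, 6)

# Extract overlapping patches and remove their means.
patches = extract_patches_2d(param_map, patch_size)
X = patches.reshape(patches.shape[0], -1)
means = X.mean(axis=1, keepdims=True)
X_centered = X - means

# Learn a dictionary and sparse-code the patches with OMP.
dl = MiniBatchDictionaryLearning(
    n_components=64,               # assumed (fixed) dictionary size
    transform_algorithm="omp",
    transform_n_nonzero_coefs=4,   # assumed (fixed) sparsity level
    random_state=0,
)
codes = dl.fit_transform(X_centered)

# Rebuild the map from the sparse approximations of the patches.
X_rec = codes @ dl.components_ + means
rec_patches = X_rec.reshape(patches.shape)
param_map_rec = reconstruct_from_patches_2d(rec_patches, param_map.shape)
```

In the paper's setting this regularization step would be embedded in an iterative reconstruction from undersampled k-space data, with the dictionary size and sparsity level chosen per parameter-map rather than fixed as above.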
