Abstract

Quantitative Magnetic Resonance Imaging (qMRI) signal model fitting is traditionally performed via non-linear least squares (NLLS) estimation. NLLS is slow, and its performance can be affected by the presence of multiple local minima in the fitting objective function. Recently, machine learning techniques, including deep neural networks (DNNs), have been proposed as robust alternatives to NLLS. Here we present a deep learning implementation of qMRI model fitting, which uses DNNs to perform the inversion of the forward signal model. We compare two DNN training strategies, based on two alternative definitions of the loss function, since at present it is not known which definition leads to the most accurate, precise and robust parameter estimation. In strategy 1 we define the loss as the \(l^2\)-norm of tissue parameter prediction errors, while in strategy 2 we define it as the \(l^2\)-norm of MRI signal prediction errors. We compare the two approaches on synthetic and 3T in vivo saturation inversion recovery (SIR) diffusion-weighted (DW) MRI data, using a model for joint diffusion-T1 mapping. Strategy 1 leads to lower tissue parameter root mean squared errors (RMSEs) when realistic noise distributions are considered (e.g. Rician versus Gaussian). However, strategy 2 offers lower signal reconstruction RMSE and allows training to be performed on both synthetic and actual in vivo MRI measurements. In conclusion, for the qMRI model considered here both strategies are valid choices for DNN-based fitting. Strategy 2 is more practical, as it does not require pre-computation of reference tissue parameters, but may lead to worse parameter estimation.
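The two loss definitions can be sketched in code. The snippet below is an illustrative simplification, not the paper's implementation: it uses a hypothetical mono-exponential forward model in place of the joint diffusion-T1 SIR DW model, and plain NumPy in place of a DNN framework. The key distinction it shows is that strategy 1 needs reference tissue parameters, while strategy 2 only needs the measured signal and the forward model.

```python
import numpy as np

def forward_model(params, b_values):
    """Hypothetical simplified forward model s(b) = s0 * exp(-b * ADC),
    standing in for the paper's joint diffusion-T1 SIR DW signal model."""
    s0, adc = params
    return s0 * np.exp(-b_values * adc)

def loss_strategy_1(pred_params, ref_params):
    """Strategy 1: l2-norm of tissue parameter prediction errors.
    Requires pre-computed reference tissue parameters."""
    return np.linalg.norm(np.asarray(pred_params) - np.asarray(ref_params))

def loss_strategy_2(pred_params, measured_signal, b_values):
    """Strategy 2: l2-norm of MRI signal prediction errors.
    Needs only the measured signal, so training can also use
    in vivo data with no reference parameter maps."""
    return np.linalg.norm(forward_model(pred_params, b_values) - measured_signal)

# Toy usage with made-up values (b in arbitrary units):
b = np.array([0.0, 0.5, 1.0, 2.0])
true_params = (1.0, 0.8)
signal = forward_model(true_params, b)

print(loss_strategy_1((0.9, 0.7), true_params))       # parameter-space error
print(loss_strategy_2((0.9, 0.7), signal, b))         # signal-space error
```

In a DNN setting, `pred_params` would be the network output and the losses would be averaged over a training batch; the structural difference between the two strategies is unchanged.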
