Abstract

Quantification of dynamic contrast-enhanced (DCE)-MRI has the potential to provide valuable clinical information, but robust pharmacokinetic modeling remains a challenge for clinical adoption. A 7-layer neural network called DCE-Qnet was trained on simulated DCE-MRI signals derived from the Extended Tofts model with the Parker arterial input function. Network training incorporated B1 inhomogeneities to estimate perfusion parameters (Ktrans, vp, ve), tissue T1 relaxation, proton density, and bolus arrival time (BAT). Accuracy was tested in a digital phantom against conventional nonlinear least-squares (NLSQ) fitting. In vivo testing was conducted in ten healthy subjects, with regions of interest in the cervix and uterine myometrium used to calculate inter-subject variability. Clinical utility was demonstrated in a cervical cancer patient, and test-retest experiments were used to assess the reproducibility of the parameter maps in the tumor. The DCE-Qnet reconstruction outperformed NLSQ in the phantom. The coefficient of variation (CV) in the healthy cervix varied between 5% and 51%, depending on the parameter. Parameter values in the tumor agreed with previous studies despite differences in methodology, and the CV in the tumor varied between 1% and 47%. The proposed approach provides comprehensive DCE-MRI quantification from a single acquisition. DCE-Qnet eliminates the need for a separate T1 scan or BAT processing, reducing scan time by 10 min per exam and yielding more accurate quantification.
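The training signals described above derive from the Extended Tofts model, in which tissue contrast-agent concentration is Ct(t) = vp·Cp(t) + Ktrans·∫₀ᵗ Cp(τ)·exp(−(Ktrans/ve)(t−τ)) dτ. A minimal sketch of such a simulation is shown below; the bi-exponential arterial input function here is a simplified stand-in for the Parker population AIF used in the paper, and all parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration via the Extended Tofts model.

    Ct(t) = vp*Cp(t) + Ktrans * int_0^t Cp(tau)*exp(-(Ktrans/ve)*(t-tau)) dtau
    evaluated by discrete convolution on a uniform time grid (minutes).
    """
    dt = t[1] - t[0]
    kep = ktrans / ve  # efflux rate constant [1/min]
    kernel = np.exp(-kep * t)
    # Riemann-sum approximation of the convolution integral
    ce = ktrans * np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ce

def simple_aif(t, bat=0.5):
    """Hypothetical bi-exponential AIF with bolus arrival time `bat` (min).

    A simplified stand-in for the Parker population AIF, for illustration only.
    """
    ts = np.clip(t - bat, 0.0, None)
    return (t > bat) * 5.0 * (np.exp(-0.5 * ts) - np.exp(-4.0 * ts))

t = np.arange(0.0, 5.0, 0.01)               # 5-minute acquisition, 0.6 s sampling
cp = simple_aif(t)                          # plasma concentration
ct = extended_tofts(t, cp, ktrans=0.2, ve=0.3, vp=0.05)  # illustrative values
```

Sweeping (Ktrans, ve, vp, BAT) over physiologic ranges and converting the concentration curves to MRI signal (given T1, proton density, and B1) yields the kind of simulated training set the abstract describes.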
