Abstract

Approximately 2.5% of the proton range uncertainty arises from the conversion of computed tomography (CT) numbers to material characteristics. We aim to mitigate this CT-to-material conversion error by proposing a multimodal imaging framework that enables deep learning (DL)-based material mass density inference from dual-energy CT (DECT) and magnetic resonance imaging (MRI). To ensure the robustness of the DL models, we integrated physics insights into the framework to regularize the models and enable training on small datasets. Five MRI-compatible phantoms were fabricated from tissue-mimicking materials to serve as a ground-truth reference for validating the proposed framework. The reference mass density of each phantom was measured with a 150 MeV proton beam. Multimodal images comprising T1- and T2-weighted MR images and DECT images were acquired as training and validation data for DL. Residual networks (ResNet) were implemented to evaluate the feasibility of the framework: ResNet-DE-MR denotes a ResNet trained with both MRI and DECT images, while ResNet-DE denotes one trained with DECT images only. ResNet was also compared against an empirical DECT model. In addition, a retrospective patient case was included in the study to provide a proof of concept for the proposed framework. In the phantom validation experiment, ResNet-DE-MR achieved mass density errors of -0.4%, 0.3%, 0.4%, 0.7%, and -0.2% for adipose, muscle, liver, skin, and bone, respectively. The proposed DL-based multimodal imaging framework was demonstrated to enable accurate material mass density inference from DECT and MR images. The framework can potentially improve the treatment quality of proton therapy by reducing proton range uncertainty.
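To make the comparison baseline concrete, the sketch below illustrates what a simple empirical DECT mass-density model might look like: a linear blend of the low- and high-kVp CT numbers mapped to density, with water (0 HU) anchored at 1.0 g/cm^3. The blending weight and the linear HU-to-density mapping are hypothetical placeholders for illustration only; they are not the calibration coefficients used in the study, and the actual empirical model in the paper may take a different functional form.

```python
def dect_mass_density(hu_low: float, hu_high: float, alpha: float = 0.6) -> float:
    """Toy empirical DECT mass-density estimate in g/cm^3.

    hu_low / hu_high: CT numbers (HU) from the low- and high-kVp scans.
    alpha: hypothetical blending weight between the two energy channels.
    """
    # Blend the two energy channels into a single effective CT number.
    hu_blend = alpha * hu_low + (1.0 - alpha) * hu_high
    # Map HU linearly to mass density, anchored so that water (0 HU) -> 1.0.
    return 1.0 + hu_blend / 1000.0


# Example: soft-tissue-like CT numbers on both channels.
rho = dect_mass_density(hu_low=40.0, hu_high=35.0)
```

A learned model such as ResNet-DE-MR replaces this fixed linear mapping with a nonlinear function of the DECT channels plus the T1- and T2-weighted MR intensities, which is where the reported per-tissue error reductions come from.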

