Abstract

The purpose of this study is to investigate the effect of different magnetic resonance (MR) sequences on the accuracy of deep learning-based synthetic computed tomography (sCT) generation in the complex head and neck region. Four MR sequences (T1, T2, T1C, and T1DixonC-water) were collected from 45 patients with nasopharyngeal carcinoma. Seven conditional generative adversarial network (cGAN) models were trained with individual sequences (single channel) and different combinations of sequences (multi-channel) as inputs. To further verify the cGAN performance, a U-net network was also trained as a comparison. Mean absolute error, structural similarity index, peak signal-to-noise ratio, dice similarity coefficient, and dose distribution were evaluated between the actual CTs and the sCTs generated by the different models. The results show that the cGAN model with multi-channel input (i.e., T1+T2+T1C+T1DixonC-water) predicts sCT with higher accuracy than any single-sequence model. The T1-weighted MR model achieves better results than the T2, T1C, and T1DixonC-water models. The comparison between cGAN and U-net shows that the sCTs predicted by the cGAN retain more image detail, are less blurred, and are more similar to the actual CT. The cGAN with multiple MR sequences as model input shows the best accuracy. T1-weighted MR images provide sufficient image information and are suitable for sCT prediction in clinical scenarios with limited acquisition sequences or limited acquisition time.
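The abstract describes feeding multiple co-registered MR sequences to the generator as a multi-channel volume and scoring the resulting sCT against the actual CT with intensity-based metrics. The sketch below is not the authors' code; it is a minimal, hedged illustration of two of those ideas: stacking four MR volumes along a channel axis and computing MAE and PSNR in Hounsfield units. Function names, array shapes, and the HU data range are illustrative assumptions; SSIM and dose evaluation would require additional libraries and are omitted.

```python
# Minimal sketch (assumed setup, not the published pipeline):
# stack four co-registered MR sequences into one multi-channel input
# and compare an actual CT with a hypothetical synthetic CT.
import numpy as np

def stack_sequences(t1, t2, t1c, t1dixonc_water):
    """Stack co-registered MR volumes along a channel axis -> (C, D, H, W)."""
    return np.stack([t1, t2, t1c, t1dixonc_water], axis=0)

def mae(ct, sct):
    """Mean absolute error (HU) between actual CT and synthetic CT."""
    return float(np.mean(np.abs(ct - sct)))

def psnr(ct, sct, data_range=4000.0):
    """Peak signal-to-noise ratio; data_range assumes roughly -1000..3000 HU."""
    mse = np.mean((ct - sct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Illustrative usage with random volumes standing in for real patient data.
shape = (32, 64, 64)
t1, t2, t1c, dixon = (np.random.rand(*shape).astype(np.float32) for _ in range(4))
multi_channel_input = stack_sequences(t1, t2, t1c, dixon)      # (4, 32, 64, 64)
ct = np.random.uniform(-1000, 3000, shape).astype(np.float32)  # fake actual CT
sct = ct + np.random.normal(0, 50, shape).astype(np.float32)   # fake predicted sCT
print(multi_channel_input.shape, mae(ct, sct), psnr(ct, sct))
```

In a single-sequence experiment the same generator would simply receive a one-channel volume (e.g., only T1), which is what the abstract's single-channel versus multi-channel comparison refers to.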
