This study evaluated StarGAN, a deep learning model designed to generate synthetic computed tomography (sCT) images from both magnetic resonance imaging (MRI) and cone-beam computed tomography (CBCT) data with a single model. The goal was to provide accurate Hounsfield unit (HU) data for dose calculation, enabling MRI simulation and adaptive radiation therapy (ART) based on CBCT or MRI. StarGAN's performance was also compared with that of the commonly used CycleGAN. Both models were trained and evaluated on a dataset of 53 pelvic cancer cases, using qualitative and quantitative analyses focused on synthetic image quality and calculated dose distributions. For sCT generated from CBCT, StarGAN demonstrated superior anatomical preservation in the qualitative evaluation. Quantitatively, CycleGAN achieved a lower mean absolute error (MAE) for the body (42.8 ± 4.3 HU) and bone (138.2 ± 20.3 HU), whereas StarGAN yielded a higher MAE for the body (50.8 ± 5.2 HU) and bone (153.4 ± 27.7 HU). Dosimetric evaluation showed a mean dose difference (DD) within 2% for the planning target volume (PTV) and body, with a gamma passing rate (GPR) > 90% under the 2%/2 mm criterion. For sCT generated from MRI, the qualitative evaluation likewise favored the anatomical preservation of StarGAN. CycleGAN recorded a lower MAE (79.8 ± 14 HU for the body and 253.6 ± 30.9 HU for bone) than StarGAN (94.7 ± 7.4 HU for the body and 353.6 ± 34.9 HU for bone). Both models achieved a mean DD within 2% for the PTV and body, and a GPR > 90%. Although CycleGAN exhibited superior quantitative metrics, StarGAN provided better anatomical preservation, highlighting its potential for sCT generation in radiotherapy.
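The per-region MAE values reported above (body and bone, in HU) follow the standard definition of mean absolute error restricted to a region-of-interest mask. The sketch below illustrates this metric under stated assumptions; the function name, array shapes, and the 200 HU bone threshold are illustrative choices, not the authors' implementation.

```python
import numpy as np

def masked_mae(sct_hu: np.ndarray, ref_hu: np.ndarray, mask: np.ndarray) -> float:
    """MAE in HU between a synthetic CT and the reference CT,
    restricted to a region of interest (e.g. a body or bone mask)."""
    diff = np.abs(sct_hu[mask] - ref_hu[mask])
    return float(diff.mean())

# Toy 2x2 example: a hypothetical bone mask thresholded at 200 HU
# on the reference scan (threshold chosen for illustration only).
ref = np.array([[-50.0, 300.0], [250.0, 40.0]])
sct = np.array([[-60.0, 280.0], [270.0, 35.0]])
bone_mask = ref > 200.0
print(masked_mae(sct, ref, bone_mask))  # mean of |280-300| and |270-250| -> 20.0
```

In practice the masks would be binary segmentations of the body contour or bone on the reference CT, and the average would be taken over all voxels inside each mask.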