Abstract

Quantitative cone beam CT (CBCT) is the foundation of image-guided radiation therapy, improving treatment setup, tumor delineation, and dose calculation. However, CBCT images suffer from severe artifacts that limit their clinical utility. Deep learning can overcome these limitations, boosting the radiographic and dosimetric quality critical for online adaptive radiotherapy (ART). We hypothesized that an adapted contrastive unpaired translation (CUT) model, a recent method for image-to-image translation of photographic images, could improve CBCT quality while reducing compute time, demonstrating utility for ART. Same-day CBCT and quality assurance CT (QACT) images from 79 patients who received proton therapy for prostate cancer between 2019 and 2020 at a single institution were retrospectively collected; QACT images were acquired for quality assurance in accordance with institutional policy. These 79 patients yielded 102 non-contrast CBCT-QACT image sets. Each QACT image was rigidly registered to the corresponding CBCT and resampled to 1 × 1 × 2 mm to establish uniform voxel size and spacing. CBCT images were randomly shuffled before being input to the CUT model for unsupervised training, and QACT-quality synthetic CT images were generated as outputs. We compared CUT against CycleGAN using mean absolute error (MAE), structural similarity index measure (SSIM), and Fréchet inception distance (FID) relative to the same-day QACT, reported as the mean across five-fold cross-validation ± standard error. CUT achieved superior performance in MAE (19.5 ± 3.9 HU vs. 47.1 ± 25.4 HU for CycleGAN) and FID (31.5 ± 6.6 vs. 75.9 ± 41.3 for CycleGAN). The low MAE indicates pixel-level correspondence to QACT HU intensity values, making the synthetic outputs of CUT useful for dose calculation during ART; the FID further demonstrates perceptual visual similarity.
SSIM for CycleGAN (0.7 ± 0.2) and CUT (0.8 ± 0.0) was similar, indicating acceptable reproducibility of global structure. CUT was also faster and lighter than CycleGAN: CycleGAN contains 28,286,000 parameters in total, while CUT contains 14,703,000, approximately half as many. As a result, CycleGAN processes a single CT image slice in 0.33 s, while CUT requires just 0.18 s. The contrastive method investigated here was therefore faster and more accurate than CycleGAN, requiring fewer networks and parameters to achieve superior performance. We demonstrated anatomic boundary preservation and HU fidelity superior to CycleGAN while significantly reducing compute time. We plan to investigate the use of these synthetic CT images for automated segmentation before exploring CUT in a prospective setting.
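As a minimal sketch (not the authors' code), the voxel-wise MAE in HU reported above can be computed with NumPy; the small arrays here are hypothetical toy stand-ins for a CUT synthetic CT and its rigidly registered, resampled QACT:

```python
import numpy as np

# Hypothetical 2 x 2 slices standing in for a CUT synthetic CT and the
# rigidly registered QACT, both in Hounsfield units (HU) on the common
# 1 x 1 x 2 mm grid.
synthetic_ct = np.array([[10.0, 20.0], [30.0, 40.0]])
qact = np.array([[12.0, 18.0], [33.0, 45.0]])

# Mean absolute error: average of the voxel-wise |HU difference|.
mae = np.abs(synthetic_ct - qact).mean()
print(f"MAE = {mae:.1f} HU")  # -> MAE = 3.0 HU
```

In the study this error would be averaged over all voxels of each test volume and then across the five cross-validation folds; SSIM and FID require dedicated implementations (e.g., scikit-image for SSIM and a pretrained Inception network for FID).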
