Abstract
Most clinical image-guided radiotherapy (IGRT) workflows use alignment algorithms to map a patient's simulation CT (simCT), and with it the treatment plan, onto a daily cone-beam CT (CBCT). Current rigid and deformable registration methods have not demonstrated sufficient accuracy to enable automated alignment or adaptive recontouring. Here we investigate a novel deep-learning framework for diffeomorphic deformable image registration (DIR) and analyze its performance with regard to adaptive recontouring.

Our institutional treatment planning system and CBCT directory were queried for head and neck plans delivered to patients between 2016 and 2018. A self-normalizing, generative convolutional deep network was developed using the PyTorch API and trained on a platform with 8 NVIDIA K80 GPUs. The network takes a concatenated simCT/CBCT pair as input and outputs a deformation vector field (DVF), which is then applied to warp the simCT onto the CBCT. The network was trained in an unsupervised fashion on a combination of three losses: a Kullback-Leibler (KL) loss to promote spatially smooth transforms, a novel bottleneck feature-map loss to prioritize complex anatomic features, and a pixelwise intensity difference loss to drive spatially accurate image reconstruction. To test the network's performance, patient-specific contours were warped onto the CBCT using the DVF output by the network. Patients who underwent repeat planning during their radiotherapy were sampled, and the 3D Dice coefficient of overlap was calculated between their initial and replanned contours and between the warped and replanned contours. All initial and replanned contours were generated by a board-certified radiation oncologist.

A dataset of 48 patients was compiled, yielding 1728 simCT/CBCT pairs and 64 unique radiotherapy plans and structure sets. The training set comprised 36 patients, representing 1246 simCT/CBCT pairs, and training was performed over 100 epochs. The resulting warped scans showed strong registration on visual review and on 3D Dice comparison of the external contours. Quantitatively, the model showed a significantly improved 3D Dice overlap of the warped contours with the replanned ground-truth contours (Table 1). Inference with the trained model took less than 4 seconds.

Our implementation of a novel deep-learning architecture shows fast, robust DIR in the setting of IGRT. Our results highlight the applicability of recent advancements in deep learning to essential components of radiation treatment: patient position verification and creation of treatment volumes. Further validation is required before implementing such a workflow in the radiation clinic for rapid IGRT.

Abstract 1180; Table 1

Structure   Initial-to-Replanned Contour 3D-Dice Overlap   Warped-to-Replanned Contour 3D-Dice Overlap   p
GTV         0.781 ± 0.062                                   0.873 ± 0.045                                  0.041
CTV         0.822 ± 0.051                                   0.901 ± 0.033                                  0.032
External    0.907 ± 0.069                                   0.948 ± 0.051                                  0.028
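The abstract does not include implementation details, but the central warping step, resampling the simCT through the predicted DVF, is typically realized with a spatial-transformer-style resampler built on trilinear interpolation. The following is a minimal PyTorch sketch under assumed conventions: the tensor layout, the voxel-unit displacement convention, and the helper name `warp_volume` are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def warp_volume(moving: torch.Tensor, dvf: torch.Tensor) -> torch.Tensor:
    """Warp a moving volume with a dense deformation vector field (DVF).

    moving: (B, 1, D, H, W) simCT intensities.
    dvf:    (B, 3, D, H, W) voxel displacements along (z, y, x).
    Returns the moving volume resampled onto the fixed (CBCT) grid.
    """
    b, _, d, h, w = moving.shape
    # Identity sampling grid in voxel coordinates, shape (B, D, H, W, 3).
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, dtype=moving.dtype, device=moving.device),
        torch.arange(h, dtype=moving.dtype, device=moving.device),
        torch.arange(w, dtype=moving.dtype, device=moving.device),
        indexing="ij",
    )
    identity = torch.stack((zz, yy, xx), dim=-1).unsqueeze(0).expand(b, -1, -1, -1, -1)
    # Add the predicted displacements (move the channel axis to last).
    coords = identity + dvf.permute(0, 2, 3, 4, 1)
    # grid_sample expects coordinates normalized to [-1, 1], ordered (x, y, z).
    sizes = torch.tensor([d - 1, h - 1, w - 1], dtype=moving.dtype, device=moving.device)
    grid = (2.0 * coords / sizes - 1.0).flip(-1)  # (z, y, x) -> (x, y, z)
    # For 5D inputs, mode="bilinear" performs trilinear interpolation.
    return F.grid_sample(moving, grid, mode="bilinear", align_corners=True)
```

The same resampler, run with `mode="nearest"`, can propagate binary contour masks onto the CBCT, which is one plausible way to produce the warped contours used in the Dice comparison.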
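Similarly, the three-part training objective could be assembled as a weighted sum. The sketch below assumes a unit-Gaussian prior for the KL term, MSE for the intensity term, and L1 for the bottleneck feature term; the actual functional forms and loss weights are not reported in the abstract.

```python
import torch
import torch.nn.functional as F

def registration_loss(warped, fixed, feat_warped, feat_fixed,
                      mu, log_sigma, w_kl=0.01, w_feat=0.1):
    """Weighted sum of the three losses named in the abstract (illustrative weights).

    warped/fixed:           warped simCT and target CBCT volumes.
    feat_warped/feat_fixed: bottleneck feature maps for each volume.
    mu, log_sigma:          mean / log-std of an assumed Gaussian posterior
                            over the predicted deformation field.
    """
    # Pixelwise intensity difference: drives spatially accurate reconstruction.
    intensity = F.mse_loss(warped, fixed)
    # Bottleneck feature-map loss: prioritizes complex anatomic features.
    feature = F.l1_loss(feat_warped, feat_fixed)
    # KL divergence of N(mu, sigma^2) from a unit-Gaussian prior:
    # regularizes the transform toward spatial smoothness.
    kl = 0.5 * torch.mean(mu.pow(2) + torch.exp(2 * log_sigma)
                          - 2 * log_sigma - 1)
    return intensity + w_feat * feature + w_kl * kl
```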
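The evaluation metric, the 3D Dice coefficient of overlap between two contour masks, is standard and can be computed directly from the voxelized binary volumes:

```python
import numpy as np

def dice_3d(a: np.ndarray, b: np.ndarray) -> float:
    """3D Dice coefficient between two binary contour masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define overlap as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom
```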