Abstract

To develop an unsupervised deep learning model that automatically maps control volumes (CVs) from the daily patient positioning CT (dCT) to the planning CT (pCT) for highly accurate patient positioning.

An unsupervised learning framework is proposed to automatically generate the couch shifts (translations and rotations) that map CVs from the dCT to the pCT. The inputs to the network are the dCT, the pCT, and the CV locations within the pCT; the outputs are the transformation parameters of the dCT for head-and-neck cancer (HNC) patient positioning. We train the network to maximize the image similarity, measured by normalized cross-correlation (NCC), between the CV in the pCT and in the dCT. A total of 158 HNC patients with 554 CT scans were used for network evaluation; each patient underwent several CT scans at different time points. For the test cases, couch shifts were obtained by averaging the translational and rotational parameters derived with different CVs. These means were then compared with ground-truth reference shifts obtained by alignment of bony landmarks identified by an experienced radiation oncologist.

Systematic/random positioning errors between the model predictions and the reference were smaller than 0.47/1.13 mm in translations and 0.17/0.29° in rotations. Pearson's correlation coefficient between the model predictions and the reference values exceeded 0.98. Compared with standard registration, the proposed method increased the proportion of cases registered within clinically accepted tolerance from 66.67% to 90.91%.

A novel unsupervised learning technique was established to map CVs from the pCT to the dCT for HNC patient positioning. Our results show that fast and highly accurate HNC patient positioning is achievable by leveraging state-of-the-art deep learning strategies.
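The core of the approach is a rigid six-parameter (three translations, three rotations) couch-shift transform scored by NCC inside a CV. The following is a minimal sketch, assuming PyTorch, of that objective and of the rigid resampling step; it is not the authors' code. For illustration it optimizes the six parameters directly with gradient descent on toy volumes, whereas in the paper a network predicts them from the dCT, pCT, and CV location. The helper names (`ncc`, `rigid_theta`), tensor shapes, and the simulated data are all assumptions.

```python
import torch
import torch.nn.functional as F

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two equally shaped tensors."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def rigid_theta(p):
    """Build the (1, 3, 4) affine matrix expected by F.affine_grid from
    p = (rx, ry, rz, tx, ty, tz): rotations in radians, translations in
    normalized [-1, 1] grid coordinates."""
    rx, ry, rz, tx, ty, tz = p.unbind()
    o, z = torch.ones_like(rx), torch.zeros_like(rx)
    cx, sx = rx.cos(), rx.sin()
    cy, sy = ry.cos(), ry.sin()
    cz, sz = rz.cos(), rz.sin()
    Rx = torch.stack([torch.stack([o, z, z]),
                      torch.stack([z, cx, -sx]),
                      torch.stack([z, sx, cx])])
    Ry = torch.stack([torch.stack([cy, z, sy]),
                      torch.stack([z, o, z]),
                      torch.stack([-sy, z, cy])])
    Rz = torch.stack([torch.stack([cz, -sz, z]),
                      torch.stack([sz, cz, z]),
                      torch.stack([z, z, o])])
    R = Rz @ Ry @ Rx
    t = torch.stack([tx, ty, tz]).unsqueeze(1)
    return torch.cat([R, t], dim=1).unsqueeze(0)

# Toy data: pretend the dCT is the pCT displaced by a small couch error.
D = H = W = 32
pct = torch.randn(1, 1, D, H, W)
dct = torch.roll(pct, shifts=2, dims=-1)        # simulated lateral shift
cv_mask = torch.zeros_like(pct)
cv_mask[..., 8:24, 8:24, 8:24] = 1.0            # illustrative CV within the pCT

params = torch.zeros(6, requires_grad=True)     # (rx, ry, rz, tx, ty, tz)
opt = torch.optim.Adam([params], lr=1e-2)
for _ in range(200):
    grid = F.affine_grid(rigid_theta(params), dct.shape, align_corners=False)
    warped = F.grid_sample(dct, grid, align_corners=False)
    loss = -ncc(warped * cv_mask, pct * cv_mask)  # maximize NCC inside the CV
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The iterative loop above exists only to show that the NCC objective is differentiable end to end through the rigid resampling; in the paper this same objective instead supervises a network that outputs the six parameters in a single forward pass, which is what makes the positioning fast at treatment time.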
