Abstract
Deep inspiration breath hold (DIBH) is a common method for managing respiratory motion in lung radiotherapy (RT) and has been shown to significantly reduce cardiovascular and pulmonary toxicity. Cone-beam CT (CBCT) is used to capture 3D images of the patient in the treatment position each day, and multiple CBCT scans may be needed to fine-tune the patient setup. Furthermore, a single CBCT acquisition (typically ~1 min) often requires multiple breath holds. Because the tumor and healthy tissue positions differ between consecutive DIBHs, the inconsistent anatomy across the 2D projection images degrades the quality of the reconstructed CBCT. Moreover, repeated breath holds during the initial setup increase the probability that the patient will tire and become less able to reproduce a consistent DIBH during treatment delivery, when it matters most. To address this important clinical issue, we designed a proof-of-concept study using a novel deep learning-based method to derive a 3D volumetric image from two perpendicular 2D projection images (a kV-MV pair), thereby reducing the number of DIBHs needed for imaging. The proposed method, implemented with a feature matching network, derives feature maps from the 2D projections and re-aligns them to their projection angles in a Cartesian coordinate system. The 2D feature maps are rendered in 3D space via depth learning by the feature matching network, and the 3D volume is then derived from the resulting 3D feature map. We conducted a simulation study using 10 patient cases (110 CT images). Each patient received DIBH lung RT at our institution and had, at simulation, a 4D CT scan (split into 10 phase bins for motion evaluation) and a DIBH CT scan. Ray tracing through each phase-binned CT was used to simulate a 2D kV projection at gantry angle 0° and an MV projection at gantry angle 90°.
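The two geometric steps described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes a cubic volume indexed (z, y, x), replaces true cone-beam ray tracing with a parallel-beam line integral, and replaces the learned depth assignment with naive replication along the missing axis; all function and variable names are illustrative.

```python
import numpy as np

def simulate_orthogonal_projections(vol):
    """Simulate a kV projection at gantry 0 deg (integrate along y) and an
    MV projection at gantry 90 deg (integrate along x), using a
    parallel-beam line integral as a stand-in for cone-beam ray tracing."""
    kv = vol.sum(axis=1)  # shape (N, N): (z, x)
    mv = vol.sum(axis=2)  # shape (N, N): (z, y)
    return kv, mv

def lift_and_fuse(kv, mv, n):
    """Re-align each 2D map with its projection axis by replicating it
    along the missing depth dimension, then fuse by averaging. The
    proposed network *learns* this depth assignment from features; naive
    replication is only the untrained geometric baseline."""
    kv_vol = np.repeat(kv[:, None, :], n, axis=1)  # (z, y, x)
    mv_vol = np.repeat(mv[:, :, None], n, axis=2)  # (z, y, x)
    return 0.5 * (kv_vol + mv_vol)

# Toy example on a random 8^3 "CT" volume
n = 8
vol = np.random.rand(n, n, n)
kv, mv = simulate_orthogonal_projections(vol)
fused = lift_and_fuse(kv, mv, n)
print(fused.shape)  # (8, 8, 8)
```

In the actual method, the replication step is replaced by the feature matching network, which predicts where along the missing depth axis each 2D feature belongs before the 3D volume is decoded.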
The orthogonal 2D projections from the 10 phases (200 projections in total) were used to train the network to be patient specific, while the DIBH CT was held out for testing. Within the body and tumor ROIs, respectively, our method achieved a mean absolute error (MAE) of 93.1 HU and 92.5 HU, a peak signal-to-noise ratio (PSNR) of 21.7 dB and 15.6 dB, and a structural similarity index measure (SSIM) of 0.87 and 0.74. These results demonstrate the feasibility and efficacy of the proposed method for 3D imaging from two orthogonal kV and MV 2D projections, offering a potential solution for fast 3D imaging during daily treatment setup of breath-hold lung RT to ensure treatment accuracy and effectiveness.
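The three reported metrics can be computed as below. This is a hedged sketch: MAE and PSNR follow their standard definitions, while the SSIM here is a simplified single-window form over the whole ROI; published SSIM values are typically computed with a sliding Gaussian window (e.g. `skimage.metrics.structural_similarity`), so the exact numbers may differ.

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error, in HU when inputs are in HU."""
    return float(np.mean(np.abs(pred - gt)))

def psnr(pred, gt, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the assumed
    dynamic range of the images (e.g. ~2000 HU for a lung window)."""
    mse = np.mean((pred - gt) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(pred, gt, data_range):
    """Simplified SSIM computed over a single global window, with the
    standard stabilizing constants c1 and c2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = np.mean((pred - mu_p) * (gt - mu_g))
    return float(((2 * mu_p * mu_g + c1) * (2 * cov + c2))
                 / ((mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2)))
```

Evaluating these within a body mask versus a tumor ROI simply means restricting `pred` and `gt` to the corresponding voxel subsets before calling each function.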