Abstract

Background and Objective: Estimating the three-dimensional (3D) deformation of the lung is important for accurate dose delivery in radiotherapy and precise surgical guidance in lung surgery navigation. Additional 4D-CT information is often required to eliminate the effect of individual variations and obtain a more accurate estimation of lung deformation. However, this results in an increased radiation dose. Therefore, we propose a novel method that estimates lung tissue deformation from depth maps and two CT phases per patient.

Methods: The method models the 3D motion of each voxel as a linear displacement along a direction vector, with a variable amplitude and phase that depend on the voxel location. The direction vector and amplitude are derived from the registration of the CT images at the end-of-exhale (EOE) and end-of-inhale (EOI) phases. The voxel phase is estimated by a neural network. Coordinate convolution (CoordConv) is used to fuse multimodal data and embed absolute position information. The network takes the front and side views as well as the previous phase views as inputs to enhance accuracy.

Results: We evaluate the proposed method on two datasets, DIR-Lab and 4D-Lung, and obtain average errors of 2.11 mm and 1.36 mm, respectively. The method achieves real-time performance of less than 7 ms per frame on an NVIDIA GeForce 2080Ti GPU.

Conclusion: Compared with previous methods, our method achieves comparable or even better accuracy with fewer CT phases.
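The per-voxel motion model described in the Methods section can be sketched as follows. This is a minimal illustrative interpretation, not the authors' implementation: the function name and array layout are assumptions, with the direction vector and amplitude taken as fixed per-voxel quantities from EOE-to-EOI registration and the phase as a network-predicted scalar in [0, 1].

```python
import numpy as np

def voxel_displacement(direction, amplitude, phase):
    """Linear per-voxel displacement: u(x, t) = p(x, t) * a(x) * d(x).

    direction : (N, 3) unit vectors from EOE -> EOI registration (assumed)
    amplitude : (N,)   per-voxel motion magnitude from registration (assumed)
    phase     : (N,)   network-estimated breathing phase in [0, 1] (assumed)
    Returns a (N, 3) displacement field.
    """
    return phase[:, None] * amplitude[:, None] * direction

# Toy example with two voxels: one moving superior-inferior, one anterior-posterior.
d = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
a = np.array([10.0, 4.0])   # full EOE->EOI motion amplitude, mm
p = np.array([0.5, 0.25])   # phases at some intermediate time point
u = voxel_displacement(d, a, p)
```

Under this reading, registration fixes the geometry of each voxel's trajectory once per patient, and the network only has to predict a scalar phase field per frame, which is what makes sub-7 ms inference plausible.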
