Abstract

This study aims to develop a deep learning method that accounts for respiratory motion, enabling the generation of realistic dose-volume histograms and more accurate, efficient propagation of organ and tumor contours from a target phase to all phases in lung 4DCT patient datasets. Our proposed method is a platform that performs Deformable Image Registration (DIR) of individual phase datasets in a simulation 4DCT and comprises a generator and a discriminator. The generator accepts the moving and target CTs as input and outputs the Deformation Vector Fields (DVFs) required to match the two CTs. The generator is optimized along both the forward and backward paths to enhance the bidirectionality of DVF generation. Further, landmarks are used to weakly supervise the generator through a landmark-driven loss. The discriminator then judges the realism of the deformed CT, providing additional DVF regularization. The publicly available DIR-Lab dataset was used to evaluate the proposed method against other methods in the literature by calculating the Target Registration Error (TRE). The proposed method outperformed other deep learning-based methods on the DIR-Lab datasets in terms of TRE, and the bidirectional and landmark-driven losses were shown to be effective in achieving high registration accuracy. The mean TRE across the DIR-Lab datasets was 1.03 ± 0.66 mm. These results demonstrate the feasibility and efficacy of the proposed method, offering a potential approach for realistic registration of phases in lung 4DCT datasets.
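For readers unfamiliar with the evaluation metric, the sketch below illustrates how TRE is typically computed: each landmark in the moving phase is displaced by the predicted DVF and compared against its corresponding landmark in the target phase, with distances reported in millimetres. This is a minimal conceptual example, not the authors' implementation; the function name, the nearest-voxel DVF sampling, and the array layouts are assumptions made for illustration.

```python
# Conceptual sketch (not the authors' code): Target Registration Error (TRE)
# between corresponding landmarks after applying a predicted deformation
# vector field (DVF). Assumes landmark coordinates are voxel indices ordered
# consistently with the DVF axes, and `spacing` gives voxel size in mm.
import numpy as np


def target_registration_error(moving_landmarks, target_landmarks, dvf, spacing):
    """Return mean and std of the Euclidean landmark error in mm.

    moving_landmarks, target_landmarks: (N, 3) arrays of voxel coordinates.
    dvf: (D, H, W, 3) displacement field in voxel units.
    spacing: (3,) voxel size in mm along each axis.
    """
    moving = np.asarray(moving_landmarks, dtype=float)
    target = np.asarray(target_landmarks, dtype=float)
    spacing = np.asarray(spacing, dtype=float)

    # Sample the displacement at each moving landmark (nearest voxel here;
    # trilinear interpolation would be more precise).
    idx = np.round(moving).astype(int)
    displacements = dvf[idx[:, 0], idx[:, 1], idx[:, 2]]  # (N, 3)

    # Warp the moving landmarks and measure the residual distance in mm.
    warped = moving + displacements
    errors_mm = np.linalg.norm((warped - target) * spacing, axis=1)
    return errors_mm.mean(), errors_mm.std()
```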
