Abstract
Traditional deformable image registration (DIR) algorithms such as optical flow and demons are iterative and slow, especially for large 4D-CT datasets. To quickly register 4D-CT lung images for treatment planning and target definition, the computational speed of current DIR methods needs to be improved. Deep learning-based DIR methods that enable direct transformation prediction are promising alternatives for 4D-CT DIR. In this study, we propose to integrate a dilated inception module (DIM) and self-attention gates (Self-AGs) into a deep learning framework for 4D-CT lung DIR. To overcome the shortage of manually aligned 'ground truth' training datasets, the network was trained in an unsupervised manner. In addition to the fixed and moving images, the gradient images of both along the x, y, and z directions were included as input to provide the network with additional information for transformation prediction. The DIM extracts multi-scale structural features for robust feature learning. Self-AGs were applied at different scales throughout the encoding and decoding pathways to highlight the structures representing feature differences between the moving and fixed images. The network was trained using pairs of 3D image patches extracted from two randomly selected phases of the same 4D-CT image set. The loss function of the proposed network contains three parts: an image similarity loss, an adversarial loss, and a regularization loss. The network was trained and tested on 25 patients' 4D-CT datasets using five-fold cross-validation. The proposed method was evaluated using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) between the deformed image and the fixed image. The MAE, PSNR, and NCC were 19.2 ± 6.5, 35.4 ± 3.0, and 0.995 ± 0.002, respectively. Target registration errors (TREs) were calculated using manually selected landmark pairs; the average TRE was 3.38 ± 2.36 mm, which was comparable to that of traditional DIR algorithms. In summary, the proposed method achieved performance comparable to that of traditional DIRs while being orders of magnitude faster (less than a minute).
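To make the three-part training objective concrete, the sketch below combines an image similarity term, an adversarial term, and a regularization term in the way the abstract describes. It is a minimal illustration, assuming a PyTorch implementation; the NCC-based similarity term, the discriminator-based adversarial formulation, the displacement-field smoothness regularizer, and the weights `w_sim`, `w_adv`, `w_reg` are assumptions and are not specified in the abstract.

```python
# Hypothetical sketch of a three-term unsupervised registration loss
# (similarity + adversarial + regularization); details are assumptions.
import torch
import torch.nn.functional as F

def ncc_loss(warped, fixed, eps=1e-5):
    """Image similarity: 1 minus global normalized cross-correlation."""
    w = warped - warped.mean()
    f = fixed - fixed.mean()
    ncc = (w * f).sum() / (w.norm() * f.norm() + eps)
    return 1.0 - ncc

def smoothness_loss(dvf):
    """Regularization: L2 penalty on spatial gradients of the displacement
    vector field. dvf has shape (N, 3, D, H, W)."""
    dz = dvf[:, :, 1:, :, :] - dvf[:, :, :-1, :, :]
    dy = dvf[:, :, :, 1:, :] - dvf[:, :, :, :-1, :]
    dx = dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]
    return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()

def total_loss(warped, fixed, dvf, disc_logits_on_warped,
               w_sim=1.0, w_adv=0.1, w_reg=0.5):
    """Combine the three terms; disc_logits_on_warped is the discriminator
    output for the deformed image (the generator tries to make it look
    indistinguishable from the fixed image)."""
    sim = ncc_loss(warped, fixed)
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_warped, torch.ones_like(disc_logits_on_warped))
    reg = smoothness_loss(dvf)
    return w_sim * sim + w_adv * adv + w_reg * reg
```

The relative weights would in practice be tuned on validation data; the values above are placeholders for illustration only.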