Abstract

The registration of computed tomography (CT) and cone-beam computed tomography (CBCT) images plays a key role in image-guided radiotherapy (IGRT). However, the large intensity variation between CT and CBCT images limits registration performance and its clinical application in IGRT. In this study, a learning-based unsupervised approach was developed to address this issue and accurately register CT and CBCT images by predicting the deformation field. A dual attention module was used to handle the large intensity variation between the two modalities. Specifically, a scale-aware position attention block (SP-BLOCK) and a scale-aware channel attention block (SC-BLOCK) were employed to integrate contextual information across the spatial and channel dimensions. The SP-BLOCK enhances the correlation of similar features by weighting and aggregating multi-scale features at different positions, while the SC-BLOCK aggregates the features of all channel maps to selectively emphasize inter-channel dependencies. The proposed method was compared with existing mainstream methods on the 4D-LUNG data set, where it achieved the highest structural similarity (SSIM) and Dice similarity coefficient (DICE) scores, 86.34% and 89.74% respectively, and the lowest target registration error (TRE), 2.07 mm. The proposed method registers CT and CBCT images with high accuracy without the need for manual labeling, providing an effective way to achieve high-accuracy patient positioning and target localization in IGRT.
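
The abstract does not include implementation detail, so the following is only a minimal PyTorch sketch of the position- and channel-attention pattern it describes (in the spirit of dual attention networks), not the authors' SP-BLOCK/SC-BLOCK. The class names (PositionAttention, ChannelAttention), the 3D feature-map shapes, and the learned residual weight gamma are illustrative assumptions, and the scale-aware multi-scale weighting is omitted.

import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    # Illustrative sketch only: each voxel's feature is re-weighted by its
    # similarity to every other position, so similar structures reinforce
    # one another. The paper's SP-BLOCK additionally aggregates multi-scale
    # features, which is not shown here.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):                                # x: (b, c, d, h, w)
        b, c, *spatial = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (b, n, c/8), n = d*h*w
        k = self.key(x).flatten(2)                       # (b, c/8, n)
        attn = torch.softmax(q @ k, dim=-1)              # (b, n, n) position affinities
        v = self.value(x).flatten(2)                     # (b, c, n)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, *spatial)
        return self.gamma * out + x                      # residual connection

class ChannelAttention(nn.Module):
    # Illustrative sketch only: a c-by-c affinity matrix between channel maps
    # selectively emphasizes dependencies between channels, as the SC-BLOCK
    # description suggests.
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):                                # x: (b, c, d, h, w)
        b, c, *spatial = x.shape
        feat = x.flatten(2)                              # (b, c, n)
        attn = torch.softmax(feat @ feat.transpose(1, 2), dim=-1)  # (b, c, c)
        out = (attn @ feat).reshape(b, c, *spatial)
        return self.gamma * out + x

Note that the full n-by-n position affinity matrix is memory-intensive for volumetric data; the scale-aware design presumably addresses this, but the abstract gives no detail, so this sketch keeps the plain formulation.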
