Abstract
A long-standing problem in image-guided radiotherapy is that low-quality intraoperative images are difficult for automatic registration algorithms to handle. In particular, for registration between digital radiography (DR) and digitally reconstructed radiographs (DRRs), the blurred, low-contrast, and noisy DR images make multimodal DR–DRR registration challenging. We therefore propose a novel CNN-based method, CrossModalNet, which exploits the high-quality preoperative modality (DRR) to compensate for the limitations of the intraoperative images (DR), thereby improving registration accuracy. The method consists of two parts: DR–DRR contour prediction and contour-based rigid registration. We design a CrossModal Attention Module and a CrossModal Refine Module to fully exploit multiscale crossmodal features and to implement crossmodal interactions during the feature encoding and decoding stages. The predicted DR and DRR anatomical contours are then registered using the classic mutual information method. We collected 2486 patient scans to train CrossModalNet and 170 scans to test its performance. The results show that it outperforms classic and state-of-the-art methods, achieving a 95th percentile Hausdorff distance of 5.82 pixels and a registration accuracy of 81.2%. The code is available at https://github.com/lc82111/crossModalNet.
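For illustration only (this is not the authors' implementation, which is available at the GitHub link above): the sketch below shows how the second stage described in the abstract could look, assuming the predicted contours arrive as 2D NumPy arrays. It uses SimpleITK's Mattes mutual information metric with a rigid 2D Euler transform, plus a 95th percentile Hausdorff distance helper matching the reported evaluation metric. The helper names register_contours_mi and hd95, and all parameter values, are illustrative assumptions rather than details from the paper.

# Minimal sketch (not the authors' code): rigid 2D registration of predicted
# DR/DRR contour images via mutual information, plus an HD95 metric in pixels.
# Contour inputs and all parameter choices are assumptions for illustration.
import numpy as np
import SimpleITK as sitk
from scipy.spatial.distance import cdist

def register_contours_mi(fixed_arr: np.ndarray, moving_arr: np.ndarray) -> sitk.Transform:
    """Rigidly align a moving contour image to a fixed one using Mattes MI."""
    fixed = sitk.GetImageFromArray(fixed_arr.astype(np.float32))
    moving = sitk.GetImageFromArray(moving_arr.astype(np.float32))

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.2)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()

    # Initialize the rigid (rotation + translation) transform at the image centers.
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler2DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """95th percentile Hausdorff distance (pixels) between two binary contour masks."""
    pts_a, pts_b = np.argwhere(mask_a > 0), np.argwhere(mask_b > 0)
    d = cdist(pts_a, pts_b)                       # pairwise pixel distances
    return max(np.percentile(d.min(axis=1), 95),  # directed a -> b distances
               np.percentile(d.min(axis=0), 95))  # directed b -> a distances

The resulting transform could then be applied with sitk.Resample(moving, fixed, tx) to bring the DR contour into the DRR frame before evaluation; the symmetric max over both directed distances is the conventional definition of HD95.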