Abstract

Deformable multi-modal medical image registration aligns the anatomical structures of different modalities to the same coordinate system through a spatial transformation. Because ground-truth registration labels are difficult to collect, existing methods often adopt the unsupervised multi-modal registration setting. However, it is hard to design a satisfactory similarity metric for multi-modal images, which severely limits registration performance. Moreover, because the same organ exhibits different contrast across modalities, it is difficult to extract and fuse the representations of different modal images. To address these issues, we propose a novel unsupervised multi-modal adversarial registration framework that leverages image-to-image translation to map medical images from one modality to another. In this way, well-defined uni-modal metrics can be used to better train the models. Within our framework, we propose two improvements to promote accurate registration. First, to prevent the translation network from learning spatial deformation, we propose a geometry-consistent training scheme that encourages the translation network to learn only the modality mapping. Second, we propose a novel semi-shared multi-scale registration network that effectively extracts features from multi-modal images and predicts multi-scale registration fields in a coarse-to-fine manner, accurately registering regions with large deformation. Extensive experiments on brain and pelvic datasets demonstrate the superiority of the proposed method over existing methods, indicating that our framework has great potential for clinical application.
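
For concreteness, below is a minimal PyTorch-style sketch of the two ideas as one plausible reading of the abstract, not the authors' implementation. It shows (a) a geometry-consistency penalty that discourages a translation network G from encoding spatial deformation, and (b) a coarse-to-fine loop that accumulates multi-scale displacement fields. All names here (G, predict_field, the image pyramids, the flip used as the geometric transform) are hypothetical placeholders.

import torch
import torch.nn.functional as F

def flip(x):
    # A simple invertible spatial transform T; any flip/rotation would do.
    return torch.flip(x, dims=[-1])

def geometry_consistency_loss(G, x):
    # If G maps only appearance (modality), it should commute with any
    # spatial transform T:  G(T(x)) ~= T(G(x)).
    return F.l1_loss(G(flip(x)), flip(G(x)))

def warp(img, field):
    # img: (N, C, H, W); field: (N, 2, H, W) displacements expressed in
    # the normalized [-1, 1] coordinates used by grid_sample.
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).to(img)
    return F.grid_sample(img, base + field.permute(0, 2, 3, 1),
                         align_corners=True)

def coarse_to_fine_field(predict_field, movings, fixeds):
    # movings/fixeds: image pyramids, coarsest scale first.
    # predict_field(m, f) returns a residual displacement field at the
    # scale of its inputs (a hypothetical per-scale registration head).
    field = None
    for m, f in zip(movings, fixeds):
        if field is not None:
            # Normalized displacements are resolution-independent, so
            # upsampling the field needs no rescaling of its values.
            field = F.interpolate(field, size=m.shape[-2:],
                                  mode="bilinear", align_corners=True)
            m = warp(m, field)
        residual = predict_field(m, f)
        # Additive update is a common simplification of exact field
        # composition.
        field = residual if field is None else field + residual
    return field

In training, geometry_consistency_loss would be one term alongside the adversarial and registration losses; the abstract does not specify the transforms or loss weights, so those are left as choices here.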
