Abstract

Multimodal medical image registration plays an important role in monitoring tumor growth, planning radiotherapy, and diagnosing disease. Deep-learning-based methods have made great progress in recent years, but their success depends on large training datasets, and model performance degrades through overfitting and poor generalization when only limited data are available. In this paper, we propose a multimodal medical image registration framework based on few-shot learning, named reverse-net, which improves the accuracy and generalization ability of the network using only a few segmentation labels. First, a border enhancement network enhances the ROI (region of interest) boundaries of T1 images to provide high-quality data for the subsequent pixel alignment stage. Second, a coarse registration network roughly aligns the T1 and T2 images. Then, a pixel alignment network generates smoother deformation fields. Finally, a reverse teaching network uses the warped T1 segmentation labels and warped images produced by the deformation field to teach the border enhancement network additional structural knowledge. The performance and generalizability of our model were evaluated on publicly available brain datasets, including MRBrainS13DataNii-Pro, SRI24, CIT168, and OASIS. Compared with VoxelMorph, reverse-net improved DSC by 4.36% on the MRBrainS13DataNii-Pro dataset. On the unseen OASIS dataset, reverse-net improved DSC by 4.2% over VoxelMorph, showing that the model generalizes better. Its promising performance on CIT168 indicates that the model is practical.
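The abstract describes warping T1 images and their segmentation labels with the deformation field predicted by the pixel alignment network. The sketch below illustrates the standard spatial-transformer-style warping step in PyTorch; it is not the paper's implementation, and the 2D shapes, the displacement-in-voxels convention, and the `warp` helper name are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow, mode="bilinear"):
    """Warp a moving image with a dense displacement field.

    moving: (N, C, H, W) tensor, e.g. a T1 image or its segmentation label.
    flow:   (N, 2, H, W) displacement field in pixel units, as predicted by
            a registration network (shape convention assumed, not from the paper).
    Use mode="nearest" for segmentation labels to keep them discrete.
    """
    n, _, h, w = moving.shape
    # Identity grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=moving.dtype, device=moving.device),
        torch.arange(w, dtype=moving.dtype, device=moving.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    # Add the predicted displacement to the identity grid.
    coords = grid + flow
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(moving, sample_grid, mode=mode, align_corners=True)
```

Applying the same deformation field to both the image and its segmentation label (with `mode="nearest"` for the label) is what makes the warped labels usable as supervision, as in the reverse teaching step described above.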

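The reported gains are in DSC, the Dice similarity coefficient, which measures the overlap between a warped segmentation and the fixed image's ground-truth segmentation. Below is a minimal sketch of the standard definition, DSC = 2|A ∩ B| / (|A| + |B|); the function name and the integer-label-map input format are assumptions, not taken from the paper.

```python
import numpy as np

def dice_coefficient(seg_pred, seg_true, label):
    """Dice similarity coefficient for one anatomical label:
    DSC = 2 * |A ∩ B| / (|A| + |B|).

    seg_pred, seg_true: integer label maps of identical shape,
    e.g. a warped moving segmentation and the fixed ground truth.
    """
    a = seg_pred == label
    b = seg_true == label
    denom = a.sum() + b.sum()
    if denom == 0:  # label absent from both volumes
        return float("nan")
    return 2.0 * np.logical_and(a, b).sum() / denom
```

An overall registration score is typically the mean DSC over all anatomical labels present in the ground truth, which is the kind of figure the 4.36% and 4.2% improvements over VoxelMorph refer to.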