Abstract

Feature matching, i.e., establishing correspondences between regions (usually voxel features) of two images, is a crucial prerequisite of feature-based registration. For deformable image registration, traditional feature-based methods typically match regions of interest with an iterative strategy in which feature selection and matching are explicit; however, such handcrafted feature selection schemes are usually tailored to application-specific problems and require several minutes per registration. In recent years, learning-based methods such as VoxelMorph and TransMorph have proven feasible, with performance competitive with traditional methods. However, these methods are usually single-stream: the two images to be registered are concatenated into a single 2-channel input, and the deformation field is output directly, so the transformation of image features into inter-image matching relationships remains implicit. In this paper, we propose a novel end-to-end dual-stream unsupervised framework, named TransMatch, in which each image is fed into a separate branch that performs feature extraction independently. We then perform explicit multilevel feature matching between image pairs via the query-key matching idea of the self-attention mechanism in the Transformer model. Comprehensive experiments on three 3D brain MR datasets, LPBA40, IXI, and OASIS, show that the proposed method achieves state-of-the-art performance on several evaluation metrics compared with commonly used registration methods, including SyN, NiftyReg, VoxelMorph, CycleMorph, ViT-V-Net, and TransMorph, demonstrating the effectiveness of our model in deformable medical image registration.
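
To make the query-key matching idea concrete, the sketch below shows one way explicit cross-image matching can be realized with attention: queries are computed from one image's voxel features and keys/values from the other's, so the attention map encodes soft inter-image correspondences. This is a minimal, illustrative single-head sketch, not the TransMatch implementation; the module name, tensor shapes, and feature dimension are assumptions.

```python
import torch
import torch.nn as nn

class CrossFeatureMatching(nn.Module):
    """Hypothetical cross-attention matching between two feature streams.

    Queries come from the moving image's features; keys and values come
    from the fixed image's features, so the attention map expresses
    explicit voxel-to-voxel correspondences between the two images.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5  # standard attention scaling

    def forward(self, feat_moving: torch.Tensor, feat_fixed: torch.Tensor):
        # feat_*: (batch, num_voxels, dim) -- flattened voxel features
        q = self.q(feat_moving)
        k = self.k(feat_fixed)
        v = self.v(feat_fixed)
        # Attention map: soft correspondence of each moving voxel to fixed voxels
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Aggregate fixed-image features for each moving voxel
        return attn @ v

# Example: match two 8-voxel feature sets of dimension 32
matcher = CrossFeatureMatching(dim=32)
out = matcher(torch.randn(1, 8, 32), torch.randn(1, 8, 32))
print(out.shape)  # torch.Size([1, 8, 32])
```

In a multilevel scheme of the kind the abstract describes, one such matching block would presumably be applied at each level of the feature hierarchy, with the matched features then decoded into a deformation field.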
