Abstract

Deformable image registration is essential for subsequent medical image analysis. Although existing convolutional neural network (CNN)- and transformer-based registration methods achieve promising accuracy with fast inference, the limitations of CNNs and transformers on their own prevent further improvements in registration accuracy. To address this issue, we propose a dual-flow neural network for medical image registration that consists of two encoders for extracting both local and global features: a CNN encoder responsible for mining local differences between the fixed and moving images, and a transformer encoder estimating the long-range spatial relationships between distant voxels in the two images. Both global and local features are fed into a common decoder to estimate the displacement vector field. To validate the superiority of the dual-flow CNN and transformer network, we compare the proposed method with several state-of-the-art (SOTA) models on T1-weighted image registration. The results demonstrate that the dual-flow CNN and transformer mixed model outperforms the SOTA methods and increases the similarity of surfaces and boundaries between the fixed and warped images.
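
To make the dual-flow idea concrete, the sketch below shows one plausible way to combine a CNN encoder, a transformer encoder, and a shared decoder that regresses a 3D displacement vector field. This is not the authors' implementation; the class name, layer sizes, patch size, and fusion by channel concatenation are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): dual-flow registration
# network with a CNN encoder for local features, a transformer encoder for
# long-range dependencies, and a common decoder predicting the displacement field.
import torch
import torch.nn as nn


class DualFlowRegNet(nn.Module):
    def __init__(self, in_ch=2, feat=32, patch=4, heads=4, depth=2):
        super().__init__()
        # CNN flow: local differences between the fixed and moving images.
        self.cnn_encoder = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(feat, feat, 3, stride=patch, padding=1), nn.LeakyReLU(0.2),
        )
        # Transformer flow: long-range spatial relationships between distant voxels.
        self.patch_embed = nn.Conv3d(in_ch, feat, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat, nhead=heads, dim_feedforward=feat * 4, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=depth)
        # Common decoder: fuse both flows and regress a 3-channel displacement field.
        self.decoder = nn.Sequential(
            nn.Conv3d(2 * feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Upsample(scale_factor=patch, mode="trilinear", align_corners=False),
            nn.Conv3d(feat, 3, 3, padding=1),
        )

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)          # (B, 2, D, H, W)
        local_feat = self.cnn_encoder(x)               # (B, feat, D/p, H/p, W/p)
        tokens = self.patch_embed(x)                   # (B, feat, D/p, H/p, W/p)
        b, c, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)        # (B, N, feat) token sequence
        global_feat = self.transformer(seq)            # (B, N, feat)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, d, h, w)
        fused = torch.cat([local_feat, global_feat], dim=1)
        return self.decoder(fused)                     # (B, 3, D, H, W) displacement field


if __name__ == "__main__":
    fixed = torch.randn(1, 1, 32, 32, 32)
    moving = torch.randn(1, 1, 32, 32, 32)
    flow = DualFlowRegNet()(fixed, moving)
    print(flow.shape)  # torch.Size([1, 3, 32, 32, 32])
```

In practice the predicted displacement field would be applied to the moving image with a spatial transformer (differentiable warping) and trained with a similarity loss plus a smoothness regularizer, which is the standard setup for unsupervised deformable registration.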
