Abstract

In recent years, registration methods based on deep learning have received extensive attention from scholars due to their superiority in real-time performance. Most existing work directly uses convolutional neural networks (CNNs) to map the images to be registered into the transformation space. However, the receptive field of a CNN is limited, and multiple convolutional layers must be stacked to obtain a sufficiently large receptive field. Transformer-based methods can better express spatial relationships through attention mechanisms. However, the self-attention and multi-head mechanisms force each patch to compute its relationship with every other patch regardless of distance. Because corresponding voxels in medical images move only within a limited range, this long-range dependence may allow distant voxels to interfere with the model. In this paper, we convert the spatial transformation of corresponding voxels into a computation over basic vector bases, propose the SV-basis module, and design a two-stage multi-scale registration model. In addition, according to the anatomical characteristics of medical images, a corresponding loss function is designed to introduce mask information into the registration task. Experiments are carried out on brain and lung datasets to demonstrate the effectiveness and generality of the proposed registration method. The experimental results show that the proposed method can accurately register brain and lung images.
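For context, the following is a minimal sketch of the standard deep-learning registration setup the abstract builds on, not the authors' SV-basis module: a CNN maps a moving/fixed image pair to a dense displacement field, and a spatial transformer warps the moving image toward the fixed one. The names `RegNet` and `warp` are hypothetical, and the toy 2D network stands in for the multi-scale architecture described in the paper.

```python
# Minimal sketch (assumed setup, not the authors' SV-basis model): a CNN
# predicts a per-pixel displacement field, which warps the moving image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Toy CNN: concatenated image pair -> 2-channel displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # (dx, dy) per pixel
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(moving, flow):
    """Warp `moving` by displacement field `flow` (B, 2, H, W)."""
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
    # Convert pixel displacements to grid_sample's [-1, 1] convention.
    disp = torch.stack(
        [flow[:, 0] * 2 / (w - 1), flow[:, 1] * 2 / (h - 1)], dim=-1)
    return F.grid_sample(moving, grid + disp, align_corners=True)

moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = RegNet()(moving, fixed)
warped = warp(moving, flow)  # train with e.g. MSE(warped, fixed) + smoothness
```

In this framing, the limited receptive field criticized in the abstract corresponds to the small stack of 3x3 convolutions in `RegNet`; a Transformer variant would instead let every patch attend to every other patch, which is the long-range dependence the paper argues can introduce interference.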
