Abstract

Medical image registration (MIR) is an important problem in medical image research, and its purpose is to establish a spatial correspondence between two medical images. Previous MIR methods based on supervised deep learning usually employ ground-truth deformations or manual landmarks to guide network training. Because ground truth and manual landmarks are difficult to obtain in practice, implementing supervised deep-learning MIR methods is challenging. Currently, most MIR methods based on unsupervised deep learning are patch-based: they learn the local spatial transformation from one image patch to the corresponding patch. Although patch-based methods perform well locally, they generate grid-like artifacts at patch edges during patch fusion. In this paper, we design a fully convolutional network (FCN) whose input is a pair of images and whose output is a displacement field. The displacement field is applied through a spatial transformer network (STN) to warp the moving image toward the fixed image, achieving end-to-end image registration. No supervision information is used in the proposed method. We evaluate the proposed method on the public DIR-Lab dataset. The average target registration error (TRE) is 2.03 (1.42), and registration takes less than one second. The results demonstrate that the proposed method has satisfactory registration capability.
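
The abstract describes the overall pipeline (an FCN that maps an image pair to a displacement field, followed by an STN-style warp and an unsupervised image-similarity objective) without implementation details. The following is a minimal PyTorch sketch of that pipeline, not the authors' code: the network depth, channel counts, similarity term (MSE), and smoothness weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegistrationFCN(nn.Module):
    """Fully convolutional network: (moving, fixed) pair -> 3-D displacement field."""
    def __init__(self, chans=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, chans, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(chans, chans, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(chans, 3, 3, padding=1),   # 3 output channels = (dx, dy, dz)
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def stn_warp(moving, displacement):
    """Spatial-transformer-style warp of the moving image with a displacement field
    expressed in normalized [-1, 1] coordinates (an assumption of this sketch)."""
    b = moving.shape[0]
    identity = torch.eye(3, 4).unsqueeze(0).repeat(b, 1, 1)
    grid = F.affine_grid(identity, list(moving.shape), align_corners=False).to(moving.device)
    flow = displacement.permute(0, 2, 3, 4, 1)  # (N, D, H, W, 3) for grid_sample
    return F.grid_sample(moving, grid + flow, align_corners=False)

# One unsupervised training step: no ground-truth deformation or landmarks,
# only an image-similarity term plus a smoothness penalty on the field.
model = RegistrationFCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
moving = torch.rand(1, 1, 32, 64, 64)   # toy volumes; real CT volumes are larger
fixed = torch.rand(1, 1, 32, 64, 64)

disp = model(moving, fixed)
warped = stn_warp(moving, disp)
similarity = F.mse_loss(warped, fixed)
smoothness = sum(g.abs().mean() for g in torch.gradient(disp, dim=(2, 3, 4)))
loss = similarity + 0.01 * smoothness
loss.backward()
opt.step()
```

Because the warp is differentiable, the similarity loss back-propagates through the STN into the FCN, which is what makes the registration trainable end to end without supervision.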
