Abstract

Non-rigid multi-modal medical image registration is a challenging task in medical image processing and analysis because of unpredictable, complicated deformations and the non-functional intensity relationship between images. Structural representation based registration (SRR) methods can address these factors to some extent by transforming the problem into a mono-modal registration. However, existing SRR algorithms generally rely on handcrafted features for structural representation, which tends to produce inaccurate registration results. To address this problem, this paper proposes a novel Laplacian Eigenmaps based deep learning network for 2D medical image registration. The proposed network is first used to extract the intrinsic features of images from different modalities. The self-similarity of the extracted features is then exploited to construct a learning-based data-adaptive descriptor (LDAD) for structural representation. The sum of squared differences (SSD) between the structural representations of the reference and floating images serves as the similarity metric in the objective function for registration. Experimental results show that, in terms of target registration error, the LDAD-based method produces visually better registration results and higher registration accuracy on different test images than several state-of-the-art registration methods.
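To illustrate the idea behind the self-similarity descriptor and the SSD metric described above, the sketch below builds a toy self-similarity representation from raw pixel intensities (a stand-in for the learned features that the paper's LDAD derives from its Laplacian Eigenmaps network; the function names here are hypothetical, not from the paper):

```python
import numpy as np

def self_similarity_descriptor(img, radius=1):
    """Toy self-similarity descriptor: for each pixel, the squared
    differences to its four neighbours at the given radius. LDAD uses
    learned deep features instead of raw intensities; this is only a
    minimal illustration of the self-similarity principle."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='edge')
    h, w = img.shape
    shifts = [(-radius, 0), (radius, 0), (0, -radius), (0, radius)]
    desc = np.stack(
        [(pad[radius + dy:radius + dy + h,
              radius + dx:radius + dx + w] - img) ** 2
         for dy, dx in shifts],
        axis=-1)
    # Normalise so the descriptor reflects structure, not intensity scale
    desc /= desc.max() + 1e-12
    return desc

def ssd(desc_ref, desc_flt):
    """Sum of squared differences between two structural representations."""
    return float(np.sum((desc_ref - desc_flt) ** 2))
```

Because the descriptor is built from intensity *differences*, it is unchanged under an intensity inversion (a crude proxy for a modality change): `ssd` between the descriptors of an image and its inverted copy is zero, which is exactly the property that lets a structural representation turn a multi-modal problem into a mono-modal one.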
