Abstract
Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR)-based registration methods have attracted much attention recently. However, existing SR methods cannot provide satisfactory registration accuracy because they rely on hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn the convolution kernels of the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by the various layers of the PCANet are fused to produce multilevel features. Structural representation images are constructed for the two input images by a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by this similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.
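The registration loop described above (Euclidean distance between SR images as the similarity metric, minimized over FFD parameters with L-BFGS) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the paper's FFD model is defined by equations not reproduced here, so `ffd_warp` uses a hypothetical coarse displacement grid upsampled bilinearly, and the "SR images" are stand-in arrays.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def ffd_warp(image, params, grid_shape=(4, 4)):
    """Warp `image` with a toy free-form deformation: `params` holds
    y/x displacements on a coarse control grid, which are bilinearly
    upsampled to a dense displacement field (illustrative, not the
    paper's exact FFD model)."""
    h, w = image.shape
    gy, gx = grid_shape
    disp = params.reshape(2, gy, gx)            # dy and dx control grids
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy = yy * (gy - 1) / (h - 1)                # pixel -> control-grid coords
    cx = xx * (gx - 1) / (w - 1)
    dy = map_coordinates(disp[0], [cy, cx], order=1)
    dx = map_coordinates(disp[1], [cy, cx], order=1)
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode='nearest')

def ssd(params, sr_ref, sr_flo, grid_shape=(4, 4)):
    """Euclidean (sum-of-squared-differences) distance between the
    reference SR image and the warped floating SR image."""
    warped = ffd_warp(sr_flo, params, grid_shape)
    return np.sum((sr_ref - warped) ** 2)

# Toy example: recover a one-pixel shift between two stand-in SR images.
rng = np.random.default_rng(0)
sr_ref = rng.random((32, 32))
sr_flo = np.roll(sr_ref, 1, axis=1)             # "floating" image, shifted
x0 = np.zeros(2 * 4 * 4)                        # initial FFD parameters
res = minimize(ssd, x0, args=(sr_ref, sr_flo), method='L-BFGS-B')
```

L-BFGS-B here estimates gradients by finite differences; a practical implementation would supply an analytic gradient of the SSD with respect to the control-point displacements.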
Highlights
Nonrigid multimodal image registration is very important for medical image processing and analysis
This paper proposes a PCANet-based structural representation for image registration
Step 1: Train the PCANet on a large amount of training data to obtain the convolution kernels of the two hidden layers; Step 2: Calculate the PCANet-based structural representation (PSR) of the reference image I_r and the PSR of the floating image I_f according to Equations (7)–(9)
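Step 1 above can be sketched as follows. In PCANet (Chan et al.), each stage's convolution kernels are the leading principal components of mean-removed image patches; the sketch below shows that idea for one stage. The function name `learn_pca_filters`, the patch size `k`, and the dense patch-extraction loop are illustrative choices, and the paper's Equations (7)–(9) for the PSR itself are not reproduced here.

```python
import numpy as np

def learn_pca_filters(images, k=5, n_filters=8):
    """Learn k x k convolution kernels as the leading principal
    components of mean-removed patches (one PCANet stage)."""
    patches = []
    for img in images:
        h, w = img.shape
        for y in range(h - k + 1):
            for x in range(w - k + 1):
                p = img[y:y + k, x:x + k].ravel()
                patches.append(p - p.mean())    # remove the patch mean
    X = np.stack(patches)                        # (num_patches, k*k)
    # Right singular vectors of X = eigenvectors of X^T X; the ones
    # with the largest singular values become the filters.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, k, k)

# Toy usage on random stand-in training images.
rng = np.random.default_rng(0)
imgs = [rng.random((16, 16)) for _ in range(3)]
filters = learn_pca_filters(imgs, k=5, n_filters=8)
```

The second-stage kernels are learned the same way, from patches of the first-stage response maps.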
Summary
Nonrigid multimodal image registration is very important for medical image processing and analysis. An SR method based on the Weber local descriptor (WLD) was presented by Yang et al. [16]; however, it is sensitive to image noise and sometimes fails to provide consistent structural representation results for the same organs in multimodal images. SR methods based on deep learning have also been proposed, but because they adopt supervised learning, their performance is limited by the scarcity of labeled medical imaging data and by the inaccurate labeled training samples that traditional registration methods produce for training the CNN. To address these problems, this paper proposes a PCANet-based structural representation for image registration. PCANet, proposed by Chan et al. [29], consists of an input layer, hidden layers, and an output layer.
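The layered PCANet structure mentioned above can be sketched as a two-stage cascade of convolutions: the input image is filtered by the first-stage kernels, and each response map is then filtered by every second-stage kernel. This is a hypothetical minimal sketch of the hidden layers only; PCANet's original output layer (binarization, hashing, and block histograms) is omitted, since the paper instead fuses the multilevel feature maps into a structural representation image.

```python
import numpy as np
from scipy.signal import convolve2d

def pcanet_features(image, filters1, filters2):
    """Feature maps from the two PCANet hidden layers: the first stage
    filters the input image; the second stage filters every first-stage
    response with every second-stage kernel (output layer omitted)."""
    stage1 = [convolve2d(image, f, mode='same') for f in filters1]
    stage2 = [convolve2d(m, g, mode='same') for m in stage1 for g in filters2]
    return stage1, stage2

# Toy usage with random stand-in kernels.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
filters1 = rng.random((4, 3, 3))
filters2 = rng.random((2, 3, 3))
stage1, stage2 = pcanet_features(image, filters1, filters2)
```

With L1 first-stage and L2 second-stage kernels, the second stage yields L1 x L2 maps, which is what makes multilevel feature fusion across the two layers possible.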