Abstract

Multi-modal medical image registration plays an essential role in image-based clinical diagnosis and surgical planning. It is not trivial because of appearance variations across modalities. Rigid alignment is used to register rigid body structures, and it is also usually the first step of deformable registration when the discrepancy is large. In computer vision, one well-established approach to image alignment is to find corresponding points in the two images and align them based on those correspondences; our method falls into this category. Feature representation is crucial for finding corresponding points, but conventional hand-crafted descriptors such as SIFT do not take multi-modal information into account and therefore fail. In this paper, we propose a Convolutional Neural Network Feature-based Registration (CNNFR) method for aligning multi-modal medical images. Its key component is learning keypoint descriptors with contrastive metric learning, which minimizes the distance between the feature representations of two corresponding points and maximizes the distance between those of two distant points. We also propose a transfer learning-based CNNFR (TrCNNFR) to improve generalization when the training data are insufficient. Experimental results demonstrate that the proposed methods achieve superior accuracy and robustness, and can be used to rigidly register multi-modal images and to provide an initial estimate for non-rigid registration in clinical practice.
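As a concrete illustration, the contrastive objective described above — pull corresponding-point descriptors together, push distant-point descriptors apart — can be sketched with the classic margin-based contrastive loss. This is a minimal NumPy sketch, not the authors' implementation; the function name and the margin value are assumptions:

```python
import numpy as np

def contrastive_loss(f1, f2, y, margin=1.0):
    """Margin-based contrastive loss over batches of descriptors.

    f1, f2 : (N, D) arrays of keypoint descriptors from the two images.
    y      : (N,) labels, 1 if the pair corresponds, 0 if it is a distant pair.

    Corresponding pairs are penalized by their squared distance;
    non-corresponding pairs are penalized only while closer than the margin.
    """
    d = np.linalg.norm(f1 - f2, axis=-1)          # Euclidean distance per pair
    pos = y * d ** 2                              # pull matching pairs together
    neg = (1.0 - y) * np.maximum(0.0, margin - d) ** 2  # push non-matches apart
    return np.mean(pos + neg)
```

Training the Siamese network then amounts to minimizing this loss over sampled corresponding and distant keypoint pairs across the two modalities.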

Highlights

  • Medical imaging provides insights into the size, shape, and spatial relationships among anatomical structures

  • The learned keypoint descriptors extracted by our Siamese network significantly outperform hand-crafted feature descriptors

  • We propose contrastive metric learning-based rigid multi-modal medical image registration methods: Convolutional Neural Network Feature-based Registration (CNNFR) and TrCNNFR, which distills knowledge from natural images


Introduction

Medical imaging provides insights into the size, shape, and spatial relationships among anatomical structures. CT is well suited to skeletal structures and dense tissue, whereas MRI provides a clear view of soft tissue. Aligning these different modalities can provide complementary information for more effective cancer detection, disease diagnosis, and treatment planning. There is abundant literature on the problem of multi-modal medical image registration ([1]–[15]). The goal of image registration is to find an optimal transformation that brings the fixed and moving images into one coordinate system. Because of the large appearance discrepancy across modalities (e.g., CT and MR), robust and fast multi-modal image registration remains an open problem. (The associate editor coordinating the review of this manuscript and approving it for publication was Ruqiang Yan.)
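Once corresponding points have been identified, the rigid transformation aligning the two images can be recovered in closed form. The excerpt does not spell out how alignment is computed from the correspondences, so the following is a hedged sketch of one standard choice, the least-squares (Kabsch/Procrustes) solution via SVD; the function name is an assumption:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform R, t such that Q ≈ P @ R.T + t.

    P, Q : (N, 2) or (N, 3) arrays of corresponding keypoint coordinates
           in the moving and fixed images, respectively.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cp).T @ (Q - cq)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against a reflection (det = -1) solution.
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ D @ U.T                            # optimal rotation
    t = cq - R @ cp                               # optimal translation
    return R, t
```

In practice the correspondences produced by descriptor matching contain outliers, so such an estimator is typically wrapped in a robust scheme (e.g., RANSAC) rather than applied to all matches directly.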

