Abstract

A classic approach in medical image registration is to formulate an optimization problem for the image pair of interest and seek a deformation vector field (DVF) that minimizes the corresponding objective, often iteratively. This approach focuses squarely on the targeted pair but is typically slow. In contrast, more recent deep-learning-based registration offers a much faster alternative and can benefit from data-driven regularization. However, learning is a process of "fitting" the training cohort, whose image or motion characteristics, or both, may differ from the pair of images to be tested, which is the ultimate goal of registration. The generalization gap therefore poses a high risk with direct inference alone. In this study, we propose an individualized adaptation that improves targeting of the test sample, achieving a synergy of efficiency and performance in registration. Using a previously developed network with an integrated motion-representation prior module as the implementation backbone, we adapt the trained registration network further for each image pair at test time to optimize individualized performance. The adaptation method was tested against various characteristic shifts arising from cross-protocol, cross-platform, and cross-modality settings, with evaluation performed on lung CBCT, cardiac MRI, and lung MRI, respectively. Landmark-based registration errors and motion-compensated image-enhancement results demonstrated significantly improved test registration performance from our method, compared to tuned classic B-spline registration and network solutions without adaptation. We have developed a method that synergistically combines the effectiveness of a pre-trained deep network with the target-centric perspective of optimization-based registration to improve performance on individual test data.
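To illustrate the core idea of test-time adaptation, here is a minimal toy sketch, not the authors' network: a 1-D "registration" with a single translation parameter is refined on the given test pair by gradient descent on a sum-of-squared-differences objective. All names, the finite-difference gradient, and the step size are illustrative assumptions, standing in for back-propagating through a trained network's weights.

```python
import numpy as np

def warp(moving, shift):
    """Resample a 1-D 'image' at positions x - shift (linear interpolation)."""
    x = np.arange(moving.size, dtype=float)
    return np.interp(x - shift, x, moving)

def adapt_to_test_pair(fixed, moving, shift0=0.0, lr=50.0, steps=300, eps=1e-3):
    """Test-time adaptation sketch: refine the (here, single) registration
    parameter on the specific test pair by descending an SSD objective.
    lr is tuned for this toy problem only."""
    shift = shift0
    for _ in range(steps):
        # Central finite-difference gradient of the mean-SSD objective;
        # a real network would use autodiff through its weights instead.
        loss_p = np.mean((warp(moving, shift + eps) - fixed) ** 2)
        loss_m = np.mean((warp(moving, shift - eps) - fixed) ** 2)
        grad = (loss_p - loss_m) / (2 * eps)
        shift -= lr * grad
    return shift
```

For example, with a Gaussian bump as the fixed image and the same bump displaced by 3 samples as the moving image, the adapted shift converges to approximately 3, recovering the misalignment for that individual pair rather than relying on a cohort-level fit.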

