Abstract
We propose a new, adaptive local measure based on gradient orientation similarity for multimodal image registration. We embed this measure in a hierarchical registration framework and show that registration robustness and accuracy can be improved by adapting both the similarity metric and the pixel selection strategy to the Gaussian blurring scale and to the modalities being registered. We also propose a computationally efficient estimation of gradient orientations based on patch-wise rigidity. We have applied our method to both rigid and non-rigid multimodal registration tasks across different modalities. Our approach outperforms mutual information (MI) and previously proposed local approximations of MI on multimodal (e.g. CT/MRI) brain image registration tasks. Furthermore, it shows significant improvements in mean target registration error (mTRE) over standard methods in the highly challenging clinical context of registering pre-operative brain MRI to intra-operative ultrasound (US) images.
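The abstract does not spell out the metric itself; as a rough illustration of the underlying idea only, a minimal gradient-orientation similarity between two images might be computed as in the sketch below. The Gaussian smoothing scale, the squared-cosine form, and the magnitude weighting are assumptions made for this sketch, not the authors' exact formulation or pixel selection strategy.

```python
import numpy as np
from scipy import ndimage


def gradient_orientation_similarity(fixed, moving, sigma=1.0, eps=1e-8):
    """Illustrative gradient-orientation similarity between two 2-D images.

    For each pixel, the orientations of the intensity gradients of the fixed
    and moving images are compared via the squared cosine of the angle
    between them, which is insensitive to contrast reversals across
    modalities.
    """
    # Smoothed image derivatives at the chosen Gaussian scale.
    gx_f = ndimage.gaussian_filter(fixed, sigma, order=(0, 1))
    gy_f = ndimage.gaussian_filter(fixed, sigma, order=(1, 0))
    gx_m = ndimage.gaussian_filter(moving, sigma, order=(0, 1))
    gy_m = ndimage.gaussian_filter(moving, sigma, order=(1, 0))

    dot = gx_f * gx_m + gy_f * gy_m
    norm_f = np.hypot(gx_f, gy_f)
    norm_m = np.hypot(gx_m, gy_m)

    # Squared cosine of the angle between the two gradients
    # (1 = parallel or anti-parallel, 0 = orthogonal); weighting by gradient
    # magnitude is a simple stand-in for an explicit pixel selection step.
    cos2 = (dot / (norm_f * norm_m + eps)) ** 2
    weights = norm_f * norm_m
    return float(np.sum(weights * cos2) / (np.sum(weights) + eps))
```

Varying `sigma` mimics evaluating the measure at different levels of a Gaussian pyramid, which is where the proposed adaptation of metric and pixel selection to the blurring scale would come into play.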