Abstract

Mutual information (MI) quantifies the information shared between two random variables and has been widely used as a similarity metric for multi-modal and uni-modal image registration. A drawback of MI is that it takes into account only the intensity values of corresponding pixels, not those of their neighborhoods. It therefore treats images as a "bag of words", and contextual information is lost. In this work, we present Contextual Conditioned Mutual Information (CoCoMI), which conditions MI estimation on similar structures. Our rationale is that similar structures are more likely to undergo similar intensity transformations. The contextual analysis is performed offline on one of the images; therefore, CoCoMI does not significantly increase the registration time. We use CoCoMI as the similarity measure in a regularized cost function with a B-spline deformation field and efficiently optimize the cost function using a stochastic gradient descent method. We show that, compared to state-of-the-art local MI-based similarity metrics, CoCoMI does not distort images to enforce erroneous identical intensity transformations on different image structures. We further present results on nonrigid registration of ultrasound (US) and magnetic resonance (MR) patient data from image-guided neurosurgery trials performed at our institute and publicly available in the BITE dataset. We show that CoCoMI performs significantly better than state-of-the-art similarity metrics in US to MR registration: it reduces the average mean target registration error (mTRE) over 13 patients from 4.12 mm to 2.35 mm, and the maximum mTRE from 9.38 mm to 3.22 mm.
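Concretely, conditioning MI on structure amounts to a weighted sum of per-class MI terms, $I(F;M\mid C)=\sum_c p(c)\, I(F;M\mid C=c)$, where $C$ indexes the contextual structure labels computed offline on one image and $F$, $M$ are the fixed- and moving-image intensities. The sketch below is an illustration of this conditioning idea under our own assumptions (a plug-in joint-histogram estimator, a fixed bin count, and the function name are ours), not the paper's implementation:

```python
import numpy as np

def contextual_conditioned_mi(fixed, moving, context, n_bins=32):
    """Estimate I(fixed; moving | context) with a plug-in histogram
    estimator. `context` holds integer structure labels computed
    offline on one image, as CoCoMI proposes. Illustrative only."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    context = np.asarray(context)
    n = fixed.size
    cmi = 0.0
    for c in np.unique(context):
        mask = context == c
        p_c = mask.sum() / n                       # p(C = c)
        # Joint histogram of corresponding intensities in this class.
        joint, _, _ = np.histogram2d(fixed[mask], moving[mask], bins=n_bins)
        p_fm = joint / joint.sum()                 # p(f, m | c)
        p_f = p_fm.sum(axis=1, keepdims=True)      # p(f | c)
        p_m = p_fm.sum(axis=0, keepdims=True)      # p(m | c)
        nz = p_fm > 0
        # MI of this class, weighted by its prior probability.
        cmi += p_c * np.sum(p_fm[nz] * np.log(p_fm[nz] / (p_f @ p_m)[nz]))
    return cmi
```

Because the label map is computed once offline, the only cost added at registration time is the per-class histogramming, which is consistent with the abstract's claim that CoCoMI does not significantly increase the registration time.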
