Abstract

Multi-modal image registration is an important technique in medical image processing and analysis. To improve the robustness and accuracy of multi-modal rigid image registration, a novel learning-based dissimilarity function is proposed in this paper. The dissimilarity function measures, via the Bhattacharyya distance, the discrepancy between the joint intensity distribution of the test image pair and the expected intensity distribution learned from a registered image pair. Registration then proceeds by minimizing this dissimilarity function. Eight hundred randomized CT-T1 registrations were performed and evaluated through the Retrospective Image Registration Evaluation (RIRE) project. The experimental results demonstrate that the proposed method achieves higher robustness and accuracy than a closely related approach and a state-of-the-art method.
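
For reference, the standard Bhattacharyya distance between two discrete distributions is given below; the symbols $p$, $q$, and the intensity-bin indices $(i, j)$ are our notation for the observed and learned joint intensity distributions, not taken from the paper itself:

\[
D_B(p, q) = -\ln \sum_{i,j} \sqrt{p(i,j)\, q(i,j)}
\]

Under this definition, $D_B(p, q) \ge 0$ with equality exactly when the two distributions coincide, so minimizing the learned dissimilarity drives the joint intensity distribution of the test pair toward the distribution observed for the correctly registered training pair.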
