Abstract

We present Free Point Transformer (FPT), a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete and sparsely-sampled TRUS imaging, respectively, indicating accuracy superior to the alternative rigid and non-rigid registration algorithms tested, with substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
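The unsupervised loss referred to above is a distance between the transformed source point-set and the target point-set; the highlights name the Chamfer distance as the evaluation metric, and a loss of this form can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point-sets a (N, 3) and b (M, 3):
    for each point, the squared distance to its nearest neighbour in the
    other set, averaged over both directions."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical point-sets give exactly zero; any mismatch increases the loss.
pts = np.random.rand(64, 3)
print(chamfer_distance(pts, pts))  # 0.0
```

Because nearest neighbours are taken in both directions, the measure needs no point correspondences and no equal set sizes, which is consistent with FPT's acceptance of unordered point-sets with a variable number of points.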

Highlights

  • Multimodal image registration is a fundamental problem in medical imaging research wherein images from different modalities are transformed spatially so that corresponding anatomical structures in each image are aligned

  • Free Point Transformer (FPT)-Chamfer gives the lowest average Chamfer distance and target registration error (TRE) in most instances, while Coherent Point Drift (CPD) gives the lowest average for Hausdorff distance

  • Through evaluation in a challenging real-world multimodal image registration task with magnetic resonance (MR) and transrectal ultrasound (TRUS) images, FPT was found to be robust to the partial availability of data


Introduction

Multimodal image registration is a fundamental problem in medical imaging research wherein images from different modalities are transformed spatially so that corresponding anatomical structures in each image are aligned. Multimodal image registration, like unimodal registration, is historically divided into intensity-based methods and feature-based methods (Hajnal et al., 2001; Viergever et al., 2016). In the literature, these methods are distinguished according to whether the registration seeks to align image features that have been extracted explicitly, for instance, through manual or algorithm-based identification of organ boundaries and other anatomical landmarks. In intensity-based methods, a similarity metric computed from the image intensities is minimised by an iterative numerical optimisation scheme.
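As a toy illustration of such an iterative scheme (not taken from the paper), the sketch below recovers a 1-D translation by gradient descent on a sum-of-squared-differences (SSD) similarity metric; the function name, step size, and test signals are assumptions for the example.

```python
import numpy as np

def register_shift_ssd(fixed, moving, iters=100, lr=0.5):
    """Estimate the 1-D shift t aligning `moving` to `fixed` by iteratively
    minimising the SSD similarity metric with gradient descent."""
    x = np.arange(len(fixed), dtype=float)
    t = 0.0
    for _ in range(iters):
        warped = np.interp(x - t, x, moving)  # moving signal resampled at shift t
        residual = warped - fixed
        grad = np.gradient(warped)            # d(warped)/dx, so d(warped)/dt = -grad
        # Gradient step on SSD(t) = sum(residual**2); note the sign from d(warped)/dt.
        t += lr * 2.0 * np.sum(residual * grad)
    return t

# Two Gaussian signals offset by 2 samples; the recovered shift is close to 2.
x = np.arange(100, dtype=float)
fixed = np.exp(-((x - 50.0) / 5.0) ** 2)
moving = np.exp(-((x - 48.0) / 5.0) ** 2)
print(register_shift_ssd(fixed, moving))
```

Real intensity-based registration replaces the single translation parameter with a rigid, affine, or deformable transformation model, and SSD with metrics such as mutual information for the multimodal case, but the optimisation loop has this same shape.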

