Abstract

Slice-to-volume registration that achieves high accuracy in real time is an enabling technology for many clinical imaging scenarios. Recently, deep learning methods have been explored to improve the accuracy and efficiency of such registration significantly. Nevertheless, this 2D/3D registration problem remains very challenging due to several considerable barriers, including the high computational cost of dense sampling in the six-dimensional parameter space and the significant appearance differences across modalities. We propose a Differentiable resampling based Slice-to-Volume Registration network, which achieves real-time and accurate registration in both mono- and multi-modal scenarios. The proposed network learns the out-of-plane transformation parameters in spherical coordinates, which determine the slice content, and is made invariant to the remaining in-plane parameters by feeding 2D rigidly transformed data on the fly, thus reducing the required training sample size remarkably. To further improve performance, we introduce an auxiliary image similarity term connected through differentiable resampling. For multi-modal scenarios, we optionally include the enhanced modality independent neighborhood descriptor to map the different modalities into a common space. The training set size is reduced to 30k samples per dataset while covering half of all possible orientations and 80% of the volume space, and the average registration error drops to 1.84 mm and 6.49 mm for the mono- and multi-modal experiments, respectively. Extensive experiments show the proposed method's superiority over existing deep learning methods for real-time 2D/3D rigid registration in both mono- and multi-modal settings.
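
To make the idea of differentiable resampling concrete, the sketch below shows one possible way to extract an oblique slice from a volume given out-of-plane angles in spherical coordinates, so that gradients of an image-similarity loss can flow back to the pose parameters. This is not the authors' implementation; the function names (`rotation_from_spherical`, `slice_from_volume`), the exact parameterization, and the use of PyTorch's `grid_sample` are illustrative assumptions.

```python
# Hedged sketch: differentiable slice extraction from a 3D volume.
# PyTorch's grid_sample supplies the differentiable (tri)linear resampling.
import torch
import torch.nn.functional as F


def rotation_from_spherical(theta: torch.Tensor, phi: torch.Tensor) -> torch.Tensor:
    """Rotation whose third column is the slice-plane normal given by (theta, phi)
    on the unit sphere (one possible out-of-plane parameterization)."""
    n = torch.stack([torch.sin(theta) * torch.cos(phi),
                     torch.sin(theta) * torch.sin(phi),
                     torch.cos(theta)])
    # Build an orthonormal basis {u, v, n}; u and v span the slice plane.
    ref = torch.tensor([0.0, 0.0, 1.0]) if abs(n[2]) < 0.99 else torch.tensor([1.0, 0.0, 0.0])
    u = torch.linalg.cross(ref, n)
    u = u / u.norm()
    v = torch.linalg.cross(n, u)
    return torch.stack([u, v, n], dim=1)          # 3x3, columns are u, v, n


def slice_from_volume(volume: torch.Tensor, theta, phi, size: int = 128) -> torch.Tensor:
    """Resample one oblique slice through the volume centre.

    volume: (1, 1, D, H, W) tensor; theta, phi: scalar tensors in radians.
    Returns a (1, 1, size, size) slice; gradients flow back to theta and phi.
    """
    R = rotation_from_spherical(theta, phi)
    # Regular grid on the canonical slice plane, in normalized [-1, 1] coordinates.
    lin = torch.linspace(-1.0, 1.0, size)
    gy, gx = torch.meshgrid(lin, lin, indexing="ij")
    plane = torch.stack([gx, gy, torch.zeros_like(gx)], dim=-1)   # (size, size, 3)
    world = plane @ R.T                                           # rotate into volume space
    grid = world.view(1, 1, size, size, 3)                        # (N, D_out=1, H, W, 3)
    slc = F.grid_sample(volume, grid, mode="bilinear", align_corners=True)
    return slc[:, :, 0]                                           # drop the singleton depth dim
```

Because every step above is differentiable, an auxiliary similarity loss between the resampled slice and the input slice can be backpropagated to the predicted out-of-plane parameters, which is the role differentiable resampling plays in the described network.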
