Abstract

We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
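The generative strategy described above can be illustrated with a minimal 2D sketch: a random label map is drawn by taking the argmax over smoothed noise channels, and a training image of arbitrary "contrast" is then synthesized by sampling a random intensity per label. The function names, the box-filter smoothing, and all parameters below are illustrative simplifications, not the paper's actual implementation.

```python
import numpy as np

def synth_label_map(shape=(32, 32), num_labels=4, smooth=8, seed=0):
    """Draw a random label map: one smoothed-noise channel per label,
    argmax across channels yields contiguous random regions."""
    rng = np.random.default_rng(seed)
    chans = []
    for _ in range(num_labels):
        noise = rng.standard_normal(shape)
        # crude smoothing via a repeated 5-point box filter (illustrative)
        for _ in range(smooth):
            noise = (noise
                     + np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
                     + np.roll(noise, 1, 1) + np.roll(noise, -1, 1)) / 5
        chans.append(noise)
    return np.argmax(np.stack(chans), axis=0)

def synth_image(labels, rng=None):
    """Sample a random mean intensity per label and add noise, so every
    draw exposes the network to a new synthetic 'contrast'."""
    if rng is None:
        rng = np.random.default_rng(1)
    means = rng.uniform(0.0, 1.0, labels.max() + 1)
    return means[labels] + 0.05 * rng.standard_normal(labels.shape)
```

Because intensities are resampled at every training step, a network supervised only by label overlap cannot rely on absolute intensities and is pushed toward contrast-invariant features.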

Highlights

  • Image registration estimates spatial correspondences between image pairs and is a fundamental component of medical image analysis.

  • Given the central importance of registration tasks within and across contrasts, and within and across subjects, the goal of this work is a learning-based registration framework agnostic to magnetic resonance imaging (MRI) contrast: we propose a strategy for training networks that excel both within and across contrasts (e.g. T1-weighted (T1w) to T2-weighted (T2w)), even if the test contrasts are not observed during training.

  • We test NiftyReg [13] with the default cost function (NMI) and recommended parameters, and we enable its diffeomorphic model with stationary velocity field (SVF) integration, as in our approach.
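Diffeomorphic SVF models, as used by both NiftyReg's option above and the proposed approach, integrate a stationary velocity field v by scaling and squaring: the field is halved K times so the residual displacement is small, then the resulting map is composed with itself K times, since φ_(2t) = φ_t ∘ φ_t. The following is a minimal 2D numpy sketch under those assumptions; function names are illustrative, and real implementations perform the interpolation on the GPU.

```python
import numpy as np

def integrate_svf(svf, steps=7):
    """Integrate a stationary velocity field (2, H, W) by scaling and
    squaring: exp(v) ~ (exp(v / 2**steps)) composed with itself `steps`
    times; for tiny fields, displacement ~ velocity."""
    disp = svf / (2 ** steps)
    for _ in range(steps):
        disp = compose(disp, disp)  # phi_{2t} = phi_t o phi_t
    return disp

def compose(a, b):
    """Displacement-field composition (a o b)(x) = a(x + b(x)) + b(x)."""
    _, h, w = a.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    pts = grid + b  # sample locations x + b(x)
    out = np.empty_like(a)
    for c in range(2):
        out[c] = bilinear(a[c], pts[0], pts[1]) + b[c]
    return out

def bilinear(img, y, x):
    """Bilinear interpolation of img at float coords (y, x), edge-clamped."""
    y = np.clip(y, 0, img.shape[0] - 1)
    x = np.clip(x, 0, img.shape[1] - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, img.shape[0] - 1)
    x1 = np.minimum(x0 + 1, img.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])
```

A quick sanity check: for a spatially constant velocity field, the integrated displacement equals the velocity itself, which the composition recursion reproduces exactly.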


Introduction

Image registration estimates spatial correspondences between image pairs and is a fundamental component of medical image analysis.

Author affiliations: V. Dalca is with the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA, and with the Department of Radiology, Harvard Medical School, Boston, MA 02115, USA; B. E. Iglesias is with the Centre for Medical Image Computing, University College London, WC1E 6BT, UK; V. Dalca is also with the Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA.
