Abstract

Accurate image registration is important for quantifying dynamic brain development in the first year of life. However, deformable registration of infant brain magnetic resonance (MR) images is challenging because: (1) there are large anatomical and appearance variations in these longitudinal images; and (2) there is a one-to-many correspondence in appearance between global anatomical regions and the small local regions within them. In this paper, we present a deformable registration scheme based on global and local label-driven learning with convolutional neural networks (CNNs). Two to-be-registered patches are fed into a U-Net-like regression network, and a dense displacement field (DDF) is obtained by optimizing a loss function defined over many pairs of label patches. Global and local label patch pairs are leveraged to drive registration only during the training stage. During inference, the 3D DDF is obtained by feeding two new MR images into the trained network. The highlight is that the global tissues, i.e., white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), and the local hippocampi are well aligned at the same time without any prior ground-truth deformation. For the local hippocampi in particular, the Dice ratios between the two aligned images are substantially improved. Experimental results are reported for intra-subject and inter-subject registration of infant brain MR images acquired at different time points, yielding higher accuracy in both global and local tissues compared with state-of-the-art registration methods.
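
As a rough illustration of the training scheme summarized above, the following sketch (assuming PyTorch) shows how a regression network can predict a DDF from an image patch pair while a soft-Dice loss over warped label patches drives the optimization. The names RegNet, soft_dice_loss, and warp, the layer sizes, and the toy data are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def soft_dice_loss(warped_label, fixed_label, eps=1e-6):
    """Soft Dice loss (1 - Dice) between a warped moving label patch and a fixed label patch."""
    dims = (1, 2, 3, 4)
    inter = (warped_label * fixed_label).sum(dims)
    union = warped_label.sum(dims) + fixed_label.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()


def warp(moving, ddf):
    """Warp a 3D patch with a dense displacement field of shape (B, 3, D, H, W) by trilinear resampling."""
    b, _, d, h, w = moving.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates used by grid_sample.
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
        indexing="ij")
    grid = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0).expand(b, -1, -1, -1, -1).to(moving.device)
    # Displacements are assumed to be expressed in the same normalized coordinates.
    disp = ddf.permute(0, 2, 3, 4, 1)
    return F.grid_sample(moving, grid + disp, mode="bilinear", align_corners=True)


class RegNet(nn.Module):
    """Toy stand-in for the U-Net-like regression network: maps a (moving, fixed) patch pair to a 3-channel DDF."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1))

    def forward(self, moving, fixed):
        return self.body(torch.cat((moving, fixed), dim=1))


# One label-driven training step: intensity patches drive the prediction, label patches drive the loss.
net = RegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
moving_img = torch.rand(1, 1, 32, 32, 32)                   # toy intensity patches
fixed_img = torch.rand(1, 1, 32, 32, 32)
moving_lbl = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()   # toy label patches (e.g. a tissue or hippocampus mask)
fixed_lbl = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()

ddf = net(moving_img, fixed_img)                            # dense displacement field
loss = soft_dice_loss(warp(moving_lbl, ddf), fixed_lbl)     # labels enter only through the loss
opt.zero_grad()
loss.backward()
opt.step()
```

The point mirrored from the abstract is that segmentation labels enter only the loss: at inference time only the two intensity images are fed to the trained network, and no label maps or ground-truth deformations are required.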
