Accurately analyzing the rapid structural evolution of the human brain in the first year of life is a key step in early brain development studies, and it requires accurate deformable image registration. However, due to (a) dynamic appearance and (b) large anatomical changes, very few methods in the literature work well for registering two infant brain MR images acquired at two arbitrary developmental phases, such as birth and one year of age. To address these challenges, we propose a learning-based registration method that can handle both the anatomical structure changes and the appearance changes between two infant brain MR images separated by a possible time gap. Specifically, in the training stage, we employ multi-output random forest regression with an auto-context model to learn the evolution of anatomical shape and appearance from a training set of longitudinal infant images. To make the learning procedure more robust, we further harness multimodal MR imaging information. Then, in the testing stage, to register two new infant images scanned at two different developmental phases, the learned model predicts both the deformation field and the appearance changes between the images under registration. After that, it becomes much easier to deploy any conventional image registration method to complete the remaining registration, since the above-mentioned challenges for state-of-the-art registration methods have been largely addressed. We have applied the proposed method to intersubject registration of infant brain MR images acquired at 2 weeks, 3 months, 6 months, and 9 months of age against images acquired at 12 months of age. Promising results have been achieved in terms of registration accuracy, compared with the non-learning-based counterpart registration methods.
The proposed learning-based registration method tackles the challenges of registering infant brain images acquired in the first year of life by leveraging multi-output random forest regression with an auto-context model, which learns the evolution of shape and appearance from a training set of longitudinal infant images. Thus, for a new infant image, both its deformation field to the template and its template-like appearance can be predicted by the learned models. We have extensively compared our method with state-of-the-art deformable registration methods, as well as with multiple variants of our method; the results show that our method achieves higher accuracy even in difficult cases with large appearance and shape changes between the subject and template images.
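The core prediction step described above can be illustrated with a minimal sketch: a multi-output random forest that regresses a per-voxel 3-D displacement vector from patch-based appearance features. This is not the authors' implementation; the feature construction, data, and parameters below are illustrative assumptions, and it omits the auto-context iterations and the appearance-prediction branch.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# multi-output random forest regression mapping per-voxel appearance
# features to a 3-D displacement vector toward the template.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: one row per voxel; in practice the
# features could be intensity patches from multimodal MR images.
n_voxels, n_features = 500, 27          # e.g. 3x3x3 intensity patches
X_train = rng.normal(size=(n_voxels, n_features))
# Targets: displacement vectors (dx, dy, dz) toward the template.
y_train = rng.normal(size=(n_voxels, 3))

# scikit-learn's RandomForestRegressor handles multi-output targets
# natively, so all three displacement components are regressed jointly.
# An auto-context model would iterate this step, appending the previous
# prediction to the feature vector before retraining.
forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(X_train, y_train)

X_test = rng.normal(size=(10, n_features))
pred = forest.predict(X_test)           # shape (10, 3): one vector per voxel
print(pred.shape)
```

The predicted displacement field would then serve as an initialization, after which a conventional deformable registration method refines the remaining (much smaller) misalignment.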