Brain segmentation from neonatal MRI is a challenging task, owing to rapid changes in the shape of cerebral structures and variations in signal intensities that reflect the ongoing gestational process. In this context, there is a clear need for segmentation techniques that are robust to variations in image contrast and in the spatial configuration of anatomical structures. In this work, we evaluate the potential of synthetic learning, in which a contrast-independent model is trained on synthetic images generated from the ground-truth labels of very few subjects. We base our experiments on the dataset released by the developing Human Connectome Project, which provides high-quality images for more than 700 babies aged between 26 and 45 weeks post-conception. First, we confirm the impressive performance of a standard U-Net trained on a few volumes, but show that such models learn intensity-related features specific to the training domain. We then verify the robustness of the synthetic learning approach to variations in image contrast, although we observe a clear influence of the baby's age on its predictions. We improve the performance of this model by enriching the synthetic training set with realistic motion artifacts and an over-segmentation of the white matter. Based on extensive visual assessment, we argue that the better performance of the model trained on real T2w data may be due to systematic errors in the ground truth. We propose an original experiment showing that learning from real data reproduces any systematic bias affecting the training set, whereas synthetic models can avoid this limitation. Overall, our experiments confirm that synthetic learning is an effective solution for segmenting neonatal brain MRI. Our adapted synthetic learning approach combines key features that will be instrumental for large multisite studies and clinical applications.
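To make the core idea of synthetic learning concrete, the sketch below shows how a random-contrast training image can be drawn from a ground-truth label map, in the spirit of SynthSeg: per-label intensities are sampled at random, so the trained network cannot rely on any fixed contrast. This is only a minimal illustration; the function name, intensity ranges, and bias-field model are assumptions for the sketch, not the authors' exact pipeline.

```python
# Minimal sketch of synthetic-image generation for "synthetic learning"
# (SynthSeg-style): sample a random Gaussian intensity per label, then
# corrupt the result with a smooth bias field. NumPy/SciPy only.
# All numeric ranges and the 4x4x4 bias grid are illustrative assumptions.
import numpy as np
from scipy.ndimage import zoom

def synthesize_image(labels: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Draw one random-contrast training image from a 3D integer label map."""
    image = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        # Random mean/std per label: the network never sees a fixed
        # contrast, which is what makes it contrast-independent.
        mask = labels == lab
        image[mask] = rng.normal(rng.uniform(0.0, 255.0),
                                 rng.uniform(1.0, 25.0),
                                 size=int(mask.sum()))
    # Smooth multiplicative bias field, approximated by upsampling a
    # small random grid to the full image size.
    coarse = rng.normal(0.0, 0.3, size=(4, 4, 4))
    bias = np.exp(zoom(coarse, np.asarray(labels.shape) / 4.0, order=3))
    image = np.clip(image * bias, 0.0, None)
    return image / image.max()

# Usage: each training step pairs a freshly synthesized image with the
# same (fixed) label map as its target segmentation.
# labels = load_dhcp_label_map(...)   # hypothetical loader
# rng = np.random.default_rng(0)
# x, y = synthesize_image(labels, rng), labels
```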