Abstract

This paper introduces a new adaptive atlas-based framework for the automated segmentation of different brain structures from infant diffusion tensor images (DTI). To model the brain images and their desired region maps, we use a joint Markov-Gibbs random field (MGRF) model that accounts for three image descriptors: (i) a 1st-order visual appearance descriptor that captures the empirical distribution of features extracted from the DTI, (ii) an adaptive shape model, and (iii) a 3D spatially invariant 2nd-order MGRF homogeneity descriptor. The 1st-order visual appearance descriptor is accurately modeled using a linear combination of discrete Gaussians (LCDG) with positive and negative components. The proposed adaptive shape model is constructed from a prior atlas database built from a subset of co-aligned training data sets and is adapted during segmentation under the guidance of the visual appearance characteristics of several DTI features. To accurately account for the large inhomogeneity of infant brains, the homogeneity descriptor is modeled by a 2nd-order translation- and rotation-invariant MGRF of region labels with analytically estimated potentials. The high accuracy of our segmentation approach was confirmed by testing it on 10 in-vivo infant DTI brain data sets using three metrics: the Dice similarity coefficient, the 95-percentile modified Hausdorff distance, and the absolute brain volume difference.
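
The three reported evaluation metrics are standard and can be reproduced, for example, with the following minimal NumPy/SciPy sketch; the function names, the boundary-extraction step, and the distance-transform approximation of the 95-percentile Hausdorff distance are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(seg, gt):
    """Dice similarity coefficient between two 3D boolean masks."""
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

def _surface(mask):
    """Boundary voxels of a boolean mask (mask minus its erosion)."""
    return mask & ~binary_erosion(mask)

def hausdorff_95(seg, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile of symmetric surface-to-surface distances (mm)."""
    seg_surf, gt_surf = _surface(seg), _surface(gt)
    # Distance from every voxel to the nearest boundary voxel of the other mask.
    dt_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dt_to_seg = distance_transform_edt(~seg_surf, sampling=spacing)
    dists = np.hstack([dt_to_gt[seg_surf], dt_to_seg[gt_surf]])
    return np.percentile(dists, 95)

def abs_volume_difference(seg, gt, spacing=(1.0, 1.0, 1.0)):
    """Absolute volume difference between the two masks (mm^3)."""
    voxel_volume = np.prod(spacing)
    return abs(int(seg.sum()) - int(gt.sum())) * voxel_volume
```

In practice, `seg` and `gt` would be 3D boolean arrays (e.g., the automated and manually delineated brain-structure masks) with `spacing` taken from the DTI voxel dimensions.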
