In this paper, we propose a graph self-construction and fusion network (GSCFN) for semi-supervised brain tissue segmentation in Magnetic Resonance Imaging (MRI) that fuses multiple types of image features. Compared with a single feature, multiple features provide complementary information and can yield a graph representation with markedly greater discriminative power. Achieving this requires solving two problems: effectively inferring a graph from a Magnetic Resonance (MR) image so that it implicitly encodes the segmentation information, and fully leveraging the various features. The proposed GSCFN addresses both through a two-part framework. In the first part, a graph self-construction network builds multiple graph representations of an MR image, one per feature type. In the second, a multi-graph convolution network fuses the multiple graphs and features and classifies the supervoxels, which are treated as graph nodes. Experiments on the BrainWeb18 dataset and the Internet Brain Segmentation Repository 18 dataset validate the superiority of our scheme over approaches based on a single feature type, as well as over other state-of-the-art methods. An ablation study indicates that the proposed GSCFN produces more accurate and reliable segmentation by seamlessly integrating multiple types of features.
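To make the multi-graph fusion idea concrete, the following is a minimal sketch of one possible forward pass: one graph-convolution layer per (graph, feature) pair, with node embeddings averaged across graphs before classifying each supervoxel. All function names, the averaging-based fusion, and the use of standard symmetric adjacency normalization are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def normalize_adj(A):
    # Standard symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}
    # (an assumption; the paper may normalize its graphs differently)
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)  # degrees are >= 1 because of the added self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    # One graph-convolution layer: propagate node features, then ReLU
    return np.maximum(A_norm @ X @ W, 0.0)

def fuse_and_classify(graphs, features, weights, W_out):
    # One GCN layer per (graph, feature) pair; fuse by averaging node
    # embeddings across graphs (a simple stand-in for the fusion network)
    embeddings = [gcn_layer(normalize_adj(A), X, W)
                  for A, X, W in zip(graphs, features, weights)]
    H = np.mean(embeddings, axis=0)
    logits = H @ W_out
    return logits.argmax(axis=1)  # predicted tissue class per supervoxel

# Toy usage: 5 supervoxels, two feature-specific graphs, 3 tissue classes
rng = np.random.default_rng(0)
n, f, h, c = 5, 4, 6, 3
A1 = (rng.random((n, n)) > 0.5).astype(float)
A1 = np.maximum(A1, A1.T)  # make the adjacency symmetric
A2 = (rng.random((n, n)) > 0.5).astype(float)
A2 = np.maximum(A2, A2.T)
X1, X2 = rng.random((n, f)), rng.random((n, f))       # two feature types
W1, W2 = rng.random((f, h)), rng.random((f, h))       # per-graph layer weights
W_out = rng.random((h, c))                            # shared classifier
pred = fuse_and_classify([A1, A2], [X1, X2], [W1, W2], W_out)
```

In practice the layer weights would be learned end-to-end from the labeled supervoxels, with the unlabeled ones contributing through graph propagation in the semi-supervised setting.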