The accurate segmentation of brain tissue in Magnetic Resonance Imaging (MRI) slices is essential for assessing neurological conditions and brain diseases. However, segmenting MRI slices is challenging because of the low contrast between different brain tissues and the partial volume effect. 2-Dimensional (2-D) convolutional networks cannot handle such volumetric image data well because they overlook the spatial information between MRI slices. Although 3-Dimensional (3-D) convolutions capture volumetric spatial information, they have not been fully exploited to enhance the representational ability of deep networks; moreover, they may lead to overfitting when training data are insufficient. In this paper, we propose a novel convolutional mechanism, termed Rubik convolution, to capture multi-dimensional information between MRI slices. Rubik convolution rotates the axes of a set of consecutive slices, enabling 2-D convolution kernels to extract features from each axial plane simultaneously. Next, the feature maps are rotated back and fused into multi-dimensional information by the Max-View-Maps operation. Furthermore, we propose an efficient 2-D convolutional network, namely Rubik-Net, in which residual connections and a bottleneck structure are used to enhance information transmission and reduce the number of network parameters. Rubik-Net shows promising results on the iSeg2017, iSeg2019, IBSR, and BrainWeb datasets in terms of segmentation accuracy. In particular, we achieved the best results in 95th percentile Hausdorff distance and average surface distance for cerebrospinal fluid segmentation on the most challenging iSeg2019 dataset. The experiments indicate that Rubik-Net improves the accuracy and efficiency of medical image segmentation. Moreover, Rubik convolution can be easily embedded into existing 2-D convolutional networks.
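The rotate-convolve-rotate-back mechanism described above can be sketched in plain NumPy. This is a minimal illustration under our own assumptions (a single-channel volume, one shared 2-D kernel, and element-wise maximum as the fusion step); the names `conv2d_same` and `rubik_conv` are hypothetical and do not come from the paper's released code.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D cross-correlation of one slice x with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def rubik_conv(volume, kernel):
    """Sketch of Rubik convolution: permute the volume axes so the same 2-D
    kernel sees axial, coronal, and sagittal planes, convolve slice-wise,
    rotate each feature map back to the original layout, and fuse the three
    views with an element-wise maximum (the Max-View-Maps step)."""
    views = []
    # Three axis orderings: slices taken along depth, height, and width.
    for axes in [(0, 1, 2), (1, 0, 2), (2, 0, 1)]:
        v = np.transpose(volume, axes)
        feat = np.stack([conv2d_same(v[s], kernel) for s in range(v.shape[0])])
        # Invert the permutation so the feature map aligns with the input.
        views.append(np.transpose(feat, np.argsort(axes)))
    return np.maximum.reduce(views)
```

With an identity kernel (a 3x3 kernel whose only nonzero entry is a central 1), every view reproduces the input volume, so the fused output equals the input; with learned kernels, each view contributes plane-specific features before fusion.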