Abstract

Automated segmentation of brain tumors is of great importance for diagnosis and treatment planning. Although manual segmentation can achieve high accuracy, it is time- and labor-intensive and unsuitable for batch processing. Recently, deep neural networks represented by U-Net have been successfully applied to medical image segmentation and have achieved state-of-the-art performance. However, no single segmentation method produces desirable results for all images; most methods are optimized for specific anatomical regions and imaging modalities and therefore generalize poorly. In this paper, we propose a U-Net-like architecture that integrates residual blocks, densely connected convolutions, and auxiliary outputs for multi-class brain tumor segmentation. First, residual blocks are used in the contracting and expanding paths to extract deep semantic information from the images. Second, densely connected convolutions are applied in the last layer of the contracting path to strengthen feature propagation and feature reuse. Three auxiliary outputs are then employed as deep supervision before each upsampling step to reduce the risk of vanishing gradients and to mitigate overfitting. Finally, we use weighted cross entropy and generalized Dice loss as a fused loss function to address the class imbalance in brain tumor data. The proposed model is evaluated on the BraTS 2018 training dataset and achieves state-of-the-art performance, with average Dice scores of 0.884, 0.806, and 0.702 for the whole tumor, tumor core, and enhancing tumor core, respectively.
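The fused loss described above combines a weighted cross-entropy term with the generalized Dice loss. Below is a minimal sketch of such a combination, assuming a PyTorch implementation with softmax outputs over the tumor classes; the function name fused_loss, the class-weight argument, and the smoothing constant eps are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def fused_loss(logits, target, class_weights, eps=1e-6):
    """Weighted cross entropy + generalized Dice loss (illustrative sketch).

    logits:        (N, C, H, W) raw network outputs
    target:        (N, H, W) integer class labels in [0, C)
    class_weights: (C,) per-class weights for the cross-entropy term (assumed)
    """
    # Weighted cross entropy over all voxels
    wce = F.cross_entropy(logits, target, weight=class_weights)

    # Generalized Dice loss: per-class weights are the inverse squared
    # class volume, so rare classes (e.g. enhancing tumor) contribute
    # on a par with large ones
    probs = F.softmax(logits, dim=1)                       # (N, C, H, W)
    onehot = F.one_hot(target, num_classes=logits.shape[1])
    onehot = onehot.permute(0, 3, 1, 2).float()            # (N, C, H, W)

    dims = (0, 2, 3)                                       # sum over batch and space
    w = 1.0 / (onehot.sum(dims) ** 2 + eps)                # (C,)
    intersection = (probs * onehot).sum(dims)
    union = (probs + onehot).sum(dims)
    gdl = 1.0 - 2.0 * (w * intersection).sum() / ((w * union).sum() + eps)

    # Fused objective: equal weighting of the two terms is an assumption
    return wce + gdl
```

Summing the two terms lets the cross-entropy component drive voxel-wise accuracy while the generalized Dice component counteracts the heavy class imbalance between healthy tissue and the small tumor sub-regions.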
