Automatic image segmentation is an indispensable step in medical image analysis, playing an important role in computer-assisted radiotherapy, disease diagnosis, and treatment-effect evaluation. Medical image segmentation is made considerably harder by the blurry nature of medical images, the complex shapes of anatomical objects, and the presence of noise. In recent years, segmentation methods based on deep learning, especially convolutional neural networks, have made great progress in improving the accuracy of medical image segmentation. However, these methods remain poor at distinguishing similar objects in different surroundings because they make insufficient use of the local context information of images during feature extraction. To address this problem, this paper proposes a deep neural network (LCP-Net) that perceives multi-scale context information in images. LCP-Net improves the feature encoder's use of context information through Parallel Dilated Convolution (PDC) and Local Context Embedding (LCE), which yield feature maps rich in contextual information. In addition, to improve segmentation accuracy on small objects and alleviate oscillation during training, we propose a novel improved cross-entropy loss (DDCLoss) that adaptively adjusts the loss weight of each pixel according to the certainty and deviation distance of its predicted value, enabling the model to focus on optimizing sample points that have low certainty and tend to be mislabeled. Experimental results on three different medical datasets demonstrate that, compared with state-of-the-art medical image segmentation models, the proposed LCP-Net achieves better segmentation performance.
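To make the PDC idea concrete, below is a minimal PyTorch sketch of a parallel dilated convolution block. The abstract does not specify the architecture, so the class name `ParallelDilatedConv`, the branch count, and the dilation rates (1, 2, 4, 8) are illustrative assumptions; the sketch only shows the general technique of aggregating multi-scale context through parallel dilated branches fused by a 1x1 convolution.

```python
import torch
import torch.nn as nn

class ParallelDilatedConv(nn.Module):
    """Sketch of a parallel dilated convolution (PDC) block: several 3x3
    convolutions with different dilation rates process the same input in
    parallel, so each output location aggregates context over multiple
    receptive-field sizes. Branch count and dilation rates are assumed,
    not taken from the paper."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=d with dilation=d and a 3x3 kernel preserves
                # the spatial size across every branch.
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # A 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)

# Usage: a 64-channel feature map keeps its spatial size but gains
# multi-scale context.
pdc = ParallelDilatedConv(64, 64)
y = pdc(torch.randn(1, 64, 128, 128))  # -> shape (1, 64, 128, 128)
```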
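The abstract does not give the DDCLoss formula, so the sketch below is only one plausible reading of the stated idea, assuming "deviation distance" means how far the true-class probability is from 1 and "certainty" is measured by the (inverse) normalized entropy of the predicted class distribution. The function name `ddc_loss` and the focusing parameter `gamma` are hypothetical, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def ddc_loss(logits, targets, gamma=2.0, eps=1e-7):
    """Illustrative certainty/deviation-weighted cross-entropy.

    NOT the paper's exact DDCLoss: the abstract only states that the
    loss weight should grow for pixels whose prediction is uncertain
    and far from the label, which is what this sketch implements.

    logits:  (N, C, H, W) raw class scores
    targets: (N, H, W) integer class labels
    """
    probs = F.softmax(logits, dim=1)
    # Probability assigned to the true class of each pixel.
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp(eps, 1.0)
    # "Deviation distance": how far the prediction is from the label.
    deviation = 1.0 - p_true
    # "Uncertainty": normalized entropy of the pixel's class
    # distribution; low certainty (high entropy) gets a larger weight.
    entropy = -(probs * probs.clamp(min=eps).log()).sum(dim=1)
    uncertainty = entropy / torch.log(torch.tensor(float(probs.size(1))))
    # Combined adaptive weight; gamma sharpens the focus on hard pixels,
    # in the spirit of focal loss.
    weight = (deviation * uncertainty) ** gamma
    return (-(weight * p_true.log())).mean()
```

Under this reading, confidently correct pixels receive near-zero weight, so gradients concentrate on low-certainty, frequently mislabeled points such as small-object boundaries, matching the behavior the abstract attributes to DDCLoss.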