Abstract

Liver cancer is one of the leading causes of cancer death. Accurate, automatic liver tumor segmentation methods are urgently needed in clinical practice. Fully Convolutional Networks and the U-Net framework have achieved good results in medical image segmentation tasks, but there is still room for improvement. The traditional U-Net extracts a large number of low-level features, yet these detailed features cannot be transmitted to deeper layers, resulting in poor segmentation ability. Therefore, this paper proposes a novel liver tumor segmentation network with contextual parallel attention and dilated convolution, called CPAD-Net. The proposed network applies a subsampling module that reduces dimensionality like max-pooling but without losing detailed features. CPAD-Net employs a contextual parallel attention module at the skip connections. This module fuses contextual multi-scale features and extracts channel and spatial features in parallel; these features are concatenated with deep features to narrow the semantic gap and add detailed information. Hybrid dilated convolution and double dilated convolution are used in the encoding and decoding stages to expand the network's receptive field, and dropout is added after each hybrid dilated convolution block to prevent overfitting. The efficacy of the proposed network is demonstrated by extensive experiments on two public datasets (LiTS2017 and 3Dircadb-01) and a clinical dataset from the Affiliated Hospital of Hebei University, on which it achieved Dice scores of 74.2%, 73.7%, and 73.26%. The experimental results show that the proposed network outperforms most segmentation networks.
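The receptive-field benefit of hybrid dilated convolution can be illustrated with simple arithmetic: for a stack of stride-1 convolutions, each layer with kernel size k and dilation d adds (k − 1)·d pixels of context. A minimal sketch, assuming 3×3 kernels and the illustrative dilation rates [1, 2, 5] (these specific rates are an assumption, not taken from the paper):

```python
# Receptive field of a stack of stride-1 dilated convolutions.
# Each kernel of size k with dilation d adds (k - 1) * d pixels of context.
# NOTE: the dilation rates [1, 2, 5] below are a common hybrid-dilated-
# convolution choice used for illustration, not necessarily CPAD-Net's.

def receptive_field(dilations, kernel_size=3):
    """Receptive field (in pixels) of stacked stride-1 dilated convs."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

print(receptive_field([1, 1, 1]))  # three plain 3x3 convs -> 7
print(receptive_field([1, 2, 5]))  # hybrid dilated stack  -> 17
```

With the same number of layers and parameters, the hybrid dilated stack more than doubles the receptive field, which is the motivation for using it in the encoder and decoder.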
