Abstract

Many existing semantic segmentation algorithms for abdominal CT images suffer from spatial inconsistency between the benchmark image and the segmentation result. To address this, an improved model based on the DeepLab-v3 framework is proposed, with a Pix2pix network introduced as the generative adversarial model. The proposed model realizes a segmentation framework that combines deep features with multi-scale semantic features. To improve the generalization ability and training accuracy of the model, this paper combines the traditional multi-class cross-entropy loss function with the content loss of the generator output and the adversarial loss of the discriminator output. Extensive qualitative and quantitative experiments show that the proposed semantic segmentation algorithm outperforms existing algorithms and improves segmentation efficiency while ensuring the spatial consistency of semantic segmentation for abdominal CT images.
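The abstract describes a weighted combination of three losses: multi-class cross-entropy, a content loss on the generator output, and an adversarial loss from the discriminator. A minimal plain-Python sketch of such a combination is shown below; the weights (`w_ce`, `w_content`, `w_adv`), the choice of L1 for the content loss, and the non-saturating form of the adversarial loss are assumptions for illustration, not the paper's exact formulation:

```python
import math

def cross_entropy(probs, labels):
    """Mean multi-class cross-entropy over pixels.
    probs: per-pixel class-probability lists; labels: true class indices."""
    return -sum(math.log(p[y] + 1e-12) for p, y in zip(probs, labels)) / len(labels)

def l1_content_loss(gen_out, target):
    """Mean absolute difference between generator output and benchmark image."""
    return sum(abs(g - t) for g, t in zip(gen_out, target)) / len(target)

def adversarial_loss(d_fake):
    """Generator adversarial loss from discriminator scores on generated maps
    (non-saturating form, assumed here for illustration)."""
    return -sum(math.log(d + 1e-12) for d in d_fake) / len(d_fake)

def total_loss(probs, labels, gen_out, target, d_fake,
               w_ce=1.0, w_content=10.0, w_adv=1.0):
    # Weighted combination of the three terms described in the abstract;
    # the weight values here are placeholders.
    return (w_ce * cross_entropy(probs, labels)
            + w_content * l1_content_loss(gen_out, target)
            + w_adv * adversarial_loss(d_fake))
```

In a real pipeline these terms would be computed on tensors and backpropagated jointly, so the cross-entropy term keeps per-pixel classification accurate while the adversarial term penalizes globally implausible label maps.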

Highlights

  • Primary liver cancer, especially hepatocellular carcinoma, is one of the most common malignant tumors and one of the leading causes of cancer death worldwide

  • Deep learning algorithms represented by convolutional neural networks (CNNs) have achieved significant performance improvements in image semantic segmentation, but these methods often suffer from spatial inconsistency between the benchmark template and the segmentation result, which is partly attributable to the random error introduced by predicting each label variable independently

  • Based on the DeepLab-v3 framework, this paper introduces a Pix2pix network as the generative adversarial model and realizes a segmentation architecture based on deep features and multi-scale semantic features


Summary

INTRODUCTION

Primary liver cancer, especially hepatocellular carcinoma, is one of the most common malignant tumors and one of the leading causes of cancer death worldwide. Deep learning algorithms represented by convolutional neural networks (CNNs) have achieved significant performance improvements in image semantic segmentation, but these methods often suffer from spatial inconsistency between the benchmark template and the segmentation result, which is partly attributable to the random error introduced by predicting each label variable independently. The DeepLab method introduced a fully connected conditional random field in the last layer of the deep network and combined the responses at different scales to enhance target-localization performance. This approach can be widely used to combine high-level deep features with low-level local features [5].
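The idea of combining responses at different scales rests on atrous (dilated) convolution, where the kernel taps are spaced apart to enlarge the receptive field without adding parameters. The toy 1-D sketch below is an illustrative analogue only; DeepLab's actual ASPP module applies 2-D dilated convolutions over feature maps and fuses them with learned weights:

```python
def dilated_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced
    `rate` samples apart, widening the receptive field."""
    span = (len(kernel) - 1) * rate
    return [sum(k * x[i + j * rate] for j, k in enumerate(kernel))
            for i in range(len(x) - span)]

def multi_scale_response(x, kernel, rates=(1, 2, 4)):
    """Average the responses from several dilation rates, cropped to a
    common length; a toy analogue of fusing multi-scale semantic features."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    m = min(len(o) for o in outs)
    return [sum(o[i] for o in outs) / len(outs) for i in range(m)]
```

Each dilation rate sees a different context width over the same input, so averaging (or, in DeepLab, learning to weight) the responses lets the model localize small structures while still using wide context.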

RELATED WORKS
SEGMENTATION MODEL
WEIGHTED LOSS FUNCTION
TRAINING PROCESS
PARAMETER SETTING AND EVALUATION CRITERIA
CONCLUSION
