Abstract

For large-scale CT images, the automatic segmentation of nodules is the foundation for diagnosing various pulmonary diseases. Most existing methods have made great progress in pulmonary segmentation, but because vessels and nodules have similar structures in 2D, they lack the ability to extract discriminative features, and their accuracy remains unsatisfactory. The task is further complicated by the scarcity of voxel-level labels and the lack of training strategies that balance foreground and background. To solve these problems, a semi-supervised 3D segmentation network for pulmonary nodules is proposed. First, a multi-view feature-extraction framework is designed to address the high similarity between nodules and other tissues: features are extracted from three different views to improve precision, and three parallel dilated convolutions are added for multi-scale feature extraction, so that spatial and semantic information at different scales can be better captured. Second, to address the problem of identifying difficult samples, a hybrid loss function with an adjustment factor is proposed; it magnifies the loss of difficult samples so that the network pays them more attention, and a new regularization term is introduced to avoid overfitting. The entire network is trained on a small labeled CT data set through an improved semi-supervised learning strategy, optimized with a new self-paced regularization. Experimental results show that the average sensitivity of the proposed method is 95.81%, and that it is superior to other methods in precision and Dice index, especially when labeled data are scarce.
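The abstract's exact loss formulation is not given here, but a hybrid loss that "magnifies the loss of difficult samples" via an adjustment factor and balances foreground against background can be sketched in the spirit of a focal term combined with a soft Dice term. The function name, the choice of `gamma` as the adjustment factor, and the equal weighting of the two terms are assumptions for illustration, not the paper's definitive implementation:

```python
import numpy as np

def hybrid_loss(pred, target, gamma=2.0, eps=1e-6):
    """Hypothetical hybrid loss: a focal-style cross-entropy term whose
    adjustment factor gamma magnifies hard (misclassified) voxels, plus a
    soft Dice term that is insensitive to foreground/background imbalance.

    pred:   predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask (same shape as pred)
    """
    pred = np.clip(pred, eps, 1 - eps)
    # p_t is the probability assigned to the true class of each voxel.
    p_t = np.where(target == 1, pred, 1 - pred)
    # (1 - p_t)^gamma is small for easy voxels and close to 1 for hard
    # ones, so difficult samples dominate the average.
    focal = -np.mean((1 - p_t) ** gamma * np.log(p_t))
    # Soft Dice term: overlap-based, so the rare foreground class is not
    # drowned out by the abundant background.
    inter = np.sum(pred * target)
    dice = (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return focal + (1 - dice)
```

Under this sketch, a prediction that misclassifies voxels yields a strictly larger loss than a confident correct one, which is the behavior the abstract attributes to the adjustment factor.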

Highlights

  • Due to the tedious process and weak generalization of segmentation for pulmonary nodules based on artificial design features [1], algorithms based on deep learning have been widely studied

  • This paper proposes an improved method for pulmonary nodule segmentation in CT images

  • Since the LIDC-IDRI dataset is relatively small, a regularization term is proposed to avoid overfitting

Summary

INTRODUCTION

Due to the tedious process and weak generalization of pulmonary nodule segmentation based on hand-designed features [1], algorithms based on deep learning have been widely studied. On the basis of extracting pulmonary nodules' features from three views, the network obtains features at different scales through parallel dilated convolutions; it thus makes full use of the three-view information of 3D nodules and improves detection accuracy. If the network used a fixed scale for feature extraction, a large amount of redundant information could be generated when detecting small nodules, and the spatial features of large nodules could be difficult to extract. To solve this problem, three dilated convolutions are designed for multi-scale extraction, improving the accuracy of the RPN-like framework. A convolution with a kernel size of 1 × 1 × 1 is adopted to reduce the number of channels; on this basis, a sliding window of size 3 × 3 × 3 is applied to the feature map to determine whether the original image region corresponding to its central pixel contains nodule tissue. Rseg and Rgt denote the areas of the segmentation result and the ground truth, respectively.

CONTRAST OF FEATURE EXTRACTION WITH DIFFERENT VIEWS
Findings
CONCLUSION