Abstract
This paper proposes a new convolutional neural network (CNN) with multi-scale processing for detecting ground-glass opacity (GGO) nodules in 3D computed tomography (CT) images, referred to as PiaNet. PiaNet consists of a feature-extraction module and a prediction module. The former is constructed by introducing pyramid multi-scale source connections into a contracting-expanding structure. In addition, a new multi-receptive-field convolution block (MRCB) is presented that fuses convolutions with kernels of varying sizes to better capture features at each scale. The latter module includes a bounding-box regressor and a classifier that are employed to simultaneously recognize GGO nodules and estimate bounding boxes at multiple scales. To train the proposed PiaNet, a two-stage transfer learning strategy is developed. In the first stage, the feature-extraction module is embedded into a classifier network that is trained on a large dataset of GGO and non-GGO patches, which are generated via data augmentation from a small number of annotated CT scans. In the second stage, the pretrained feature-extraction module is loaded into PiaNet, and PiaNet is then fine-tuned using the annotated CT scans. We evaluate the proposed PiaNet on the LIDC-IDRI dataset. The experimental results demonstrate that our method outperforms state-of-the-art counterparts, including the Subsolid CAD and Aidence systems as well as the CPM-Net, S4ND, and GA-SSD methods. PiaNet achieves a sensitivity of 93.6% with only one false positive per scan.
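To illustrate the idea behind the multi-receptive-field convolution block, the sketch below shows a minimal PyTorch module that runs parallel 3D convolutions with different kernel sizes and fuses their outputs. The class name `MRCB`, the kernel sizes, the channel split, and the concatenation-plus-1×1×1 fusion are assumptions for illustration only, not the authors' exact design.

```python
# Hypothetical sketch of a multi-receptive-field convolution block:
# parallel 3D convolutions with different kernel sizes, fused by a 1x1x1 convolution.
import torch
import torch.nn as nn

class MRCB(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        branch_channels = out_channels // len(kernel_sizes)
        # One branch per kernel size; padding keeps the spatial resolution unchanged.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(in_channels, branch_channels, k, padding=k // 2),
                nn.BatchNorm3d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        # Fuse the concatenated branch outputs back to the target channel count.
        self.fuse = nn.Conv3d(branch_channels * len(kernel_sizes), out_channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

# Example: a single-channel 32x32x32 CT patch (batch size 1).
x = torch.randn(1, 1, 32, 32, 32)
y = MRCB(in_channels=1, out_channels=48)(x)
print(y.shape)  # torch.Size([1, 48, 32, 32, 32])
```

Fusing branches with different receptive fields in this way lets a single block respond to nodules of varying apparent size, which is the motivation the abstract gives for the MRCB.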