Abstract
Computed tomography (CT) images are routinely used to aid the early diagnosis of lung nodules, so accurate lung nodule segmentation is essential for image-driven analysis tasks. However, lung nodules are heterogeneous across types, and the visual similarity between nodule pixels and surrounding non-nodule pixels makes automatic segmentation difficult. In this article, we propose a fast end-to-end framework, the Fast Multi-crop Guided Attention (FMGA) network, to accurately segment lung nodules in CT images. Our method takes multi-crop nodule slices as input to aggregate contextual information (2D context from the current slice and 3D context from adjacent axial slices), and exploits a global convolutional layer for nodule pixel embedding matching. To further exploit the information carried by border pixels near the nodule margin, we develop a weighted loss function that facilitates training by balancing the pixel classes around the nodule margin. Moreover, we use a central pooling layer to propagate contextual features among neighboring pixels. We evaluate our method on LIDC, the largest public lung CT dataset, and on lung CT data collected from a local hospital in Wuhan. Experimental results show that FMGA outperforms state-of-the-art methods. In addition, we present an ablation study and visualization results to illustrate how each component contributes to accurate lung nodule segmentation.
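The abstract does not give the exact form of the margin-balanced weighted loss, but the idea of up-weighting pixels near the nodule margin can be illustrated with a minimal sketch. The function name, the per-pixel margin mask, and the `margin_weight` parameter below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def margin_weighted_bce(pred, target, margin_mask, margin_weight=2.0, eps=1e-7):
    """Binary cross-entropy in which pixels flagged as lying near the
    nodule margin receive a larger weight (hypothetical formulation,
    sketching the idea of margin-balanced training described above).

    pred        -- predicted foreground probabilities, array in (0, 1)
    target      -- ground-truth binary labels (1 = nodule pixel)
    margin_mask -- boolean array, True for pixels near the nodule margin
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    # standard per-pixel binary cross-entropy
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    # margin pixels count margin_weight times as much as interior pixels
    weights = np.where(margin_mask, margin_weight, 1.0)
    return float(np.sum(weights * bce) / np.sum(weights))

# Toy usage: the two margin pixels are predicted less confidently,
# so up-weighting them raises the loss relative to uniform weighting.
pred = np.array([0.9, 0.2, 0.8, 0.4])
target = np.array([1.0, 0.0, 1.0, 1.0])
margin_mask = np.array([False, False, True, True])
weighted = margin_weighted_bce(pred, target, margin_mask, margin_weight=3.0)
uniform = margin_weighted_bce(pred, target, margin_mask, margin_weight=1.0)
```

In a real training loop the margin mask would typically be derived from the ground-truth segmentation (e.g. by morphological dilation/erosion of the nodule boundary), which matches the abstract's goal of balancing the classes of pixels around the nodule margin.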
Source: IEEE Transactions on Emerging Topics in Computational Intelligence