Abstract

Lung cancer is one of the most lethal malignant diseases and poses an acute threat to human health and life. Accurate differential diagnosis of lung nodules is a vital step in computed tomography (CT)-based noninvasive screening for lung cancer. Although deep learning-based methods have achieved good results on nodule malignancy prediction, two fundamental challenges remain: insufficient labeled samples and interference from background tissues. Motivated by these facts, a self-supervised transfer learning framework driven by visual attention (STLF-VA) is presented for benign–malignant identification of nodules on chest CT; it uses volumes containing entire nodule objects as inputs to obtain discriminative features. Compared with traditional approaches that transfer 2D models pretrained on natural images or train 3D models from scratch, the proposed STLF-VA method effectively alleviates the dependence on labeled samples by exploiting valuable information from unlabeled 3D CT scans in a coarse-to-fine self-supervised transfer learning fashion. Unlike single attention mechanisms, the multi-view aggregative attention (MVAA) module embedded in the STLF-VA architecture fully recalibrates multi-layer feature maps from multiple attention angles, strengthening robustness to background interference. Moreover, a new dataset, CQUCH-LND, is constructed to evaluate the effectiveness of the STLF-VA model in clinical practice. Experimental results on the clinical CQUCH-LND dataset and the public LIDC-IDRI dataset indicate that the proposed STLF-VA framework achieves more competitive performance than several state-of-the-art nodule classification approaches.
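As a rough illustration of how such multi-view recalibration of 3D feature maps might look, the following PyTorch sketch combines a squeeze-and-excitation-style channel gate with a convolutional spatial gate. The abstract does not specify the MVAA internals, so the module name, layer choices, and reduction ratio here are hypothetical assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-view aggregative attention block for 3D
# feature maps. Assumes two attention "views": a channel gate (squeeze-and-
# excitation style) and a spatial gate; the paper's actual MVAA design may
# differ.
import torch
import torch.nn as nn


class MultiViewAggregativeAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel view: global pooling followed by a bottleneck MLP,
        # producing one gate value per feature channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial view: a single-channel gate map intended to emphasize
        # nodule regions and suppress background tissue.
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recalibrate the feature map from both attention views in sequence.
        x = x * self.channel_gate(x)
        x = x * self.spatial_gate(x)
        return x


if __name__ == "__main__":
    block = MultiViewAggregativeAttention(channels=64)
    volume_features = torch.randn(2, 64, 8, 32, 32)  # (B, C, D, H, W)
    print(block(volume_features).shape)  # torch.Size([2, 64, 8, 32, 32])
```

Applied to feature maps at multiple network depths, a block like this lets the classifier weight nodule-bearing voxels over surrounding tissue before pooling.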
