Abstract

One of the precursors of lung cancer is the presence of lung nodules, and accurate identification of their benign or malignant nature is important for the long-term survival of patients. With the development of artificial intelligence, deep learning has become the main method for lung nodule classification. However, successful deep learning models usually require a large number of parameters and carefully annotated data. In the field of medical imaging, the availability of such data is often limited, so deep networks frequently perform poorly on new test data. In addition, models based on a linearly stacked single-branch structure hinder the extraction of multi-scale features and reduce classification performance. To address these problems, we propose a lightweight interleaved fusion integration network with multi-scale feature learning modules, called MIFNet. MIFNet consists of a series of MIF blocks that efficiently combine multiple convolutional layers containing 1 × 1 and 3 × 3 convolutional kernels with shortcut links to extract multi-scale features at different levels and preserve them throughout the block. The model has only 0.7 M parameters and requires little computational cost and memory compared with many ImageNet-pretrained CNN architectures. We conducted exhaustive experiments with the proposed MIFNet on the reconstructed LUNA16 dataset, achieving 94.82% accuracy, 97.34% F1 score, 96.74% precision, 97.10% sensitivity, and 84.75% specificity. The results show that our deep integrated network achieves higher performance than pretrained deep networks and state-of-the-art methods, providing an objective and efficient auxiliary method for accurately classifying lung nodules in medical images.
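The abstract describes MIF blocks as parallel 1 × 1 and 3 × 3 convolutions fused with shortcut links so that multi-scale features are preserved throughout the block. The following is only a minimal sketch of such a block in PyTorch; the paper's exact layer widths, normalization, and fusion rule are not given here, so the choices below (channel counts, batch normalization, concatenation as the fusion operation, and the names MIFBlock and branch_channels) are assumptions for illustration.

```python
# Hypothetical sketch of an MIF-style block; layer widths, normalization,
# and the fusion rule are assumptions, not the authors' published code.
import torch
import torch.nn as nn


class MIFBlock(nn.Module):
    """Parallel 1x1 and 3x3 convolutions whose outputs are concatenated with a
    shortcut of the input, so features extracted at different scales are
    carried forward through the block rather than overwritten."""

    def __init__(self, in_channels: int, branch_channels: int):
        super().__init__()
        self.conv1x1 = nn.Sequential(
            nn.Conv2d(in_channels, branch_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(branch_channels),
            nn.ReLU(inplace=True),
        )
        self.conv3x3 = nn.Sequential(
            nn.Conv2d(in_channels, branch_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(branch_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the shortcut with both multi-scale branches along the channel axis.
        return torch.cat([x, self.conv1x1(x), self.conv3x3(x)], dim=1)


if __name__ == "__main__":
    # Example: stacking two blocks on a single-channel 64x64 nodule patch
    # (the patch size is illustrative, not taken from the paper).
    x = torch.randn(1, 1, 64, 64)
    block1 = MIFBlock(in_channels=1, branch_channels=16)    # output: 1 + 16 + 16 = 33 channels
    block2 = MIFBlock(in_channels=33, branch_channels=16)   # output: 33 + 16 + 16 = 65 channels
    print(block2(block1(x)).shape)                          # torch.Size([1, 65, 64, 64])
```

Concatenation-based fusion keeps lower-level features available to later layers, which is one plausible way to realize the "preserved throughout the block" behavior the abstract claims while keeping the parameter count small.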