Accurate genotyping of the epidermal growth factor receptor (EGFR) is critical for treatment planning in lung adenocarcinoma. Currently, clinical identification of the EGFR genotype relies heavily on biopsy and sequencing tests, which are invasive and complicated. Recent advances in combining computed tomography (CT) imaging with deep learning have yielded a non-invasive and straightforward way to identify EGFR profiles. However, important limitations remain: 1) most existing methods still require physicians to annotate tumor boundaries, which is time-consuming and prone to subjective error; 2) most existing methods are borrowed directly from the computer vision field and do not sufficiently exploit multi-level features for the final prediction. To address these problems, we propose a Denseformer framework that identifies EGFR mutation status in a truly end-to-end fashion directly from 3D lung CT images. Specifically, we take 3D whole-lung CT images as the input of the neural network without manually labeling the lung nodules. This design is motivated by medical reports that EGFR mutation status is associated not only with the local tumor nodule but also with the microenvironment of the whole lung. In addition, we design a novel Denseformer network to fully exploit the distinctive information across features at different levels. Denseformer is a network architecture that combines the advantages of the convolutional neural network (CNN) and the Transformer. It learns directly from 3D whole-lung CT images, preserving the spatial location information in the CT volumes. To further improve performance, we design a combined Transformer module that employs a Transformer Encoder to globally integrate information from different levels and layers and uses it as the basis for the final prediction. The proposed model was evaluated on a lung adenocarcinoma dataset collected at the Affiliated Hospital of Zunyi Medical University. Extensive experiments demonstrate that the proposed method effectively extracts meaningful features from 3D CT images to make accurate predictions. Compared with other state-of-the-art methods, Denseformer achieves the best performance among deep learning methods that predict EGFR mutation status from CT images alone.
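To make the described design concrete, the following is a minimal sketch of the general idea of pooling multi-level 3D CNN features into tokens and fusing them with a Transformer encoder for whole-lung CT classification. It assumes a PyTorch-style implementation; all module names, channel sizes, layer counts, and the input resolution are illustrative assumptions, not the authors' actual Denseformer code.

```python
# Illustrative sketch only: a toy 3D CNN backbone whose multi-level features are pooled
# into tokens and globally fused by a Transformer encoder, then classified.
import torch
import torch.nn as nn


class MultiLevelCNN3D(nn.Module):
    """Toy 3D CNN backbone that returns feature maps from several depths."""

    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 1  # single-channel CT volume
        for out_ch in channels:
            self.stages.append(nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            ))
            in_ch = out_ch

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # keep every level for later fusion
        return feats


class DenseformerSketch(nn.Module):
    """Multi-level CNN features are pooled into tokens and fused by a Transformer encoder."""

    def __init__(self, channels=(16, 32, 64), embed_dim=64, num_classes=2):
        super().__init__()
        self.backbone = MultiLevelCNN3D(channels)
        # Project each level's pooled descriptor to a common token dimension.
        self.proj = nn.ModuleList([nn.Linear(c, embed_dim) for c in channels])
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, volume):
        feats = self.backbone(volume)                      # list of (B, C_i, D, H, W)
        tokens = [p(f.mean(dim=(2, 3, 4)))                 # global-average-pool each level
                  for p, f in zip(self.proj, feats)]
        tokens = torch.stack(tokens, dim=1)                # (B, num_levels, embed_dim)
        fused = self.encoder(tokens).mean(dim=1)           # globally integrate the levels
        return self.head(fused)                            # mutant vs. wild-type logits


if __name__ == "__main__":
    ct = torch.randn(1, 1, 64, 96, 96)   # (batch, channel, depth, height, width)
    print(DenseformerSketch()(ct).shape)  # torch.Size([1, 2])
```

The sketch only illustrates the fusion pattern described in the abstract (whole-lung 3D input, multi-level CNN features, Transformer-based global integration); the actual Denseformer backbone, token construction, and training details are specified in the full paper.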