Chest X-ray (CXR) imaging is one of the most common diagnostic imaging techniques in clinical practice and is routinely used in radiological examinations to screen for thorax diseases. In this paper, we propose CheXGAT, a novel computer-aided diagnosis (CAD) system based on a hybrid deep learning model that combines a convolutional neural network (CNN) with a graph neural network (GNN) to exploit implicit correlations between thorax diseases for the multilabel chest X-ray image classification task. The proposed CheXGAT framework comprises two main modules: an image representation learning (IRL) module and a graph representation learning (GRL) module. The IRL module learns high-level visual representation features from the CXR image. In the GRL module, a self-attention mechanism aggregates neighborhood features over the graph structure to capture the implicit correlations between thorax diseases. We adopt a data-driven method to build a disease correlation matrix that guides the message-passing and aggregation process for the nodes in the GRL module. After end-to-end training, the GRL module exploits these inter-disease correlations to improve diagnostic performance. We performed experiments on the NIH ChestX-ray14 dataset, which contains 112,120 frontal-view radiographs, each annotated with multiple thorax disease labels. In the experiments, the average AUC score of our proposed CheXGAT model reached 0.8266, and the AUC scores for Emphysema and Hernia reached 0.9447 and 0.9313, respectively. In addition, we visualized the model's explanations with a gradient-based localization method. Compared with previous studies, the experimental results demonstrate the competitive performance of our framework. In summary, we propose a CAD system that uses a hybrid model to help radiologists identify 14 thorax diseases in CXR images for clinical diagnosis.
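To make the GRL module's attention-based aggregation concrete, the sketch below shows a minimal single-head graph-attention layer over 14 disease nodes in PyTorch. This is an illustrative reconstruction, not the authors' implementation: the class name `DiseaseGraphAttention`, the tensor shapes, and the way the data-driven correlation matrix `adj` is used to mask attention scores are all assumptions; in the paper, the correlation matrix would be estimated from label co-occurrence statistics in the training set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiseaseGraphAttention(nn.Module):
    """Minimal single-head graph-attention layer over disease nodes.

    Each disease node carries a feature vector (e.g. CNN features
    projected per class by the IRL module). Attention coefficients are
    computed only along edges permitted by a data-driven disease
    correlation (adjacency) matrix, and neighborhood features are then
    aggregated with those attention weights.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        # Additive attention parameters, in the style of GAT.
        self.attn_src = nn.Parameter(torch.empty(out_dim))
        self.attn_dst = nn.Parameter(torch.empty(out_dim))
        nn.init.xavier_uniform_(self.proj.weight)
        nn.init.normal_(self.attn_src, std=0.1)
        nn.init.normal_(self.attn_dst, std=0.1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (N, in_dim) node features for N disease classes
        # adj: (N, N) correlation/adjacency matrix; nonzero = edge
        h = self.proj(x)                                 # (N, out_dim)
        e = (h @ self.attn_src).unsqueeze(1) \
            + (h @ self.attn_dst).unsqueeze(0)           # (N, N) raw scores
        e = F.leaky_relu(e, negative_slope=0.2)
        # Mask out disease pairs with no correlation edge before softmax.
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=1)                  # weights per neighbor
        return F.elu(alpha @ h)                          # aggregated features


if __name__ == "__main__":
    num_diseases, feat_dim = 14, 256
    node_feats = torch.randn(num_diseases, feat_dim)
    # Hypothetical data-driven correlation matrix, e.g. thresholded
    # label co-occurrence probabilities; self-loops are kept so every
    # node attends to itself.
    adj = (torch.rand(num_diseases, num_diseases) > 0.5).float()
    adj.fill_diagonal_(1.0)
    layer = DiseaseGraphAttention(feat_dim, 128)
    out = layer(node_feats, adj)
    print(out.shape)  # torch.Size([14, 128])
```

Under this reading, the correlation matrix acts as a sparsity mask on the attention scores, so each disease node only aggregates information from diseases it is empirically correlated with.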