Deep learning (DL) techniques have been extensively applied to medical image classification. However, the unique characteristics of medical imaging data present challenges, including small labeled datasets, severely imbalanced class distributions, and significant variations in imaging quality. Recently, generative adversarial network (GAN)-based classification methods have gained attention for their ability to enhance classification accuracy by incorporating realistic GAN-generated images as data augmentation. However, the performance of these GAN-based methods often depends on high-quality generated images, and large amounts of training data are required to train GAN models to achieve optimal performance. In this study, we propose an adversarial learning-based classification framework to achieve better classification performance. Innovatively, GAN models are employed as supplementary regularization terms to support classification, addressing the challenges described above. The proposed classification framework, GAN-DL, consists of a feature extraction network (F-Net), a classifier, and two adversarial networks: a reconstruction network (R-Net) and a discriminator network (D-Net). The F-Net extracts features from input images, and the classifier uses these features for classification. R-Net and D-Net follow the GAN architecture: R-Net uses the extracted features to reconstruct the original images, while D-Net discriminates between the reconstructed and original images. An iterative adversarial learning strategy guides model training by incorporating multiple network-specific loss functions. These loss functions, serving as supplementary regularization, are derived automatically during the reconstruction process and require no additional data annotation.
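The training objective implied above can be sketched as a classification loss augmented by reconstruction and discrimination terms. The following is a minimal illustrative sketch, not the authors' actual formulation: the function name `gan_dl_loss` and the weighting coefficients `lam_rec` and `lam_adv` are hypothetical assumptions introduced here only to make the regularization idea concrete.

```python
def gan_dl_loss(cls_loss, rec_loss, adv_loss, lam_rec=1.0, lam_adv=0.1):
    """Illustrative total loss for updating F-Net and the classifier.

    cls_loss: supervised classification term
    rec_loss: R-Net reconstruction term (no extra annotation needed)
    adv_loss: D-Net discrimination term (no extra annotation needed)
    lam_rec, lam_adv: assumed regularization weights (hypothetical)
    """
    # The two adversarial terms act as supplementary regularization
    # on top of the ordinary classification objective.
    return cls_loss + lam_rec * rec_loss + lam_adv * adv_loss


# Example: with cls_loss=1.0, rec_loss=2.0, adv_loss=3.0 and the default
# weights, the combined loss is 1.0 + 1.0*2.0 + 0.1*3.0 = 3.3.
total = gan_dl_loss(1.0, 2.0, 3.0)
```

In an iterative adversarial scheme of this kind, D-Net would be updated to separate reconstructed from original images, while F-Net/R-Net are updated against the combined objective; the exact alternation schedule is described in the full paper.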
To verify the model's effectiveness, we performed experiments on two datasets: a COVID-19 dataset with 13,958 chest X-ray images and an oropharyngeal squamous cell carcinoma (OPSCC) dataset with 3255 positron emission tomography images. Thirteen classic DL-based classification methods were implemented on the same datasets for comparison. Performance metrics included precision, sensitivity, specificity, and F1-score. In addition, we conducted ablation studies to assess the effects of various factors on model performance, including the network depth of F-Net, training image size, training dataset size, and loss function design. Our method outperformed all comparative methods in precision, sensitivity, specificity, and F1-score on both the COVID-19 and OPSCC datasets. The study investigating the effects of the two adversarial networks highlights the crucial role of D-Net in improving model performance, and the ablation studies provide an in-depth understanding of our methodology. Our adversarial learning-based classification framework leverages GAN-based adversarial networks and an iterative adversarial learning strategy to harness supplementary regularization during training. This design significantly enhances classification accuracy and mitigates overfitting in medical image datasets. Moreover, its modular design demonstrates flexibility and indicates potential applicability to various clinical contexts and medical imaging applications.