Abstract

While recent years have witnessed the remarkable success of deep learning methods in automated skin lesion detection systems, a gap remains between the manual assessment of experts and the automated evaluation of computers. The reason behind this gap is that deep learning models demand considerable amounts of data, while the availability of annotated images is often limited. Data Augmentation (DA) is one way to mitigate the lack of labeled data; however, the augmented images intrinsically share a similar distribution with the original ones, leading to limited performance improvement. To fill the gaps in the real image distribution, we synthesize skin lesion images that are realistic yet completely different from the original ones using Generative Adversarial Networks (GANs). In this paper, we propose the Self-attention Progressive Growing of GANs (SPGGAN) to generate fine-grained 256 × 256 skin lesion images for Convolutional Neural Network-based melanoma detection, a task that is challenging with conventional GANs; difficulties arise from unstable GAN training at high resolution and from the variety of skin lesions in size, shape, and location. In SPGGAN, details can be generated using aggregated information from all feature locations. Moreover, the discriminator can verify that highly detailed features in distant portions of the image are consistent with each other. Furthermore, the Two-Timescale Update Rule (TTUR) is applied to SPGGAN (SPGGAN-TTUR) to improve stability while generating 256 × 256 skin lesion images. SPGGAN-TTUR is evaluated on data generation and classification tasks using the HAM10000 dataset. Our results confirm the importance of the proposed GAN-based DA approach for training skin lesion classifiers and indicate that it yields statistically significant improvements (p-value < 0.05) in sensitivity (recall) over both non-augmented counterparts and counterparts augmented with classical DA. Across all classes, the sensitivity improvements were 5.6% and 2.5% over the non-augmented counterpart and the counterpart augmented with the best classical DA scheme, respectively; for the melanoma class, the improvements were 13.8% and 8.6%. We believe that the proposed approach can be adopted in clinical practice to improve the sensitivity of automated skin lesion detection in dermoscopic images and thus support dermatologists’ efforts to improve melanoma diagnosis.
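To make the two ingredients named above concrete, the sketch below shows a SAGAN-style self-attention block, which aggregates information from all spatial positions of a feature map, and a TTUR optimizer setup that gives the discriminator a larger learning rate than the generator. This is a minimal PyTorch sketch under assumed settings: the channel-reduction factor and the learning rates are common SAGAN/TTUR defaults, not the configuration reported in the paper.

```python
# Minimal sketch (assumptions noted inline): SAGAN-style self-attention and a
# TTUR optimizer pair; hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention over all spatial positions of a 2D feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key   = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, h*w, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, h*w)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (b, h*w, h*w) attention map
        v = self.value(x).flatten(2)                    # (b, c, h*w)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual: details from all positions

def make_ttur_optimizers(generator, discriminator):
    """Two-Timescale Update Rule: discriminator steps with a larger learning rate."""
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
    return opt_g, opt_d
```

In practice, such an attention block would be inserted at an intermediate resolution of both the generator and the discriminator, while progressive growing adds higher-resolution layers as training proceeds; the placement here is left unspecified because the abstract does not state it.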
