Abstract

Deep learning and computer vision have achieved remarkable success in many areas of machine learning and medical diagnostics. However, there is still a considerable gap between dermatologists' skin cancer diagnosis and reliable computer-aided melanoma detection. There are several reasons behind this gap, and the insufficient availability of data for training deep learning networks is one of them. Data augmentation is a popular technique for enlarging the training set to mitigate the lack of data. In this paper, a conditional generative adversarial network (CGAN) is proposed to produce high-resolution synthetic images that augment the training data and improve the performance of skin cancer detection systems. Generating artificial images that resemble real ones is a difficult task owing to the highly variable characteristics of skin lesions, such as irregular borders, diameter, shape, color, and texture. The generator module of the CGAN is designed to aggregate information from all feature layers and produce synthetic images. Additionally, the generator incorporates auxiliary information along with the image inputs to successfully map the latent feature components. The network is trained on 10,015 skin cancer images taken from the International Skin Imaging Collaboration (ISIC 2018) dataset. The experiments showed that the proposed model achieved better classification performance than training on the imbalanced original dataset and than other state-of-the-art methods.
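The core conditioning idea described above, feeding auxiliary class information alongside the latent input of the generator, can be sketched as follows. This is a minimal illustration under assumed dimensions (a 100-dimensional noise vector and the 7 lesion categories of the ISIC 2018 / HAM10000 data), not the paper's actual architecture; the helper name is hypothetical.

```python
import numpy as np

NUM_CLASSES = 7    # ISIC 2018 (HAM10000) defines 7 lesion categories
LATENT_DIM = 100   # assumed noise dimensionality for illustration

def conditional_input(noise, label):
    """Build a CGAN generator input by concatenating latent noise
    with a one-hot encoding of the auxiliary class label."""
    one_hot = np.zeros(NUM_CLASSES)
    one_hot[label] = 1.0
    return np.concatenate([noise, one_hot])

rng = np.random.default_rng(0)
z = rng.standard_normal(LATENT_DIM)
g_input = conditional_input(z, label=3)
print(g_input.shape)  # (107,)
```

In a full CGAN, this conditioned vector would be passed through the generator network (e.g. transposed-convolution layers) to produce a synthetic lesion image of the requested class, letting one oversample rare classes to balance the dataset.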
