Abstract

Deep learning plays a crucial role in image classification, and its reach now extends to medical image segmentation and classification, where it has produced breakthroughs across a range of computer vision tasks. However, the creation of large-scale annotated datasets for such classification remains a significant obstacle. A generative adversarial network (GAN) can be used to synthesize medical images of melanoma skin cancer. The resulting synthetic images are then fed into a convolutional neural network (CNN) based on an improvised VGG-16 architecture; CNNs are well known for their ability to classify images with high accuracy. First, GAN architectures are used to create high-quality skin lesion images with regions of interest. Then, a novel CNN-based classification technique for skin lesions is proposed. Finally, we compare the effectiveness of conventional data augmentation with our synthetic-data augmentation of skin lesion images. When the GAN-generated lesion datasets are used as input to the CNN, the accuracy is 96.33%. The model serves as a computer-aided diagnosis method that classifies melanoma skin lesion images into the classes benign (noncancerous), premalignant, and malignant (cancerous) with a high accuracy rate while avoiding complex preprocessing steps, which ultimately reduces execution time. In addition, we present a qualitative analysis of our synthetic images through visualization and expert assessment. We expect that other medical classification applications could benefit from our synthetic data augmentation, supporting clinical practitioners in their efforts to improve diagnosis.
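The augmentation pipeline the abstract describes (GAN-synthesized lesion images appended to the real training set before CNN classification into three classes) can be sketched schematically as below. This is a minimal NumPy illustration of the data flow only: `generate_synthetic_lesions` is a hypothetical stand-in for a trained GAN generator, the uniform synthetic labels are an assumption for illustration, and neither the GAN nor the improvised VGG-16 classifier from the paper is reproduced here.

```python
import numpy as np

# Three diagnostic classes from the abstract.
CLASSES = ["benign", "premalignant", "malignant"]

def generate_synthetic_lesions(n, rng, img_shape=(64, 64, 3)):
    """Stand-in for a trained GAN generator: maps random noise to
    image-shaped arrays in [0, 1]. A real implementation would be a
    trained generator network; this stub only shows the interface
    (noise in, images out)."""
    noise = rng.standard_normal((n, *img_shape))
    return (np.tanh(noise) + 1.0) / 2.0  # squash to pixel-like range

def augment_dataset(real_images, real_labels, n_synthetic, rng):
    """Append GAN-style synthetic images to the real training set.
    Synthetic class labels are sampled uniformly here, purely for
    illustration; in practice they come from the generation setup."""
    synth_images = generate_synthetic_lesions(
        n_synthetic, rng, img_shape=real_images.shape[1:]
    )
    synth_labels = rng.integers(0, len(CLASSES), size=n_synthetic)
    images = np.concatenate([real_images, synth_images], axis=0)
    labels = np.concatenate([real_labels, synth_labels], axis=0)
    return images, labels

rng = np.random.default_rng(0)
real_x = rng.random((100, 64, 64, 3))          # placeholder "real" lesion images
real_y = rng.integers(0, len(CLASSES), 100)    # placeholder labels
aug_x, aug_y = augment_dataset(real_x, real_y, n_synthetic=50, rng=rng)
print(aug_x.shape)  # → (150, 64, 64, 3)
```

The augmented arrays `aug_x`/`aug_y` would then be passed to the CNN classifier; the comparison in the paper contrasts this synthetic augmentation against conventional augmentation (flips, rotations, etc.) on the same classifier.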
