Abstract

Melanoma is the fastest-growing and most lethal form of skin cancer. Deep learning methods, mainly convolutional neural networks (CNNs), have recently attracted considerable attention for detecting skin cancers from dermoscopy images. However, learning valuable features with these methods is challenging due to inadequate training data, inter-class similarity, and intra-class variation in skin lesions. In addition, most of these methods require a large number of parameters to be tuned. To address these issues, we present an automated framework that extracts visual features from dermoscopy images using a pre-trained deep CNN model and then employs a set of classifiers to detect melanoma. Recently, a few pre-trained CNN architectures have been employed to extract deep features from skin lesions; however, a comprehensive analysis of such features derived from a variety of CNN architectures has not yet been performed for melanoma classification. Therefore, in this paper, we investigate the effectiveness of deep features extracted from eight contemporary CNN models. We also explore the impact of boundary localization and normalization techniques on melanoma detection. The proposed approach is evaluated on four benchmark datasets: PH2, ISIC 2016, ISIC 2017, and HAM10000. Experimental results show that DenseNet-121 with a multi-layer perceptron (MLP) achieves accuracies of 98.33%, 80.47%, 81.16%, and 81% on the PH2, ISIC 2016, ISIC 2017, and HAM10000 datasets, respectively, outperforming the other CNN models and state-of-the-art methods.
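
The pipeline summarized above (deep features from a pre-trained CNN fed into an MLP classifier) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the 224x224 input size, the global-average pooling, the MLP hidden-layer width, and the placeholder data are all assumptions, and in practice the images and labels would come from PH2, ISIC, or HAM10000.

```python
# Minimal sketch: deep features from a pre-trained DenseNet-121
# classified with an MLP. Names and hyperparameters are illustrative.
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.neural_network import MLPClassifier

# DenseNet-121 pre-trained on ImageNet, classification head removed;
# global average pooling yields a 1024-dimensional feature vector.
extractor = DenseNet121(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images), verbose=0)

# Hypothetical placeholder data: replace with dermoscopy images
# resized to 224x224 and their melanoma/benign labels.
X_train = np.random.rand(8, 224, 224, 3) * 255.0
y_train = np.array([0, 1, 0, 1, 0, 1, 0, 1])

features = extract_features(X_train)              # shape (8, 1024)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
clf.fit(features, y_train)                        # melanoma vs. benign
```

Because the CNN weights stay frozen, only the small MLP is trained, which matches the abstract's motivation of avoiding models with a large number of parameters to tune when training data are scarce.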
