Abstract

A computer-aided diagnostic system for melanoma often uses several distinct kinds of features to characterize lesions. Extracting distinct features from melanocytic images captures the varied characteristics of pigmented lesions, and fusing these features preserves their complementary information while eliminating the redundancy among them, which aids in discriminating cancerous from noncancerous lesions. This article proposes a framework comprising segmentation, feature extraction, feature fusion, and classification to differentiate benign lesions from melanoma. The framework proceeds in four stages. First, the region of interest (ROI) is extracted from each image and segmented using the SLICACO method. Second, ABCD rule-based global and local features are extracted for effective melanoma detection. Third, a new hybrid feature-fusion strategy, PCAFA, is developed, leveraging the strengths of principal component analysis and factor analysis: early fusion concatenates all extracted features into a single feature vector that is fed to a learning model for prediction, while late fusion combines the outputs of the machine learning models by majority voting. Fourth, gradient tree boosting, support vector machine, and decision tree models are trained on the distinct skin-lesion features to classify lesions as benign or malignant. The effectiveness of the designed framework is validated on the ISIC2017 benchmark skin-lesion dataset in terms of specificity, sensitivity, and accuracy, and its generalizability is gauged through a fair comparison with conventional methods. The results reveal the potential of the proposed fused feature set, which discriminates malignant from nonmalignant lesions with an accuracy of 96.8%.
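The fusion and classification stages described above can be sketched roughly as follows in scikit-learn. This is a minimal illustration, not the paper's implementation: synthetic data stands in for the ABCD-rule lesion features, the exact PCAFA combination rule is an assumption (here, simple concatenation of PCA and factor-analysis projections), and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted lesion features; the real pipeline would
# compute ABCD rule-based features from SLICACO-segmented ISIC2017 images.
X, y = make_classification(n_samples=300, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hypothetical PCAFA-style early fusion: concatenate PCA and factor-analysis
# projections into a single feature vector.
pca = PCA(n_components=8).fit(X_tr)
fa = FactorAnalysis(n_components=8).fit(X_tr)
fuse = lambda X: np.hstack([pca.transform(X), fa.transform(X)])

# Late fusion: hard majority vote over the three classifiers named in the text.
vote = VotingClassifier(
    estimators=[
        ("gtb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)
vote.fit(fuse(X_tr), y_tr)
acc = vote.score(fuse(X_te), y_te)
print(f"majority-vote accuracy: {acc:.2f}")
```

In a hard-voting ensemble each model casts one vote per sample, so the prediction is the majority class; this corresponds to the late-fusion step, while `fuse` corresponds to the early-fusion step feeding each learner.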