Abstract
Melanoma is the most lethal of all skin cancers, motivating machine learning-driven detection systems that can help medical professionals with early diagnosis. We propose an integrated multi-modal ensemble framework that combines deep convolutional neural representations with extracted lesion characteristics and patient metadata. This study integrates transfer-learned image features, global and local textural information, and patient data through a custom generator to diagnose skin cancer accurately. The architecture combines multiple models in a weighted ensemble strategy, trained and validated on three distinct datasets: HAM10000, BCN20000 + MSK, and the ISIC2020 challenge dataset. Models were evaluated on the mean values of precision, recall (sensitivity), specificity, and balanced accuracy; sensitivity and specificity are especially important in diagnostics. The model achieved sensitivities of 94.15%, 86.69%, and 86.48% and specificities of 99.24%, 97.73%, and 98.51% on the respective datasets. Additionally, accuracy on the malignant classes of the three datasets was 94%, 87.33%, and 89%, significantly higher than the physician recognition rate. The results demonstrate that our weighted-voting integrated ensemble strategy outperforms existing models and could serve as an initial diagnostic tool for skin cancer.
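The abstract does not specify the ensemble's component models or weights, but the weighted-voting strategy it names can be illustrated with a minimal sketch: each model emits class probabilities, and the ensemble averages them under normalized weights before taking the arg-max. The weights and toy scores below are hypothetical, not the paper's values.

```python
import numpy as np

def weighted_ensemble(probs_list, weights):
    """Combine per-model class-probability arrays by weighted averaging
    (soft voting) and return the predicted class index per sample."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()               # normalize so weights sum to 1
    stacked = np.stack(probs_list)                  # (n_models, n_samples, n_classes)
    combined = np.tensordot(weights, stacked, axes=1)  # weighted average over models
    return combined.argmax(axis=1)

# Toy example: two models scoring three lesions as benign (0) or malignant (1)
m1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
m2 = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])
print(weighted_ensemble([m1, m2], weights=[0.6, 0.4]))  # → [0 1 1]
```

Soft voting of this kind lets a stronger model (here weighted 0.6) dominate ambiguous cases, such as the second lesion, where the two models disagree.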