Abstract

Skin cancer has become a common disease, and its incidence is growing worldwide. Manual examination by dermatologists demands significant time and instrument costs, and reliable diagnosis requires experienced and skilled dermatologists. These challenges make manual diagnosis impractical given the growing number of skin cancer patients and call for robust end-to-end computer-aided diagnosis (CAD) methods. This paper proposes a deep learning-based skin lesion classification approach that applies a visual attention mechanism over Convolutional Neural Networks (CNNs) to improve visual context. We combine information from skin lesion images and patient demographics to enhance visual attention, which further improves classification. The proposed method accurately classifies deadly melanoma on the PAD-UFES-20 dataset, an essential but challenging task. Our approach has been evaluated on multimodal data, i.e., clinical and dermoscopic images, using two publicly available datasets, PAD-UFES-20 and ISIC-2019. In our experiments, the approach surpasses state-of-the-art techniques across five commonly used CNN architectures, which validates its generalizability and applicability in different scenarios. It also achieves efficient performance on small datasets such as PAD-UFES-20 using a lightweight model (MobileNet), making it suitable for CAD systems. The effectiveness of our method is shown through various quantitative and qualitative measures, demonstrating its efficacy in addressing challenging lesion diagnoses. Our source code is publicly available to reproduce this work.

Keywords

Multimodal fusion; Computer-aided diagnosis; Attention mechanism
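
The abstract describes fusing patient demographics with CNN image features through an attention mechanism. The sketch below is a minimal illustration of that general idea, not the authors' exact architecture: a hypothetical MetaGuidedAttention module re-weights MobileNet feature channels using metadata, assuming a PyTorch setup; class and metadata dimensions are illustrative.

import torch
import torch.nn as nn
import torchvision.models as models


class MetaGuidedAttention(nn.Module):
    """Illustrative module: re-weights CNN feature channels using patient metadata."""

    def __init__(self, feat_channels: int, meta_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(meta_dim, feat_channels),
            nn.Sigmoid(),  # per-channel attention weights in [0, 1]
        )

    def forward(self, feats: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) image features; meta: (B, meta_dim) demographics
        weights = self.gate(meta).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return feats * weights


class LesionClassifier(nn.Module):
    """Lightweight MobileNet backbone with metadata-guided attention (sketch)."""

    def __init__(self, num_classes: int, meta_dim: int):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)
        self.features = backbone.features           # outputs (B, 1280, H', W')
        self.attention = MetaGuidedAttention(1280, meta_dim)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1280, num_classes)

    def forward(self, image: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        feats = self.attention(self.features(image), meta)
        return self.classifier(self.pool(feats).flatten(1))


# Example usage: 6 diagnostic classes (as in PAD-UFES-20), 10 metadata features (assumed)
model = LesionClassifier(num_classes=6, meta_dim=10)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))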
