Abstract

Human skin, the body's largest organ with multiple layered functions, houses melanocytes in the deepest layer of its epidermis. These cells can be damaged by ultraviolet radiation, giving rise to melanoma, the deadliest form of skin cancer. If melanoma is not detected at an early stage, it can metastasize and form complex tumors in other tissues. Despite substantial effort, visual inspection can miss melanoma cases because of its inherent subjectivity, so an automated detection system is needed. Recent attempts to build such a system have predominantly relied on "push-through" strategies involving deep neural networks and their ensembles, which, however, require significant computational resources. This paper presents a novel approach that combines a conventional machine learning technique, Bag of Visual Words, with a pretrained deep neural network for comprehensive deep feature extraction from enhanced input image patches. Evaluated on the ISIC Challenge 2017 dataset, the proposed method surpassed all other entries on the challenge leaderboard, achieving an accuracy of 96.2% on the lesion classification task.
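
The abstract only outlines the approach at a high level. The sketch below is a minimal, hypothetical Python rendering of a Bag of Visual Words pipeline built on deep features extracted from image patches; the backbone network (ResNet-50), patch size, vocabulary size, classifier, and omitted image-enhancement step are illustrative assumptions and not the paper's stated configuration.

```python
# Hypothetical sketch: Bag of Visual Words over deep patch features.
# Backbone, patch/stride, vocabulary size, and classifier are assumptions,
# not the configuration reported in the paper.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Pretrained CNN with its classification head removed -> one deep feature per patch.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # yields a 2048-D feature vector
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_patch_features(image, patch=64, stride=64):
    """Slide a window over the (already enhanced) lesion image and return
    one deep feature vector per patch."""
    feats = []
    h, w, _ = image.shape
    with torch.no_grad():
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                crop = image[y:y + patch, x:x + patch]
                tensor = preprocess(crop).unsqueeze(0)
                feats.append(backbone(tensor).squeeze(0).numpy())
    return np.stack(feats)

def encode_bovw(patch_feats, kmeans):
    """Quantize patch features against the visual vocabulary and return a
    normalized histogram of visual-word occurrences."""
    words = kmeans.predict(patch_feats)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

def train(images, labels, vocab_size=256):
    """images: list of HxWx3 uint8 arrays; labels: 0 = benign, 1 = melanoma."""
    all_feats = [extract_patch_features(img) for img in images]
    kmeans = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(all_feats))
    X = np.stack([encode_bovw(f, kmeans) for f in all_feats])
    clf = SVC(kernel="rbf").fit(X, labels)
    return kmeans, clf
```

At test time an image would follow the same path: extract patch features, encode them against the learned vocabulary, and classify the resulting histogram.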
