Abstract
Accurate melanoma classification from dermoscopy images is challenging due to low contrast between skin lesions and normal tissue regions. The intraclass variance of melanomas in terms of color, texture, shape, size, uncertain boundaries, and lesion location in dermoscopy images adds to the complexity. Artifacts such as body hair, blood vessels, nerves, and ruler and ink marks in dermoscopy images also hinder classification performance. In this work, an inpainting technique has been studied to handle such artifacts. Transfer learning models, namely EfficientNet B4, EfficientNet B5, DenseNet121, and Inception-ResNet V2, have been studied, and ensembling the results of all the mentioned models has also been evaluated. The robustness of the models has been tested using stratified K-fold cross-validation with test time augmentation (TTA). The trained models with the mentioned inpainting technique outperformed several deep learning solutions in the literature on the SIIM-ISIC Melanoma Classification Challenge dataset. EfficientNet B5 achieved the best AUC of 0.9287 among the stand-alone models, and the ensemble solution achieved an AUC of 0.9297.
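The TTA and ensembling steps mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the augmentation set (flips and a 180-degree rotation), the plain mean averaging, and the `model` callables are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def tta_predict(model, image):
    """Average a model's melanoma probability over simple test-time
    augmentations: identity, horizontal flip, vertical flip, and a
    180-degree rotation. `model` is any callable mapping an HxW(xC)
    array to a scalar probability (illustrative, not the paper's API)."""
    views = [
        image,                # identity
        image[:, ::-1],       # horizontal flip
        image[::-1, :],       # vertical flip
        image[::-1, ::-1],    # 180-degree rotation
    ]
    return float(np.mean([model(v) for v in views]))

def ensemble_predict(models, image):
    """Mean ensemble over several TTA-averaged models."""
    return float(np.mean([tta_predict(m, image) for m in models]))

# Toy stand-in "model": mean pixel intensity scaled to [0, 1].
toy = lambda img: img.mean() / 255.0
img = np.arange(12, dtype=float).reshape(3, 4)
prob = ensemble_predict([toy, toy], img)
```

Because the toy model's output is flip-invariant, `prob` here equals `img.mean() / 255.0`; with real CNNs the four views generally give different scores, and averaging them is what stabilizes the prediction.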