Abstract

Timely diagnosis plays a critical role in determining melanoma prognosis, prompting the development of deep learning models to aid clinicians. Questions persist regarding the efficacy of training models on clinical images alone or in conjunction with dermoscopy images. This study compares the melanoma classification performance of three types of CNN models: those trained on clinical images, on dermoscopy images, and on a combination of paired clinical and dermoscopy images from the same lesion. We divided 914 image pairs into training, validation, and test sets. Models were built using pre-trained Inception-ResNetV2 convolutional layers for feature extraction, followed by binary classification. Training comprised 20 models per CNN type, each using a set of random hyperparameters, and the best models were chosen based on validation AUC-ROC. Significant AUC-ROC differences were found between the clinical and dermoscopy models (0.661 vs. 0.869, p<0.001) and between the clinical and clinical + dermoscopy models (0.661 vs. 0.822, p=0.001). Significant sensitivity differences were found between the clinical and dermoscopy models (0.513 vs. 0.799, p=0.01), the dermoscopy and clinical + dermoscopy models (0.799 vs. 1.000, p=0.02), and the clinical and clinical + dermoscopy models (0.513 vs. 1.000, p<0.001). Significant specificity differences were found between the dermoscopy and clinical + dermoscopy models (0.800 vs. 0.288, p<0.001) and between the clinical and clinical + dermoscopy models (0.650 vs. 0.288, p<0.001). CNN models trained on dermoscopy images outperformed those relying solely on clinical images under our study conditions. The potential advantages of incorporating paired clinical and dermoscopy images for CNN-based melanoma classification appear less clear based on our findings.
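
For readers who want a concrete picture of the architecture summarized above, the following is a minimal sketch, assuming a TensorFlow/Keras implementation in which an ImageNet-pre-trained Inception-ResNetV2 backbone serves as a frozen feature extractor feeding a small binary classification head. The framework, layer sizes, optimizer, and hyperparameter values are illustrative assumptions, not the authors' published code.

```python
# Hypothetical sketch of the transfer-learning setup described in the abstract:
# pre-trained Inception-ResNetV2 convolutional layers used as a frozen feature
# extractor, followed by a binary (melanoma vs. non-melanoma) classification head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

def build_melanoma_classifier(input_shape=(299, 299, 3), dense_units=256, dropout=0.5):
    # Load the ImageNet-pre-trained convolutional base without its classification top.
    base = InceptionResNetV2(include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # use the convolutional layers purely for feature extraction

    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(dense_units, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # melanoma probability

    model = models.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
        loss="binary_crossentropy",
        # Validation AUC-ROC is the selection criterion reported in the study.
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model
```

In the study design, 20 such models per image type would be trained with randomly drawn hyperparameters (e.g., values of `dense_units`, `dropout`, and learning rate in this sketch), and the model with the highest validation AUC-ROC retained for evaluation on the test set.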
