Abstract
Deep learning has been used to analyze and diagnose various skin diseases from medical images. However, recent research shows that a well-trained deep learning model may not generalize well to data from different cohorts because of domain shift. Simple data fusion techniques, such as combining disease samples from different data sources, are not effective in solving this problem. In this paper, we present two methods for the novel task of cross-domain skin disease recognition. Starting from a fully supervised deep convolutional neural network classifier pre-trained on ImageNet, we explore a two-step progressive transfer learning technique that fine-tunes the network on two skin disease datasets. We then propose adversarial learning as a domain adaptation technique that performs invariant attribute translation from the source to the target domain in order to improve recognition performance. To evaluate these two methods, we analyze the generalization capability of the trained models on melanoma detection, cancer detection, and cross-modality learning tasks, using two skin image datasets collected from different clinical settings and cohorts with different disease distributions. The experiments demonstrate the effectiveness of our methods in addressing the domain shift problem.