Abstract

To benefit skin care, this paper aims to design an automatic and effective visual analysis framework for recognizing skin diseases from a given image of the disease-affected skin surface. This task is nontrivial, since it is hard to collect sufficient well-labeled samples. To address this problem, we present a novel transfer learning model that incorporates external knowledge obtained from the rich and relevant Web images contributed by grassroots users. In particular, we first construct a target domain by crawling a small set of images from vertical, professional dermatological websites. We then construct a source domain by collecting a large set of skin disease-related images from commercial search engines. To reinforce the learning performance in the target domain, we initially build a learning model in the target domain and then seamlessly leverage the training samples in the source domain to enhance it. The distribution gap between these two domains is bridged by a linear combination of Gaussian kernels. Instead of training models with low-level features, we resort to deep models to learn succinct, invariant, and high-level image representations. Different from previous efforts that focus on a few types of skin diseases with small, confidential image sets generated by hospitals, this paper targets thousands of commonly seen skin diseases with publicly accessible Web images. Hence, the proposed model is easily reproducible by other researchers and extendable to other disease types. Extensive experiments on a real-world dataset demonstrate the superiority of our proposed method over state-of-the-art competitors.
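The abstract's mention of bridging the domain gap with a linear combination of Gaussian kernels suggests a multi-kernel maximum mean discrepancy (MK-MMD) style criterion. As an illustrative sketch only (the paper's exact formulation, kernel bandwidths, and weights are not given here, and the symbols below are placeholders), the standard empirical MK-MMD between source samples $\{x_i\}_{i=1}^{n_s}$ and target samples $\{y_j\}_{j=1}^{n_t}$, computed on their deep representations, is

\[
\widehat{\mathrm{MMD}}_k^2(\mathcal{S},\mathcal{T})
= \frac{1}{n_s^2}\sum_{i,i'} k(x_i, x_{i'})
+ \frac{1}{n_t^2}\sum_{j,j'} k(y_j, y_{j'})
- \frac{2}{n_s n_t}\sum_{i,j} k(x_i, y_j),
\qquad
k = \sum_{m=1}^{M} \beta_m k_m,\ \ \beta_m \ge 0,
\]

where each $k_m(x, x') = \exp\!\big(-\lVert x - x' \rVert^2 / (2\sigma_m^2)\big)$ is a Gaussian kernel with bandwidth $\sigma_m$. Minimizing such a quantity would align the distribution of the source domain (search-engine images) with that of the target domain (dermatological-website images) during training; $n_s$, $n_t$, $\beta_m$, and $\sigma_m$ are illustrative and not values reported in the paper.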
