Abstract

Unsupervised domain adaptation methods train an effective model by transferring knowledge from a labeled source domain to solve tasks in an unlabeled target domain. The central challenge is to reduce the distribution discrepancy between the source and target domains while extracting as many domain-invariant features as possible to improve model performance. With the aim of minimizing domain shift and maximizing domain-invariant feature extraction, we propose a cross-domain structure learning (CDSL) method for visual data recognition, which combines global distribution alignment with local discriminative structure preservation to capture the common underlying features shared across domains. Specifically, we design a simple but effective classwise structure learning strategy with a compactness hierarchy that promotes intraclass knowledge transfer and reduces the risk of negative transfer between domains. We also extend CDSL with several kinds of kernelization to handle complex real-world situations. Extensive experiments on several visual data benchmarks demonstrate the effectiveness of the proposed method.
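
To make the global distribution alignment idea concrete, the following is a minimal, illustrative sketch (not the authors' CDSL formulation) of measuring the discrepancy between source and target feature distributions with a kernelized maximum mean discrepancy (MMD); the function names, the RBF kernel choice, and the toy data are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(Xs, Xt, gamma=1.0):
    """Biased estimate of the squared MMD between source and target samples."""
    k_ss = rbf_kernel(Xs, Xs, gamma)   # source-source similarities
    k_tt = rbf_kernel(Xt, Xt, gamma)   # target-target similarities
    k_st = rbf_kernel(Xs, Xt, gamma)   # cross-domain similarities
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Toy example: source and target features drawn from slightly shifted Gaussians.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 64))   # labeled source features
Xt = rng.normal(0.5, 1.0, size=(200, 64))   # unlabeled target features
print(f"squared MMD between domains: {mmd_squared(Xs, Xt, gamma=0.1):.4f}")
```

In alignment-based adaptation methods of this kind, a discrepancy term such as the one above is typically minimized jointly with a task loss so that the learned features become domain-invariant; classwise variants restrict the comparison to samples of the same (pseudo-)label to preserve discriminative structure.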
