Abstract

Unsupervised domain adaptation (UDA) aims to transfer knowledge from a well-labelled source domain to an unlabelled target domain by reducing the distribution discrepancy between them. In real-world scenarios a fully shared label space is often unavailable, so partial domain adaptation (PDA), in which the target domain contains only a subset of the source-domain classes, is adopted. Non-identical label spaces across domains degrade performance because source-unique classes are mismatched to the target domain, i.e. negative transfer to the target domain. Although existing PDA approaches have produced promising results, they still suffer from negative transfer and lack rigorous generalization bounds. In this work, we propose a novel PDA model based on margin disparity and maximum source intra-class density divergence (MDSD). It matches the feature distributions of the shared classes and congregates source samples belonging to the affirmed (shared) classes. By removing the maximum target density with pseudo labels, it effectively avoids over-fitting and accelerates learning. In addition, we construct a new garbage classification dataset for PDA validation, comprising a source domain, Product (Pr), and a target domain, Garbage (Ga). Experiments show that the proposed unsupervised partial domain adaptation method performs well on the Office-31, Office-Home, and Pr-Ga datasets.
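The abstract describes the method only at a high level. As a rough, illustrative sketch of how PDA approaches commonly suppress source-unique classes to limit negative transfer, the snippet below averages target-domain predictions into per-class weights and re-weights the source classification loss. The function names, the weighting scheme, and the stand-in tensors are assumptions for illustration, not the paper's MDSD formulation.

```python
import torch
import torch.nn.functional as F

def estimate_class_weights(target_logits: torch.Tensor) -> torch.Tensor:
    """Average softmax predictions over unlabelled target samples.
    Classes that are absent from the target domain tend to receive small
    averaged probabilities and are therefore down-weighted."""
    probs = F.softmax(target_logits, dim=1)      # (n_target, n_source_classes)
    weights = probs.mean(dim=0)                  # (n_source_classes,)
    return weights / weights.max()               # normalise to [0, 1]

def weighted_source_loss(source_logits, source_labels, class_weights):
    """Cross-entropy on labelled source samples, re-weighted per class so that
    source-unique classes contribute little to the training signal."""
    per_sample_w = class_weights[source_labels]
    ce = F.cross_entropy(source_logits, source_labels, reduction="none")
    return (per_sample_w * ce).mean()

# Usage with random stand-in tensors (8 source classes assumed, some of
# which would be absent from the target domain in a real PDA setting):
target_logits = torch.randn(128, 8)
source_logits = torch.randn(64, 8)
source_labels = torch.randint(0, 8, (64,))
w = estimate_class_weights(target_logits)
loss = weighted_source_loss(source_logits, source_labels, w)
```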
