Abstract

Scarcity of labeled data in a domain of interest is a common and serious problem in machine learning. It is widely accepted that leveraging labeled data from a semantically related yet covariate-shifted source domain can facilitate learning in the domain of interest. To bridge the shift between domains and reduce learning ambiguity, unsupervised domain adaptation (UDA) greatly promotes the transferability of model parameters. However, the dilemma between over-fitting (negative transfer) and under-fitting (under-adaptation) remains an overlooked challenge and potential risk. In this paper, we rethink the shallow learning paradigm and this intractable over/under-fitting problem, and propose a safer UDA model, termed Bilateral Co-Transfer (BCT), which goes beyond the previous, well-known unilateral transfer. With bilateral co-transfer between domains, the risk of over/under-fitting is largely reduced. Technically, the proposed BCT is a symmetrical structure in which the joint distribution discrepancy (JDD) is modeled for domain alignment and category discrimination. Specifically, a symmetrical bilateral transfer (SBT) loss between the source and target domains is proposed under the philosophy of mutual checks and balances. First, each target sample is represented by source samples under a low-rank constraint in a common subspace, so that the most informative and transferable source data are used to alleviate negative transfer. Second, each source sample is symmetrically and sparsely represented by target samples, so that the most reliable target samples are exploited to tackle under-adaptation. Experiments on various benchmarks show that BCT outperforms many strong prior methods.
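To make the two representation terms concrete, the following is a minimal sketch of how an SBT-style objective could be written; the symbols X_s and X_t (source and target feature matrices), P (the shared projection into the common subspace), Z and W (representation coefficients), and the trade-off weights alpha and beta are illustrative assumptions, not the paper's exact notation:

\min_{P,\,Z,\,W} \;
  \left\| P^{\top} X_t - P^{\top} X_s Z \right\|_F^2 + \alpha \left\| Z \right\|_*
  \;+\;
  \left\| P^{\top} X_s - P^{\top} X_t W \right\|_F^2 + \beta \left\| W \right\|_1

Here the nuclear norm \|Z\|_* enforces low-rankness, so each target sample is reconstructed from structurally correlated source samples (alleviating negative transfer), while the \ell_1 norm \|W\|_1 sparsely selects the most reliable target samples to represent each source sample (countering under-adaptation).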
