Abstract

Unsupervised domain adaptation (UDA) is an emerging learning paradigm that builds models on unlabeled datasets by leveraging knowledge from models trained on other labeled datasets, where the statistical distributions of these datasets usually differ. Formally, UDA leverages knowledge from a labeled source domain to improve learning in an unlabeled target domain. Although a variety of methods have been proposed to address the UDA problem, most are dedicated to the single-source, single-target setting, while work on single-source-to-multitarget adaptation is relatively rare. Compared with the single-source, single-target scenario, UDA from a single source domain to multiple target domains is more challenging, since it must consider not only the relationships between the source and the target domains but also those among the target domains. To this end, this article proposes a dictionary learning-based unsupervised multitarget domain adaptation method (DL-UMTDA). In DL-UMTDA, a common dictionary is constructed to correlate the single source and the multiple target domains, while individual dictionaries are designed to exploit the private knowledge of each target domain. By learning the corresponding dictionary representation coefficients in the UDA process, the correlations from the source to the target domains, as well as the potential relationships among the target domains, can be effectively exploited. In addition, we design an alternating algorithm to solve the DL-UMTDA model with a theoretical convergence guarantee. Finally, extensive experiments on a benchmark dataset (Office + Caltech) and real-world datasets (AgeDB, Morph, and CACD) validate the superiority of the proposed method.
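To make the shared-plus-private dictionary idea concrete, the following is a minimal, illustrative sketch of alternating optimization for a common dictionary shared by all domains and a private dictionary per target domain. It is not the exact DL-UMTDA objective or algorithm from the paper: the ridge (Frobenius-norm) regularization, the closed-form least-squares updates, and all names and hyperparameters (k_common, k_private, lam, n_iters) are assumptions chosen for simplicity.

```python
# Illustrative sketch only: shared/private dictionary learning by alternating
# ridge-regularized least squares. The real DL-UMTDA formulation may differ.
import numpy as np

def alternating_dictionary_learning(X_src, X_tgts, k_common=20, k_private=10,
                                    lam=0.1, n_iters=50, seed=0):
    """Learn a common dictionary D_c shared by all domains and a private
    dictionary D_p[t] per target domain.
    X_src: (d, n_s) source samples; X_tgts: list of (d, n_t) target samples."""
    rng = np.random.default_rng(seed)
    d = X_src.shape[0]
    D_c = rng.standard_normal((d, k_common))
    D_p = [rng.standard_normal((d, k_private)) for _ in X_tgts]

    def ridge_codes(D, X):
        # A = argmin_A ||X - D A||_F^2 + lam ||A||_F^2  (closed form)
        k = D.shape[1]
        return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ X)

    for _ in range(n_iters):
        # 1) coding step: representation coefficients for every domain
        A_s = ridge_codes(D_c, X_src)
        A_t = [ridge_codes(np.hstack([D_c, Dp]), Xt)
               for Dp, Xt in zip(D_p, X_tgts)]

        # 2) dictionary step: refit the common dictionary on all domains
        #    (subtracting each target's privately explained part), then refit
        #    each private dictionary on its own domain's residual
        X_all = np.hstack([X_src] + [Xt - Dp @ At[k_common:]
                                     for Xt, Dp, At in zip(X_tgts, D_p, A_t)])
        A_all = np.hstack([A_s] + [At[:k_common] for At in A_t])
        D_c = np.linalg.solve(A_all @ A_all.T + lam * np.eye(k_common),
                              A_all @ X_all.T).T
        for i, (Xt, At) in enumerate(zip(X_tgts, A_t)):
            R = Xt - D_c @ At[:k_common]        # residual explained privately
            Ap = At[k_common:]
            D_p[i] = np.linalg.solve(Ap @ Ap.T + lam * np.eye(k_private),
                                     Ap @ R.T).T

        # keep dictionary atoms at unit scale
        D_c /= np.linalg.norm(D_c, axis=0, keepdims=True) + 1e-12
        D_p = [D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
               for D in D_p]
    return D_c, D_p
```

In this sketch, the coefficients on the common dictionary couple the source and every target domain, while each private dictionary absorbs target-specific structure; the abstract's alternating algorithm plays an analogous role, although its actual objective, regularizers, and convergence analysis are given in the paper itself.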
