Abstract

Transfer learning (TL) has been proposed as an effective way to learn a classifier for a new domain using the rich supervision available in a variety of related domains. For successful transfer of information from related domains to the new domain, existing TL methods attempt to minimize both the geometric shift and the distribution shift between domains while preserving the source-domain discriminative information and the original similarity structure of the data. However, these methods fail to preserve target-domain discriminative information because labeled target-domain data are unavailable. Furthermore, they operate on the original feature space, which may contain features that are unnecessary or irrelevant to the final classification. To this end, a novel Discriminative Information Preservation framework for unsupervised visual Domain Adaptation (DIPDA) is proposed in this paper. Specifically, DIPDA considers low-dimensional manifolds and preserves the target-domain discriminative information (i.e., maximizing between-class variance and minimizing within-class variance) along with the other objectives above. As a result, the most informative features can be extracted and the complexity of the model is significantly reduced. Moreover, the proposed DIPDA method is extended to nonlinear problems in a Reproducing Kernel Hilbert Space (RKHS) using linear and RBF kernel functions. Compared with several state-of-the-art primitive, shallow, and deep domain adaptation methods, DIPDA substantially improves classification results on four widely used real-world domain adaptation datasets, which verifies the effectiveness and efficiency of the proposed methods.
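The discriminative criterion mentioned above, maximizing between-class variance while minimizing within-class variance, is classically expressed through scatter matrices. The sketch below computes these two quantities for labeled data; it is an illustrative NumPy implementation of the generic scatter-matrix definitions, not the paper's actual DIPDA objective, and all function and variable names here are hypothetical.

```python
import numpy as np

def scatter_matrices(X, y):
    """Compute the within-class (Sw) and between-class (Sb) scatter
    matrices for data X (n samples x d features) with labels y.
    Sw sums each class's scatter around its own mean; Sb sums the
    size-weighted scatter of class means around the global mean."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]                      # samples of class c
        mc = Xc.mean(axis=0)                # class mean
        Sw += (Xc - mc).T @ (Xc - mc)       # within-class scatter
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += Xc.shape[0] * diff @ diff.T   # between-class scatter
    return Sw, Sb

# toy data: two well-separated classes in 2-D
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
Sw, Sb = scatter_matrices(X, y)
```

A discriminative embedding then seeks a projection that makes Sb large relative to Sw (e.g., maximizing a trace or determinant ratio); a useful sanity check is that Sw + Sb always equals the total scatter of the data around the global mean.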
