Abstract

Semi-supervised heterogeneous domain adaptation (SsHeDA) aims to train a classifier for a target domain in which only unlabeled data and a small number of labeled data are available, by leveraging knowledge acquired from a heterogeneous source domain. From an algorithmic perspective, several methods have been proposed to solve the SsHeDA problem; yet there is still no theoretical foundation that explains the nature of the SsHeDA problem or guides new and better solutions. Motivated by the compatibility condition in semi-supervised probably approximately correct (PAC) theory, we explain the SsHeDA problem by proving a bound on its generalization error, that is, by showing why labeled heterogeneous source data and unlabeled target data help to reduce the target risk. Guided by our theory, we devise two algorithms as proof of concept. One, kernel heterogeneous domain alignment (KHDA), is a kernel-based algorithm; the other, joint mean embedding alignment (JMEA), is a neural network-based algorithm. When a dataset is small, KHDA trains faster than JMEA; when a dataset is large, JMEA is more accurate in the target domain. Comprehensive experiments on image and text classification tasks show KHDA to be the most accurate among all non-neural-network baselines, and JMEA to be the most accurate among all baselines.
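To make the neural network-based proof of concept concrete, the sketch below gives one plausible reading of joint mean embedding alignment: domain-specific projectors map the heterogeneous source and target features into a shared latent space, and a supervised loss on labeled data is combined with a penalty that matches the two domains' mean embeddings there. This is a minimal sketch under those assumptions, not the authors' implementation; all dimensions, module names, and the weight `lam` are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed, illustrative dimensions: d_s-dim source features, d_t-dim target
# features, a shared d_latent-dim space, and n_classes categories.
d_s, d_t, d_latent, n_classes = 4096, 300, 128, 10

f_s = nn.Sequential(nn.Linear(d_s, d_latent), nn.ReLU())  # source projector
f_t = nn.Sequential(nn.Linear(d_t, d_latent), nn.ReLU())  # target projector
clf = nn.Linear(d_latent, n_classes)                      # shared classifier

def mean_embedding_gap(zs, zt):
    # Squared distance between the domains' mean embeddings in latent space
    # (equivalently, a linear-kernel maximum mean discrepancy).
    return (zs.mean(dim=0) - zt.mean(dim=0)).pow(2).sum()

def loss(xs, ys, xt_lab, yt_lab, xt_unl, lam=1.0):
    zs, zt_lab, zt_unl = f_s(xs), f_t(xt_lab), f_t(xt_unl)
    # Supervised risk on labeled source data and the few labeled target data.
    sup = F.cross_entropy(clf(zs), ys) + F.cross_entropy(clf(zt_lab), yt_lab)
    # Alignment uses all target data, labeled and unlabeled.
    align = mean_embedding_gap(zs, torch.cat([zt_lab, zt_unl]))
    return sup + lam * align  # lam trades classification off against alignment
```

One design point the abstract hints at: because the unlabeled target pool enters only through the alignment term, it can be large without requiring any extra labels, which is what makes the method semi-supervised.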

Highlights

  • Traditional supervised learning theories [1] are based on two assumptions: 1) that the training and test data are from the same distribution [2], [3]; and 2) that sufficient labeled training data are available [4], [5].

  • Compared with all neural network baselines, joint mean embedding alignment (JMEA) performs best on almost all tasks (11 of 12), and its average accuracy is at least 2.5% higher than that of the baselines.

  • Except for domain adaptation with manifold alignment (DAMA), the baselines SHFA, generalized joint distribution adaptation (G-JDA), cross-domain landmarks selection (CDLS), domain adaptation by covariance matching (DACoM), transfer neural trees (TNT), and soft transfer network (STN) all achieve better mean performance than 1NN and SVMt, indicating that these methods can transfer knowledge from the source data to the target data.


Summary

Introduction

Traditional supervised learning theories [1] are based on two assumptions: 1) that the training and test data are drawn from the same distribution [2], [3]; and 2) that sufficient labeled training data are available [4], [5]. Domain adaptation (DA) relaxes these assumptions by transferring knowledge from a label-rich source domain to the target domain. However, it is not easy to find a source domain with the same feature space as the target domain of interest [17], [18], [19], [20], [21], [22]; the source and target domains might lie in different feature spaces. To tackle this issue, researchers have formulated a challenging problem: semi-supervised heterogeneous DA (SsHeDA) [23], [24], in which the source and target domains have different feature spaces and only unlabeled data plus a few labeled samples are available in the target domain. Although many practical SsHeDA algorithms have been proposed [25], [26], [27], [28], very little theoretical groundwork has been laid to reveal the nature of the SsHeDA problem or to explain why the current solutions work as they do [14].
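As a concrete illustration of the problem setup, the toy snippet below (with made-up dimensions and random placeholder data) shows what makes SsHeDA heterogeneous: the two domains' feature matrices have incompatible widths, and the target domain carries only a handful of labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: plentiful labeled data in one feature space
# (e.g., 4096-dim image features; all sizes here are illustrative).
Xs = rng.normal(size=(2000, 4096))
ys = rng.integers(0, 10, size=2000)

# Target domain: a *different* feature space (e.g., 300-dim text features),
# with only a few labeled samples and a large unlabeled pool.
Xt_lab = rng.normal(size=(30, 300))
yt_lab = rng.integers(0, 10, size=30)
Xt_unl = rng.normal(size=(1500, 300))

# Xs and Xt_* cannot be compared directly (4096 vs. 300 columns), so SsHeDA
# methods must learn mappings into a common space before transferring knowledge.
```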
