Abstract

Unsupervised domain adaptation aims to use labeled instances from a source domain to train a learning model that can classify unlabeled instances from a target domain as accurately as possible. The central challenge is that the source and target datasets follow different distributions, so a classification model trained on the source domain does not perform well on target domain data. Classic methods address this problem mainly by narrowing the distance between the source and target distributions. Those methods, however, are not optimal, since the nonlinear feature space may not match the kernel-based learning machine. In this paper, we design a new method, bi-adapt kernel learning (BAKL), that learns a domain-invariant kernel by transferring the source and target domains to each other simultaneously. Specifically, we derive new source and target domain kernel matrices according to Mercer's theorem. Domain-invariant kernel machines are then constructed by minimizing the approximation error between the newly generated kernel matrices and the ground-truth source domain kernel matrices. Experiments on benchmark text and object recognition tasks demonstrate that BAKL significantly improves classification accuracy compared to state-of-the-art methods.
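
To make the kernel-approximation idea concrete, below is a minimal Python (NumPy) sketch of measuring the approximation error between a regenerated source kernel matrix and the ground-truth one. The RBF base kernel, the toy data, and the Nystrom-style reconstruction through the target domain are illustrative assumptions standing in for, not reproducing, the paper's actual BAKL construction.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)
        sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
        return np.exp(-gamma * sq)

    # Toy source/target data with a simple covariate shift (illustrative only).
    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, size=(50, 5))   # labeled source instances
    Xt = rng.normal(0.5, 1.2, size=(60, 5))   # unlabeled target instances

    K_s = rbf_kernel(Xs, Xs)                  # ground-truth source kernel matrix
    K_st = rbf_kernel(Xs, Xt)                 # cross-domain kernel block

    # Hypothetical stand-in for the "newly generated" source kernel: reconstruct
    # the source Gram matrix through the target domain via a Nystrom-style
    # factorization K_s_new ~= K_st K_t^{-1} K_st^T, then measure the Frobenius
    # approximation error against the ground truth.
    K_t = rbf_kernel(Xt, Xt) + 1e-6 * np.eye(len(Xt))  # regularize for inversion
    K_s_new = K_st @ np.linalg.solve(K_t, K_st.T)

    approx_error = np.linalg.norm(K_s_new - K_s, ord="fro")
    print(f"Frobenius approximation error: {approx_error:.4f}")

In the full method, an error of this kind would presumably be minimized over a learned domain-invariant kernel rather than merely measured, as the abstract describes.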
