Abstract

Domain shift is defined as the mismatch between the marginal probability distributions of a source domain (training set) and a target domain (test set). A successful line of research focuses on deriving new source and target feature representations that reduce domain shift. This task can be modeled as semi-supervised domain adaptation. However, semi-supervised methods are prone to fail unless they jointly exploit the knowledge available in the labeled source, labeled target, and unlabeled target data. Here, we present a simple and effective Semi-Supervised Transfer Subspace (SSTS) method for domain adaptation. SSTS establishes pairwise constraints between the source and labeled target data, and exploits the global structure of the unlabeled data to build a domain-invariant subspace. After the domain shift is reduced by projecting both the source and target domains onto this subspace, any classifier can be trained on the source and tested on the target. Results on 49 cross-domain problems confirm that SSTS is a powerful mechanism for reducing domain shift. Furthermore, SSTS yields better classification accuracy than state-of-the-art domain adaptation methods.
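The abstract does not specify the SSTS objective, so the following is only an illustrative sketch of the generic pipeline it describes: learn a shared subspace from source and target features, project both domains onto it, then train any classifier on the projected source and evaluate it on the projected target. Plain PCA is used here as a hypothetical stand-in for the SSTS subspace learner, and the nearest-centroid classifier and toy data are likewise assumptions for illustration.

```python
# Illustrative sketch only: PCA stands in for the SSTS subspace learner.
import numpy as np

def fit_subspace(X, k):
    """Mean and top-k principal directions of X (rows are samples)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k].T                      # shapes (d,) and (d, k)

def project(X, mu, W):
    """Center by the shared mean and project onto the subspace."""
    return (X - mu) @ W

# Toy cross-domain data: the target is a shifted copy of the source.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 5))
Xs[:, 0] *= 3.0                              # make one direction dominant
ys = (Xs[:, 0] > 0).astype(int)
Xt = Xs + 0.5                                # simulated domain shift

# The subspace is learned from both domains; target labels are not needed.
mu, W = fit_subspace(np.vstack([Xs, Xt]), k=2)
Zs, Zt = project(Xs, mu, W), project(Xt, mu, W)

# "Any classifier" trained on the projected source: a nearest-centroid rule.
centroids = np.stack([Zs[ys == c].mean(axis=0) for c in (0, 1)])
dists = ((Zt[:, None, :] - centroids[None]) ** 2).sum(axis=2)
acc = (dists.argmin(axis=1) == ys).mean()    # accuracy on the target domain
```

Because the projection discards directions not shared by the two domains, a classifier fit only on the projected source transfers to the projected target; the same train-on-source, test-on-target protocol is what the abstract evaluates on its 49 cross-domain problems.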
