Abstract

Domain shift is defined as the mismatch between the marginal probability distributions of a source domain (training set) and a target domain (test set). A successful line of research has focused on deriving new source and target feature representations that reduce domain shift. This task can be modeled as semi-supervised domain adaptation. However, without jointly exploiting the knowledge available in the labeled source, labeled target, and unlabeled target data, semi-supervised methods are prone to fail. Here, we present a simple and effective Semi-Supervised Transfer Subspace (SSTS) method for domain adaptation. SSTS establishes pairwise constraints between the source and labeled target data and, in addition, exploits the global structure of the unlabeled data to build a domain-invariant subspace. Once the domain shift is reduced by projecting both the source and target domains onto this subspace, any classifier can be trained on the source and tested on the target. Results on 49 cross-domain problems confirm that SSTS is a powerful mechanism for reducing domain shift. Furthermore, SSTS yields better classification accuracy than state-of-the-art domain adaptation methods.
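The abstract's pipeline (learn a shared subspace, project both domains onto it, then train any classifier on the projected source) can be sketched as follows. This is only an illustrative stand-in, not the SSTS optimization itself: the synthetic data, the plain-PCA subspace, and the nearest-centroid classifier are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes in the source; the target is a shifted copy of the
# source, simulating domain shift. Purely illustrative, not SSTS.
Xs = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
ys = np.array([0] * 50 + [1] * 50)
Xt = Xs + rng.normal(1.0, 0.2, Xs.shape)  # target = source + domain shift
yt = ys

# Learn a shared subspace from the pooled source + target data via PCA
# (SVD on the centered data). SSTS would instead solve an objective with
# pairwise constraints; PCA is a hedged placeholder for that step.
X = np.vstack([Xs, Xt])
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:2].T  # top-2 principal directions span the shared subspace

Zs = (Xs - mean) @ W  # projected source
Zt = (Xt - mean) @ W  # projected target

# "Any classifier can be trained on the source": a nearest-centroid rule here.
centroids = np.array([Zs[ys == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Zt[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == yt).mean()
print(f"target accuracy: {acc:.2f}")
```

Because the source and target live in one common subspace after projection, the classifier trained on source labels transfers directly to the target, which is the key property the abstract claims for SSTS.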
