Abstract
Domain adaptation aims to adapt a model trained on a source domain to a different but related target domain. Currently, prevailing methods for domain adaptation rely on either instance reweighting or feature transformation. Unfortunately, instance reweighting has difficulty estimating the sample weights as the dimension increases, whereas feature transformation sometimes fails to make the transformed source and target distributions similar when the cross-domain discrepancy is large. To overcome the shortcomings of both methodologies, in this article, we model the unsupervised domain adaptation problem under the generalized covariate shift assumption and adapt the source distribution to the target distribution in a subspace by applying a distribution adaptation function. Accordingly, we propose two frameworks: Bregman-divergence-embedded structural risk minimization (BSRM) and joint structural risk minimization (JSRM). In the proposed frameworks, the subspace distribution adaptation function and the target prediction model are jointly learned. Under certain instantiations, convex optimization problems are derived from both frameworks. Experimental results on synthetic and real-world text and image data sets show that the proposed methods outperform state-of-the-art domain adaptation techniques with statistical significance.
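For intuition only, the sketch below illustrates the general idea of subspace distribution adaptation with standard tools. It is not the paper's BSRM or JSRM formulation: those frameworks learn the adaptation function and the prediction model jointly, whereas this sketch performs the steps sequentially, and the synthetic data, subspace dimension, and CORAL-style second-order alignment used here are all assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical synthetic data; the paper's experiments use text/image benchmarks.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 20))            # labeled source features
ys = (Xs[:, 0] + 0.5 * Xs[:, 1] > 0).astype(int)      # source labels
Xt = rng.normal(0.5, 1.5, size=(200, 20))             # shifted, unlabeled target features

# Step 1: map both domains into a shared low-dimensional subspace.
pca = PCA(n_components=5).fit(np.vstack([Xs, Xt]))
Zs, Zt = pca.transform(Xs), pca.transform(Xt)

# Step 2: a simple distribution adaptation function in the subspace:
# whiten the source second-order statistics and re-color them with the
# target statistics, so the adapted source distribution resembles the target.
def adapt(source, target, eps=1e-6):
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def msqrt(m, inv=False):
        # (inverse) matrix square root via eigendecomposition
        w, v = np.linalg.eigh(m)
        w = np.clip(w, eps, None)
        d = 1.0 / np.sqrt(w) if inv else np.sqrt(w)
        return (v * d) @ v.T

    centered = source - source.mean(axis=0)
    return centered @ msqrt(cs, inv=True) @ msqrt(ct) + target.mean(axis=0)

Zs_adapted = adapt(Zs, Zt)

# Step 3: train the prediction model on the adapted source data and
# apply it directly to the unlabeled target domain.
clf = LogisticRegression(max_iter=1000).fit(Zs_adapted, ys)
target_predictions = clf.predict(Zt)
print(target_predictions[:10])
```

The key design point the abstract emphasizes is that, unlike this two-stage sketch, BSRM and JSRM couple the subspace adaptation function and the target predictor in a single objective, which under certain instantiations remains a convex problem.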