Abstract

Domain adaptation addresses the prediction problem in which the source and target data are sampled from different but related probability distributions. The key challenge lies in properly matching the distributions and learning a general feature representation for training the prediction model. In this article, we introduce a Domain Invariant and Agnostic Adaptation (DIAA) solution, which matches the source and target joint distributions and simultaneously aligns the joint distribution of features and domain labels with the product of its marginals. In particular, DIAA matches and aligns the distributions via a feature transformation, and measures the two kinds of distribution disparity uniformly under the Kullback–Leibler (KL) divergence. To approximate the two corresponding KL divergences from observed samples, we derive a linear-regression-like technique that fits linear models to the respective density-ratio functions under the quadratic loss. With the estimated KL divergences, learning the DIAA feature transformation is formulated as solving a Grassmannian minimization problem. Experiments on text and image classification tasks of varied nature demonstrate the effectiveness of our approach.
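
The estimator named in the abstract can be made concrete with a small sketch. What follows is not the paper's implementation; it is a minimal illustration, under our own assumptions, of the general recipe the abstract describes: fit a linear model to a density-ratio function under the quadratic loss (least-squares density-ratio estimation, as in uLSIF), then plug the fitted ratio into the KL divergence. The Gaussian-kernel basis and all names (`kl_via_ratio`, `n_basis`, `sigma`, `lam`) are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian_basis(X, C, sigma):
    """Basis functions phi_j(x) = exp(-||x - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kl_via_ratio(Xp, Xq, n_basis=100, sigma=1.0, lam=1e-3, seed=0):
    """Approximate KL(p || q) by fitting r(x) = alpha^T phi(x) to p(x)/q(x)
    under the quadratic loss, then averaging log r over the p-sample."""
    rng = np.random.default_rng(seed)
    # Kernel centres drawn from the p-sample.
    C = Xp[rng.choice(len(Xp), size=min(n_basis, len(Xp)), replace=False)]
    Phi_p = gaussian_basis(Xp, C, sigma)  # basis evaluated on samples from p
    Phi_q = gaussian_basis(Xq, C, sigma)  # basis evaluated on samples from q
    # Quadratic loss in alpha: (1/2) alpha^T H alpha - h^T alpha, with
    # H = E_q[phi phi^T] and h = E_p[phi]; a ridge term keeps H invertible.
    H = Phi_q.T @ Phi_q / len(Xq)
    h = Phi_p.mean(axis=0)
    alpha = np.linalg.solve(H + lam * np.eye(len(C)), h)
    # KL(p || q) = E_p[log p(x)/q(x)], with the fitted ratio plugged in.
    r_p = np.clip(Phi_p @ alpha, 1e-8, None)
    return float(np.mean(np.log(r_p)))

# Sanity check on two shifted Gaussians; true KL = 0.5 * ||mu||^2 = 0.25.
rng = np.random.default_rng(1)
Xp = rng.normal(0.0, 1.0, size=(2000, 2))
Xq = rng.normal(0.5, 1.0, size=(2000, 2))
print(kl_via_ratio(Xp, Xq))  # roughly 0.25, up to estimation error
```

In the full method, two such estimates (the source-vs-target joint disparity and the feature/domain-label dependence) would enter a single objective minimized over a Grassmann manifold of feature transformations; that optimization step is not sketched here.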
