Abstract

When the training and test data are drawn from similar but different distributions, transfer learning (TL) can be exploited to learn a consistent distribution for knowledge transfer. To reduce the distribution difference, recent transfer learning approaches typically build a latent feature space and learn multiple high-level concepts within it to model a latent shared structure. However, exploiting only the latent information in a single space neglects the information that resides in other latent feature spaces, and this neglected information may also help model the shared structures that serve as bridges. In this paper, we propose Multiple Latent Spaces Learning (MLSL), a novel approach that mines the rich latent information in multiple latent spaces to construct one or more shared bridges across domains by learning different high-level concepts. Our strategy uncovers latent information in spaces that previous methods ignore and uses it to build knowledge-transfer bridges. Compared with TL methods that learn only a single latent space, it is better suited to real-world scenarios and makes fuller use of the data. In addition, an iterative algorithm is developed to solve the resulting optimization problem. Finally, systematic experiments on benchmark data sets demonstrate the superiority of MLSL.
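To make the core idea concrete, the following is a minimal illustrative sketch, not the authors' exact formulation: it builds several shared latent spaces by jointly factorizing source and target data with NMF at different ranks, trains a classifier in each space, and averages the predictions. The NMF choice, the rank set, and all function names are assumptions introduced for illustration.

```python
# Illustrative sketch of multiple-latent-space transfer learning.
# Assumption: non-negative feature matrices (as NMF requires).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

def mlsl_sketch(Xs, ys, Xt, ranks=(10, 20, 30), seed=0):
    """Xs: source features, ys: source labels, Xt: target features.
    Returns predicted labels for the target domain."""
    X = np.vstack([Xs, Xt])  # share one factorization across both domains
    probas = []
    for k in ranks:  # each rank induces a different latent space
        W = NMF(n_components=k, init='nndsvda', max_iter=500,
                random_state=seed).fit_transform(X)
        Ws, Wt = W[:len(Xs)], W[len(Xs):]  # split codes back by domain
        clf = LogisticRegression(max_iter=1000).fit(Ws, ys)
        probas.append(clf.predict_proba(Wt))
    # Combine the bridges: average predictions over all latent spaces.
    return np.mean(probas, axis=0).argmax(axis=1)
```

In this toy version, each factorization plays the role of one latent space (bridge); averaging over several of them stands in for the paper's joint learning of multiple high-level concepts.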
