Transfer learning leverages knowledge from source domains to complete learning tasks in target domains, where the data distributions of the source and target domains are related but differ with respect to the original features. To tackle the challenge of differing data distributions, previous methods mine high-level concepts (e.g., feature clusters) from the original features, which has been shown to be suitable for classification. The general strategy of these approaches is to use the identical concepts, the synonymous concepts, or both as shared concepts to build a bridge between the source and target domains. Beyond the shared concepts, some methods also exploit the different concepts for model training. Specifically, these methods assume that the identical concepts (e.g., feature clusters) in different domains can be mapped to the same example classes. However, ambiguous concepts may exist across domains and mislead classification in the target domains. Therefore, a general transfer learning framework is needed that can simultaneously exploit four kinds of concepts, namely the identical concepts, the synonymous concepts, the different concepts, and the ambiguous concepts, for cross-domain classification. In this paper, we present a novel method, Quadruple Transfer Learning (QTL), which models these four kinds of concepts jointly to fit different situations in the data distributions. In addition, an iterative algorithm with a convergence guarantee, based on non-negative matrix tri-factorization techniques, is presented to solve the optimization problem. Finally, systematic experiments demonstrate that QTL is more effective than all compared baselines.
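The abstract only names the optimization machinery, so as context: non-negative matrix tri-factorization decomposes a data matrix X into three non-negative factors, X ≈ F S Gᵀ, where the column factor G can be read as the feature clusters ("concepts") mentioned above. Below is a minimal sketch of the standard multiplicative-update iteration for plain NMTF; the function name `nmtf` and all update rules here are the generic textbook form, not QTL's actual objective or its specific updates, which are not given in the abstract.

```python
import numpy as np

def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Generic non-negative matrix tri-factorization X ~ F S G^T
    via multiplicative updates (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k1))   # row (example) cluster indicators
    S = rng.random((k1, k2))  # association between row and column clusters
    G = rng.random((n, k2))   # column (feature) cluster indicators: "concepts"
    for _ in range(n_iter):
        # Elementwise multiplicative updates keep all factors non-negative;
        # eps guards against division by zero.
        F *= (X @ G @ S.T) / (F @ S @ (G.T @ G) @ S.T + eps)
        S *= (F.T @ X @ G) / ((F.T @ F) @ S @ (G.T @ G) + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ (F.T @ F) @ S + eps)
    return F, S, G

# Toy usage: factor a small non-negative matrix and measure reconstruction error.
X = np.random.default_rng(1).random((20, 12))
F, S, G = nmtf(X, k1=4, k2=3)
err = np.linalg.norm(X - F @ S @ G.T)
```

In a transfer-learning setting such as QTL's, one would typically tie (fully or partially) the concept factors of the source- and target-domain factorizations to realize the shared versus domain-specific concepts; how exactly the four concept kinds are coupled is the paper's contribution and is not reproduced here.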