Abstract

Heterogeneous transfer learning has been proposed as a learning strategy that improves performance in a target domain by leveraging data from heterogeneous source domains whose feature spaces may differ from the target's. To connect two different feature spaces, a common technique is to bridge them using co-occurrence data. For example, annotated images can be used to build a feature mapping from words to image features, which is then applied to text-to-image knowledge transfer. In practice, however, such co-occurrence data often come from the Web, e.g., Flickr, and are generated by users, so they can be sparse and contain personal biases; models built directly on them may fail to provide a reliable bridge. To address these problems, in this paper we propose a novel algorithm named MixedTransfer. It is composed of three components: a cross-domain harmonic function that avoids personal biases, a joint transition probability graph over mixed instances and features that models the heterogeneous transfer learning problem, and a random walk process that simulates label propagation on the graph and alleviates the data sparsity problem. We conduct experiments on 171 real-world tasks, showing that the proposed approach outperforms four state-of-the-art heterogeneous transfer learning algorithms.
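To make the third component concrete, the sketch below illustrates generic label propagation via a random walk with restart on a joint transition probability graph over mixed instance and feature nodes. It is not the authors' MixedTransfer implementation; the toy graph, the node layout, and the restart parameter `alpha` are all illustrative assumptions.

```python
# Illustrative sketch only: random-walk label propagation on a joint
# instance/feature transition graph. All data and parameters are hypothetical.
import numpy as np


def row_normalize(W):
    """Turn a non-negative affinity matrix into a row-stochastic transition matrix."""
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # guard against isolated nodes
    return W / row_sums


def random_walk_propagation(P, Y0, alpha=0.85, n_iters=100, tol=1e-6):
    """Iterate F <- alpha * P @ F + (1 - alpha) * Y0 until convergence.

    P  : (n, n) row-stochastic transition matrix over mixed instance/feature nodes
    Y0 : (n, c) initial label scores (rows of zeros for unlabeled nodes)
    """
    F = Y0.copy()
    for _ in range(n_iters):
        F_next = alpha * P @ F + (1 - alpha) * Y0
        if np.abs(F_next - F).max() < tol:
            break
        F = F_next
    return F


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy joint graph: 4 target instances, 3 shared features, 3 labeled source instances.
    n = 10
    W = rng.random((n, n))
    W = (W + W.T) / 2                      # symmetric affinities (e.g., co-occurrence strengths)
    np.fill_diagonal(W, 0.0)
    P = row_normalize(W)

    # Two classes; only the source-instance nodes (indices 7-9) carry labels.
    Y0 = np.zeros((n, 2))
    Y0[7, 0] = Y0[8, 0] = 1.0
    Y0[9, 1] = 1.0

    F = random_walk_propagation(P, Y0)
    print("Predicted classes for target instances:", F[:4].argmax(axis=1))
```

Because the walk spreads label mass through feature nodes as well as instance nodes, sparse co-occurrence links can still contribute indirectly, which is the intuition behind using a random walk to mitigate sparsity.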
