Abstract

Transfer learning can address learning tasks on unlabeled data in a target domain by leveraging abundant labeled data from a different but related source domain. A core issue in transfer learning is learning a shared feature space in which the distributions of the data from the two domains are matched; this learning process is referred to as transfer representation learning (TRL). Feature transformation methods are crucial to the success of TRL. The most commonly used feature transformation in TRL is a kernel-based nonlinear mapping to a high-dimensional space, followed by linear dimensionality reduction. However, kernel functions lack interpretability and are difficult to select. To this end, this article proposes a more intuitive and interpretable method, called TRL with TSK-FS (TRL-TSK-FS), which combines the TSK fuzzy system (TSK-FS) with transfer learning. Specifically, TRL-TSK-FS realizes TRL from two aspects. On one hand, the data in the source and target domains are transformed into a fuzzy feature space in which the distribution distance between the two domains is minimized. On the other hand, the discriminant information and geometric properties of the data are preserved by linear discriminant analysis and principal component analysis. A further advantage is that the nonlinear transformation is realized by constructing a fuzzy mapping with the antecedent part of the TSK-FS rather than with kernel functions, which are difficult to select. Extensive experiments on text and image datasets demonstrate the superiority of the proposed method.
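
To make the two ingredients of the abstract concrete, the sketch below illustrates (1) a fuzzy mapping built from Gaussian antecedent membership functions in place of a kernel function, and (2) a simple distribution-distance estimate (a linear maximum mean discrepancy) between the mapped source and target data. This is a minimal illustration, not the authors' implementation: the rule count, the use of k-means to place rule centers, the shared spread `sigma`, and the toy data are all assumptions made for the example.

```python
# Minimal sketch of a TSK-FS antecedent-based fuzzy mapping plus a distribution
# distance between mapped source and target data. Rule count, k-means centers,
# the shared spread sigma, and the toy data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_mapping(X, centers, sigma):
    """Map samples into the fuzzy feature space: one normalized firing
    strength per fuzzy rule, using Gaussian antecedent membership functions."""
    # Squared distance from each sample to each rule center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    firing = np.exp(-d2 / (2.0 * sigma ** 2))                     # rule firing strengths
    return firing / (firing.sum(axis=1, keepdims=True) + 1e-12)   # normalize per sample

def mmd2(Zs, Zt):
    """Squared (linear) MMD between source and target in the mapped space."""
    return np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)

# Toy source/target data with a distribution shift
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 5))   # source domain
Xt = rng.normal(0.5, 1.2, size=(150, 5))   # target domain

# Antecedent construction: cluster the pooled data to obtain rule centers
n_rules = 8
centers = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(
    np.vstack([Xs, Xt])).cluster_centers_

Zs = fuzzy_mapping(Xs, centers, sigma=1.0)
Zt = fuzzy_mapping(Xt, centers, sigma=1.0)
print("MMD^2 in the fuzzy feature space:", mmd2(Zs, Zt))
```

In the paper's setting, this distance term would be minimized jointly with the discriminant (LDA) and geometric (PCA) criteria when learning the projection applied to the fuzzy features; the sketch only shows how the antecedent-based mapping can stand in for a kernel.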
