Abstract

Supervised learning algorithms aim to discover hidden statistical patterns in data, under the assumption that the training data and the test data are drawn from the same distribution. Traditional supervised machine learning faces two challenges. One is that, in the real world, the test data distribution often differs substantially from the training data distribution; the other is that there is usually too little labeled data to train a machine learning model. In such cases, transfer learning, which emphasizes transferring previous knowledge from different but related domains and tasks, is recommended to deal with these problems. Traditional transfer learning methods care more about the data itself than about the task. In fact, no single universal feature representation can perfectly benefit model training, but different feature representations can uncover independent latent knowledge from the original data. In this paper, we propose an instance-based transfer learning method: a weighted ensemble transfer learning framework with multiple feature representations. In our work, mutual information is applied as the weighting schema to measure the weight of each feature representation. Extensive experiments have been conducted on three facial expression recognition data sets: JAFFE, KDEF and FERG-DB. The experimental results demonstrate that our approach achieves better performance than both the traditional transfer learning method and the non-transfer learning method.
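The core weighting idea described above can be illustrated with a minimal sketch: estimate the mutual information between each (discretized) feature representation and the class labels, then normalize those scores into ensemble weights. This is an illustrative toy example, not the paper's implementation; the function and variable names are our own, and the data is synthetic.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between two
    discrete sequences, computed from joint and marginal counts."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), rewritten with counts
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

# Toy data: class labels and two discretized feature representations
# (in practice each would come from a different feature extractor).
labels = [0, 0, 1, 1, 0, 1, 0, 1]
feat_a = [0, 0, 1, 1, 0, 1, 0, 1]   # perfectly aligned with labels
feat_b = [0, 1, 0, 1, 0, 1, 0, 1]   # only weakly related to labels

# Mutual information of each representation with the labels,
# normalized into ensemble weights.
scores = [mutual_information(feat_a, labels),
          mutual_information(feat_b, labels)]
total = sum(scores)
weights = [s / total for s in scores]
```

In the full framework, each feature representation's base learner would then contribute to the ensemble prediction in proportion to its weight, so that more label-informative representations dominate the final vote.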
