Abstract

Transfer learning transfers knowledge learned in source domains to target domains that have fewer training data. Multitask learning learns multiple tasks simultaneously and exploits the relationships among those tasks. Both learning paradigms can be combined with multiview learning, which exploits the consistency among diverse views of the data. In this chapter, we introduce four multiview transfer learning methods and three multiview multitask learning methods. We review research on multiview transfer learning under the large-margin framework, discuss multiview discriminant transfer learning in detail, and show how to adapt AdaBoost to multiview transfer learning. The three multiview multitask learning methods focus on the structures shared between tasks and views. The most natural approach represents these relationships as a bipartite graph and optimizes the resulting objective function with an iterative algorithm. Another method adds a regularization term that enforces consistency across views. More generally, a convex shared-structure learning algorithm introduces structure parameters through which information is shared. As supplements, we briefly mention other methods, including multi-transfer, multitask multiview discriminant analysis, and clustering.
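To make the view-consistency regularization idea concrete, the following is a minimal sketch (not the chapter's actual formulation; the function names, linear-predictor form, and squared-disagreement penalty are illustrative assumptions) of a co-regularization term that penalizes disagreement between two views' predictions on the same examples:

```python
import numpy as np

def view_consistency_penalty(w1, w2, X1, X2, lam=1.0):
    """Illustrative view-consistency regularizer (hypothetical sketch).

    X1, X2: feature matrices for the two views of the same n examples.
    w1, w2: linear predictor weights for each view.
    Returns lam times the mean squared disagreement between the two
    views' predictions; adding this to a supervised loss encourages
    the per-view models to agree.
    """
    diff = X1 @ w1 - X2 @ w2          # per-example prediction gap
    return lam * float(diff @ diff) / len(diff)

# Identical predictions across views incur zero penalty.
X1 = np.eye(2)
X2 = np.eye(2)
w_agree = np.array([1.0, 2.0])
print(view_consistency_penalty(w_agree, w_agree, X1, X2))  # 0.0

# Disagreeing views incur a positive penalty.
w_disagree = np.array([1.0, 0.0])
print(view_consistency_penalty(w_agree, w_disagree, X1, X2))  # 2.0
```

In practice such a term is summed with each view's empirical loss, so minimization trades off per-view accuracy against cross-view agreement.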
