Abstract

Multi-Task Learning (MTL) seeks to leverage the learning processes of several tasks by solving them simultaneously to arrive at better models. This advantage is obtained by coupling the tasks together so that paths are created through which information can be shared among them. While deep learning models have been successfully applied to MTL in different fields, the performance of deep approaches often depends on the availability of large amounts of data to fit complex models with many parameters, which may not always be feasible; moreover, deep models may lack some of the advantages that other approaches offer. Kernel methods, such as Support Vector Machines or Gaussian Processes, offer characteristics such as better generalization ability or the availability of uncertainty estimates, which may make them more suitable for small to medium size datasets. As a consequence, kernel-based MTL methods stand out among these alternatives to deep models, and there is also a rich literature on them. In this paper we review these kernel-based multi-task approaches, group them according to a taxonomy we propose, link some of them to foundational work in machine learning, and comment on datasets commonly used in their study and on relevant applications that use them.
