Abstract

Multi-task learning is a machine learning approach that learns multiple tasks jointly while exploiting commonalities and differences across the tasks. The tasks share a learned representation, so what is learned for one task can help the other tasks be learned better. Most existing multi-task learning methods adopt a deep neural network as the classifier for each task. However, a deep neural network can exploit its strong curve-fitting capability to achieve high accuracy on the training data even when the learned representation is not good enough, which contradicts the purpose of multi-task learning. In this paper, we propose a framework named multi-task capsule (MT-Capsule), which improves multi-task learning with capsule networks. A capsule network is a new architecture that can intelligently model part-whole relationships to build viewpoint-invariant knowledge and automatically extend the learned knowledge to new scenarios. Experimental results on large real-world datasets show that MT-Capsule significantly outperforms state-of-the-art methods.
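To make the shared-representation idea concrete, the following is a minimal sketch of hard parameter sharing with a capsule-style encoder: one shared encoder produces squashed capsule vectors from which every task-specific head reads. This is an illustration under stated assumptions, not the paper's actual MT-Capsule design; the squash function, SharedCapsuleEncoder, the two task heads, and all dimensions are hypothetical choices for the example.

# Minimal sketch (assumed architecture, not the paper's exact MT-Capsule):
# a shared capsule-style encoder with one lightweight head per task.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(v, dim=-1, eps=1e-8):
    # Capsule squash nonlinearity: preserves vector orientation while
    # mapping vector length into [0, 1).
    norm_sq = (v * v).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

class SharedCapsuleEncoder(nn.Module):
    # Hypothetical shared encoder: projects the input into a set of
    # capsule vectors that all tasks use as their common representation.
    def __init__(self, in_dim, num_caps=8, caps_dim=16):
        super().__init__()
        self.proj = nn.Linear(in_dim, num_caps * caps_dim)
        self.num_caps, self.caps_dim = num_caps, caps_dim

    def forward(self, x):
        u = self.proj(x).view(-1, self.num_caps, self.caps_dim)
        return squash(u)  # shared capsule representation for all tasks

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim, task_classes):
        super().__init__()
        self.encoder = SharedCapsuleEncoder(in_dim)
        flat = self.encoder.num_caps * self.encoder.caps_dim
        # One linear head per task on top of the shared capsules.
        self.heads = nn.ModuleList(nn.Linear(flat, c) for c in task_classes)

    def forward(self, x):
        caps = self.encoder(x).flatten(1)
        return [head(caps) for head in self.heads]

# Usage: two hypothetical tasks (binary and 5-way) over 300-d inputs;
# the joint loss trains the shared encoder on both tasks at once.
model = MultiTaskModel(in_dim=300, task_classes=[2, 5])
logits_a, logits_b = model(torch.randn(4, 300))
loss = F.cross_entropy(logits_a, torch.randint(2, (4,))) \
     + F.cross_entropy(logits_b, torch.randint(5, (4,)))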
