Abstract

Multi-task learning (MTL) improves learning efficiency over its single-task counterpart by performing multiple tasks at the same time. By its nature, it can achieve better generalization and alleviate overfitting. However, it does not efficiently support resource-aware inference from a single trained architecture. To address this issue, we aim to build a learning framework that minimizes the cost of inferring tasks under different memory budgets. To this end, we propose a multi-path network with a self-auxiliary learning strategy. The multi-path structure contains task-specific paths in a backbone network, where a lower-level path predicts earlier with a smaller number of parameters. To alleviate the performance degradation caused by these earlier predictions, a self-auxiliary learning strategy is presented. The self-auxiliary tasks convey task-specific knowledge to the main tasks to compensate for the performance loss. We evaluate the proposed method on an extensive set of multi-task learning scenarios, including multiple-task learning, hierarchical learning, and curriculum learning. The proposed method outperforms existing multi-task learning competitors in most scenarios by a margin of about 1% ~ 2% accuracy on average while consuming 30% ~ 60% less computational cost.
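The abstract gives no implementation details, so the following is only a minimal sketch of how a multi-path backbone with early, low-cost task exits and a self-auxiliary head could be wired up. It assumes a PyTorch-style CNN with two stages and two tasks; the module names (MultiPathNet, early_head, late_head, aux_head), layer sizes, and the auxiliary loss weight are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class MultiPathNet(nn.Module):
    """Backbone with task-specific exit paths at different depths (sketch).
    A lower-level path predicts earlier, using fewer parameters."""
    def __init__(self, num_classes_per_task=(10, 10)):
        super().__init__()
        # Shared backbone split into two stages.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1))
        # Early (low-cost) exit for task 0, late exit for task 1.
        self.early_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(32, num_classes_per_task[0]))
        self.late_head = nn.Sequential(nn.Flatten(),
                                       nn.Linear(64, num_classes_per_task[1]))
        # Self-auxiliary head: mirrors the late task at the early stage so
        # task-specific knowledge can be injected into the shared features.
        self.aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, num_classes_per_task[1]))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return self.early_head(f1), self.late_head(f2), self.aux_head(f1)

# Training sketch: the auxiliary prediction is supervised with the late task's
# labels, weighted by an assumed coefficient (0.4 here, purely illustrative).
model = MultiPathNet()
x = torch.randn(4, 3, 32, 32)
y0 = torch.randint(0, 10, (4,))
y1 = torch.randint(0, 10, (4,))
out_early, out_late, out_aux = model(x)
ce = nn.CrossEntropyLoss()
loss = ce(out_early, y0) + ce(out_late, y1) + 0.4 * ce(out_aux, y1)
loss.backward()
```

At inference time, a deployment under a tight memory budget would run only stage1 and the early head, while a larger budget would run the full backbone; the auxiliary head is used only during training.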
