Abstract

In machine learning, the relatedness across multiple tasks is usually complex and entangled. Due to dataset bias, the relatedness among tasks may be distorted and mislead the training of models with strong learning capacity, such as multi-task neural networks. In this paper, we propose Relatedness Refinement Multi-Task Deep Learning (RRMTDL), which introduces adversarial learning into a multi-task deep neural network to tackle this problem. The RRMTDL model restrains tasks whose relatedness is misleading via adversarial training and extracts information shared across tasks whose relatedness is valuable. With RRMTDL, multi-task deep learning can enhance the task-specific representations of the major tasks by excluding misleading relatedness. We design experiments with various combinations of task relatedness to validate the proposed model. Experimental results show that the RRMTDL model effectively refines task relatedness and clearly outperforms other multi-task deep learning models on datasets with entangled task labels.
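The abstract does not specify the mechanism by which adversarial training restrains a misleadingly related task. One common way to realize this kind of adversarial objective in a shared-encoder multi-task network is a gradient reversal layer (GRL): the forward pass is the identity, but gradients flowing back from the misleading task's head are negated (and scaled by a coefficient λ), so the shared encoder is pushed to discard features that serve that task. The sketch below is a hypothetical illustration of that idea, not the authors' implementation; the class name and the λ parameter are assumptions.

```python
import numpy as np

class GradientReversal:
    """Hypothetical gradient reversal layer (GRL) sketch.

    Forward pass: identity, so the misleading task's head still receives
    the shared features. Backward pass: the gradient is multiplied by
    -lam, so the shared encoder is updated *against* that task, which
    suppresses features driven by misleading relatedness.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # strength of the adversarial signal (assumed hyperparameter)

    def forward(self, x):
        # Identity: features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse and scale the gradient flowing back to the encoder.
        return -self.lam * grad_output


# Minimal usage: forward is the identity, backward flips the sign.
grl = GradientReversal(lam=0.5)
features = np.array([1.0, 2.0])
print(grl.forward(features))            # unchanged features
print(grl.backward(np.array([0.3, -0.1])))  # reversed, scaled gradient
```

In such a setup, heads for tasks with valuable relatedness would attach to the shared encoder directly, while the head for a misleadingly related task would attach through the GRL, so one backward pass simultaneously learns the useful tasks and unlearns the misleading one.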
