Abstract

Multi-task learning, a branch of deep learning, has received extensive attention and in-depth research. However, difficult problems remain, such as unclear feature sharing, difficulty in distinguishing related tasks, and overly complex network structures. Therefore, a multi-task learning approach based on hybrid sharing and network optimization is presented. First, training data are fed into a hard parameter sharing network for hybrid training without distinguishing tasks; the similarity of tasks is then measured according to the gradient changes of each task in the shared network layers. Second, similar tasks are placed in the same group, which is represented by a hard parameter sharing network, while weakly correlated or markedly different tasks are placed in different groups, which are characterized by a soft parameter sharing network. In addition, a new network training method combining hybrid and alternating training is given, so as to take full advantage of both task-level and feature-level approaches. Third, according to the differences among the features extracted from the shared layers and the gradient changes in the deep layers, the relevant activation values are adjusted and the network is optimized, which not only maintains the conciseness of the network structure but also helps to solve the data imbalance problem in multi-task learning. Finally, the feasibility and effectiveness of this approach are verified on the MNIST data set and on the iris and balance data sets from the UCI repository.
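
The abstract does not specify the similarity metric used on the shared-layer gradients, so the following is only a minimal sketch of one plausible reading, assuming PyTorch, a cosine-similarity criterion, and an illustrative grouping threshold; the network, task heads, and data below are toy placeholders, not the paper's setup.

```python
# Sketch: measure task similarity from gradients on the shared layers of a
# hard parameter sharing network, then group tasks by a similarity threshold.
# Assumptions (not from the paper): cosine similarity, threshold 0.3, toy data.
import torch
import torch.nn as nn

# Hypothetical hard-parameter-sharing network: one shared trunk, one head per task.
shared = nn.Sequential(nn.Linear(4, 16), nn.ReLU())
heads = nn.ModuleList([nn.Linear(16, 3) for _ in range(3)])  # 3 tasks

def shared_grad(task_id, x, y):
    """Flattened gradient of one task's loss w.r.t. the shared layers."""
    loss = nn.functional.cross_entropy(heads[task_id](shared(x)), y)
    grads = torch.autograd.grad(loss, shared.parameters())
    return torch.cat([g.flatten() for g in grads])

# Toy per-task batches standing in for the real training data.
batches = [(torch.randn(8, 4), torch.randint(0, 3, (8,))) for _ in range(3)]
g = [shared_grad(t, *batches[t]) for t in range(3)]

# Pairwise cosine similarity of shared-layer gradients; tasks whose gradients
# point in similar directions are treated as related and grouped together.
threshold = 0.3  # illustrative cut-off, not taken from the paper
for i in range(3):
    for j in range(i + 1, 3):
        sim = torch.cosine_similarity(g[i], g[j], dim=0).item()
        label = "same group (hard sharing)" if sim > threshold else "different groups (soft sharing)"
        print(f"tasks {i},{j}: similarity={sim:+.3f} -> {label}")
```

In this reading, high-similarity pairs would share one hard parameter sharing trunk, while the remaining groups would be connected through a soft parameter sharing scheme, matching the grouping rule described in the abstract.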
