Abstract
Deep multitask learning (MTL) shares beneficial knowledge across participating tasks, alleviating the impact of extreme learning conditions, such as data scarcity, on their performance. In practice, tasks drawn from different domain sources often differ in complexity and input size, for example, in the joint learning of computer vision tasks on RGB and grayscale images. To accommodate these differences, each task's network should be designed with an appropriate representational capacity and with layers of corresponding widths. Nevertheless, most state-of-the-art methods pay little attention to such situations and fail to handle these disparities. To cope with heterogeneous network designs across tasks, this article presents a distributed knowledge-sharing framework called tensor ring multitask learning (TRMTL), in which knowledge sharing is decoupled from the original weight matrices. TRMTL is flexible: it not only shares knowledge across heterogeneous networks but also jointly learns tasks with varied input sizes, significantly improving the performance of data-insufficient tasks. Comprehensive experiments on challenging datasets empirically validate the effectiveness, efficiency, and flexibility of TRMTL in dealing with these disparities in MTL.
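The underlying idea of decoupling knowledge sharing from the weight matrices can be pictured as follows: each layer's weight is represented by a ring of small tensor cores, and tasks share a chosen subset of those cores while keeping the remaining cores private. The snippet below is a minimal NumPy sketch of this idea under illustrative assumptions; the mode sizes, ranks, sharing pattern, and function names are not taken from the paper's implementation.

```python
import numpy as np

def tr_reconstruct(cores):
    """Contract tensor-ring cores G_k of shape (r_k, n_k, r_{k+1}) into the full tensor.

    The last rank wraps around to the first one (r_{d+1} = r_1), closing the ring.
    """
    out = cores[0]                          # shape (r_1, n_1, r_2)
    for core in cores[1:]:
        # chain the rank dimension and merge the mode indices
        out = np.einsum('anb,bmc->anmc', out, core)
        out = out.reshape(out.shape[0], -1, out.shape[-1])
    # close the ring by tracing over the first/last rank dimension
    return np.einsum('ana->n', out)         # flat vector of length prod(n_k)

def build_task_weight(shared_cores, private_cores, pattern, shape):
    """Assemble one task's layer weight from shared and task-private TR cores.

    pattern[k] is True where core k is shared across tasks; the per-core sharing
    pattern is the design choice that lets heterogeneous networks exchange knowledge.
    """
    cores, s, p = [], iter(shared_cores), iter(private_cores)
    for is_shared in pattern:
        cores.append(next(s) if is_shared else next(p))
    return tr_reconstruct(cores).reshape(shape)

# Illustrative usage: a 64x16 layer weight factorized over modes (8, 8, 4, 4)
# with uniform TR-rank 3; the first two cores are shared, the last two are private.
rng = np.random.default_rng(0)
modes, r = [8, 8, 4, 4], 3
shared  = [0.1 * rng.standard_normal((r, n, r)) for n in modes[:2]]
private = [0.1 * rng.standard_normal((r, n, r)) for n in modes[2:]]
W_task = build_task_weight(shared, private, [True, True, False, False], (64, 16))
```

A second task with a different layer width would reuse the shared cores but supply its own private cores (possibly with different mode sizes), which is how such a factorized parameterization can, in principle, bridge networks of unequal capacity.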