Abstract

Despite recent advances in deep neural networks (DNNs), multi-task learning has yet to exploit them fully. Designing a DNN for even a single task requires considerable skill, because many architecture parameters must be chosen a priori, before training begins; extending this design process to multi-task learning is more challenging still. Inspired by findings from neuroscience, we propose a unified DNN modeling framework called ConnectomeNet that encompasses the best principles of contemporary DNN design and unifies them with transfer, curriculum, and adaptive structural learning, all in the context of multi-task learning. Specifically, ConnectomeNet iteratively assembles connectome-like neuron units into a high-level topology represented as a general directed acyclic graph (DAG). As a result, ConnectomeNet enables non-trivial automatic sharing of neurons across multiple tasks and learns to adapt its topology economically to each new task. Extensive experiments, including an ablation study, show that ConnectomeNet outperforms state-of-the-art multi-task learning methods on measures such as the degree of catastrophic forgetting under sequential learning: in normalized accuracy, our method retains 100%, compared with 89.0% for mean-IMM and 99.97% for DEN.
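To make the architectural idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of a network whose units form a general directed acyclic graph, with two task heads reading from different, partially shared units. All names (`units`, `edges`, `forward`) and the tiny three-unit topology are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "unit" is a small linear map followed by ReLU; weights are random here
# purely for demonstration (no training is performed in this sketch).
units = {name: rng.standard_normal((4, 4)) * 0.1 for name in ("a", "b", "c")}

# DAG topology: the input feeds unit "a"; units "b" and "c" both read from "a",
# so "a" is automatically shared between the two tasks below.
edges = {"a": ["input"], "b": ["a"], "c": ["a"]}

def forward(x):
    """Evaluate all units in topological order over the DAG."""
    acts = {"input": x}
    for name in ("a", "b", "c"):  # a valid topological order for `edges`
        pre = sum(acts[parent] for parent in edges[name])
        acts[name] = np.maximum(units[name] @ pre, 0.0)  # linear + ReLU
    return acts

x = rng.standard_normal(4)
acts = forward(x)
task1_out = acts["b"]  # hypothetical task-1 head reads unit "b"
task2_out = acts["c"]  # hypothetical task-2 head reads unit "c"
print(task1_out.shape, task2_out.shape)
```

Adapting the topology for a new task would then amount to adding units and edges to `edges` (and entries to `units`) rather than training a separate network, which is the kind of economical structural growth the abstract describes.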
