Abstract

Multi-task learning (MTL) has attracted much attention in recent years; its goal is to learn multiple tasks by exploiting the similarities and differences between them. Previous research on multi-task learning has mainly focused on flexible methods for feature sharing (e.g., soft sharing) in resource-rich settings (e.g., GPU servers). However, many real-world applications require deploying multi-task models on resource-constrained platforms (e.g., mobile devices), and the high resource requirements of soft-sharing methods make them hard to deploy there. In this paper, we study the problem of resource-efficient multi-task learning, where the goal is to design a resource-friendly model suited to resource-constrained inference environments, e.g., security cameras or mobile devices. We formulate the problem as a fine-grained filter-sharing problem, i.e., learning how to share filters at any given convolutional layer among multiple tasks. We propose a novel parameter-sharing solution called FiShNet. Unlike soft-sharing approaches, whose per-task computational cost grows with the number of other tasks, FiShNet achieves accuracy comparable to soft sharing while incurring only a constant computational cost per task. Unlike hard-sharing approaches, whose parameter-sharing structures are hand-picked, FiShNet learns how to share parameters directly from the training data, at a finer granularity. We evaluate FiShNet on a range of multi-task learning settings and datasets and show that it matches the accuracy of state-of-the-art multi-task methods while requiring only a fraction of the computational resources.
