Abstract

Multi-task learning aims to tackle multiple tasks with branched feature-sharing architectures. Given the diversity and complexity of the tasks, discriminative feature representations need to be extracted for each individual task. The fixed geometric structure of convolutional neural networks (CNNs) is a known limitation in model building, and it poses a severe challenge in multi-task learning, since geometric variations are amplified when multiple tasks are handled together. In this paper, we go beyond these limitations and propose a novel multi-task network that introduces deformable convolution. Our design, the Deformable Multi-Task Network (DMTN), starts with a single shared network that constructs a shared feature pool. We then present task-specific deformable modules that extract features tailored to each task from the shared feature pool. Each task-specific deformable module comprises two new parts, a deformable part and an alignment part, which extract more discriminative task-specific features while greatly enhancing transformation modeling capability. Experiments conducted on various multi-task learning settings demonstrate the effectiveness of the proposed method. On multiple classification tasks as well as joint semantic segmentation and depth estimation, our DMTN exceeds state-of-the-art approaches against strong baselines.
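To illustrate the overall idea, below is a minimal, hypothetical PyTorch sketch of a task-specific deformable module drawing from a shared feature pool. It is not the authors' implementation: the exact composition of the deformable part and the alignment part is not specified in the abstract, so the offset-predicting convolution, the 1x1 "alignment" projection, the channel sizes, and the module/variable names are all assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class TaskSpecificDeformableModule(nn.Module):
    """Sketch of a task-specific deformable module (assumed structure).

    The "deformable part" predicts per-location sampling offsets so the
    convolution can adapt to the geometric variations of its task; the
    "alignment part" is sketched here as a 1x1 projection into the
    task-specific feature space.
    """

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Deformable part: 2 offset values (x, y) per kernel sampling location.
        self.offset_conv = nn.Conv2d(
            in_channels, 2 * kernel_size * kernel_size, kernel_size, padding=padding
        )
        self.deform_conv = DeformConv2d(
            in_channels, out_channels, kernel_size, padding=padding
        )
        # Alignment part (assumed): project deformed features per task.
        self.align = nn.Conv2d(out_channels, out_channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, shared_features: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(shared_features)
        task_features = self.deform_conv(shared_features, offsets)
        return self.relu(self.align(task_features))


# Usage sketch: one shared feature pool, one deformable module per task.
shared_pool = torch.randn(2, 256, 32, 32)             # from the shared network
seg_module = TaskSpecificDeformableModule(256, 128)   # e.g. segmentation branch
depth_module = TaskSpecificDeformableModule(256, 128) # e.g. depth branch
seg_feat, depth_feat = seg_module(shared_pool), depth_module(shared_pool)
```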
