Abstract

Facial landmark localization and expression recognition are two important and closely related topics in facial analysis. However, few works exploit the complementary information between the two tasks to improve performance. In this paper, we propose a residual multi-task learning framework that predicts both tasks simultaneously. Unlike previous multi-task learning methods, which directly train a deep multi-task network with additional branches and losses, we propose a novel residual learning module to further strengthen the linkage between the two tasks. Benefiting from the proposed residual learning module, each task can learn complementary information from the other, improving performance on both. Another problem in multi-task learning is the lack of training data with labels for both tasks. For example, the two widely used facial expression recognition (FER) datasets (AffectNet and RAF) have no landmark localization annotations, and vice versa. To solve this problem, we propose an association learning method to further enhance the connection between the two tasks. Based on this connection, datasets with single-task labels can be used for multi-task learning. Extensive experiments are conducted on four popular datasets (300-W and AFLW for landmark localization; AffectNet and RAF for expression recognition), demonstrating the effectiveness of the proposed algorithm.
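The abstract does not give implementation details, so the following is only a minimal sketch of the cross-task residual idea in PyTorch, under assumed design choices: a shared backbone, one branch per task, and small adapter networks (the names ResidualTaskExchange, lmk_to_fer, and fer_to_lmk are hypothetical) that map one branch's features into the other's space and add them as a residual, so each task can draw on the other's representation. It is not the authors' architecture.

```python
# Minimal sketch of a residual cross-task module; all module names,
# feature sizes, and the fusion form are assumptions, not the paper's code.
import torch
import torch.nn as nn


class ResidualTaskExchange(nn.Module):
    """Adds a learned residual from one task branch's features to the other's."""

    def __init__(self, dim=256):
        super().__init__()
        # Small adapters mapping one task's features into the other's space.
        self.lmk_to_fer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.fer_to_lmk = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, f_lmk, f_fer):
        # Each branch keeps its own features and receives the other's as a residual.
        f_lmk_out = f_lmk + self.fer_to_lmk(f_fer)
        f_fer_out = f_fer + self.lmk_to_fer(f_lmk)
        return f_lmk_out, f_fer_out


class ResidualMultiTaskNet(nn.Module):
    """Shared backbone -> two task branches coupled by the residual exchange."""

    def __init__(self, feat_dim=256, n_landmarks=68, n_expressions=7):
        super().__init__()
        self.backbone = nn.Sequential(  # placeholder backbone for illustration
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.lmk_branch = nn.Linear(feat_dim, feat_dim)
        self.fer_branch = nn.Linear(feat_dim, feat_dim)
        self.exchange = ResidualTaskExchange(feat_dim)
        self.lmk_head = nn.Linear(feat_dim, n_landmarks * 2)  # (x, y) per landmark
        self.fer_head = nn.Linear(feat_dim, n_expressions)

    def forward(self, x):
        shared = self.backbone(x)
        f_lmk, f_fer = self.lmk_branch(shared), self.fer_branch(shared)
        f_lmk, f_fer = self.exchange(f_lmk, f_fer)
        return self.lmk_head(f_lmk), self.fer_head(f_fer)


# Usage: landmark coordinates and expression logits from one forward pass.
# landmarks, expr_logits = ResidualMultiTaskNet()(torch.randn(2, 3, 112, 112))
```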
