Abstract
Emotion recognition is an important part of affective computing. Human emotions can be described categorically or dimensionally. Accurate machine learning models for emotion classification and estimation usually depend on a large amount of annotated data. However, label acquisition in emotion recognition is costly: obtaining the ground-truth labels of an emotional sample usually requires multiple annotators’ assessments, which is expensive and time-consuming. To reduce the labeling effort in multi-task emotion recognition, this paper proposes an inconsistency measure that quantifies the difference between the labels estimated from the feature space and the label distribution of the labeled dataset. Using this inconsistency as an indicator of sample informativeness, we further propose an inconsistency-based multi-task cooperative learning framework that integrates multi-task active learning and self-training semi-supervised learning. Experiments in two multi-task emotion recognition scenarios, multi-dimensional emotion estimation and simultaneous emotion classification and estimation, were conducted under this framework. The results demonstrated that the proposed multi-task active learning framework outperformed several single-task and multi-task active learning approaches.
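To make the cooperative idea concrete, the sketch below shows one plausible reading of an inconsistency-driven round: samples whose feature-space label estimate disagrees most with the model's prediction are queried for annotation (active learning), while those in best agreement are pseudo-labeled (self-training). This is a minimal illustration, not the paper's actual algorithm: the k-NN estimator, the least-squares learner, and the thresholds `tau_hi` and `tau_lo` are all assumptions introduced here for demonstration.

```python
import numpy as np


class LeastSquares:
    """Minimal linear regressor; an illustrative stand-in for the
    paper's emotion-estimation model."""
    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return self

    def predict(self, X):
        return X @ self.w


def knn_label_estimate(X_lab, y_lab, X_unlab, k=5):
    """Estimate labels of unlabeled samples from the feature space by
    averaging the labels of the k nearest labeled neighbours (one
    possible choice of feature-space estimator, assumed here)."""
    d = np.linalg.norm(X_unlab[:, None, :] - X_lab[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return y_lab[nn].mean(axis=1)


def cooperative_round(X_lab, y_lab, X_unlab, model, tau_hi, tau_lo):
    """One hypothetical round of inconsistency-based cooperative learning.

    High inconsistency  -> informative sample: send to human annotators.
    Low  inconsistency  -> confident sample: pseudo-label via self-training.
    """
    model.fit(X_lab, y_lab)
    y_model = model.predict(X_unlab)                   # model's label estimate
    y_feat = knn_label_estimate(X_lab, y_lab, X_unlab)  # feature-space estimate
    inconsistency = np.abs(y_model - y_feat)            # per-sample informativeness
    query_idx = np.where(inconsistency > tau_hi)[0]     # active-learning queries
    pseudo_idx = np.where(inconsistency < tau_lo)[0]    # self-training candidates
    return query_idx, pseudo_idx, y_model
```

In a multi-task setting, one inconsistency score per task could be aggregated (e.g., averaged) before thresholding, so that a single annotation request yields labels for all tasks at once.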