Abstract
Facial expressions are one of the most common ways of reflecting human emotion, and understanding different classes of facial expressions is an important method for analyzing human perceptive and affective states. Facial expression analysis (FEA) has been studied extensively over the past few decades. This work shows that few facial expressions correspond exactly to a single predefined affective state; most are blends of several basic expressions. Some researchers have recognized that facial expression recognition can be treated as a multi-label task, but the accurate recognition of multi-label expressions remains a challenge. To overcome it, a novel multi-feature joint learning ensemble framework, called the MF-JLE framework, is proposed. The framework combines global features with several distinct local key features to account for the multiple expression labels embodied in different facial action units. Ensemble learning is introduced into the framework, combining the global module and the local modules at the loss level and optimizing them jointly and iteratively. Treating the modules as weak classifiers, the ensemble as a whole improves multi-label recognition accuracy. In addition, the traditional multi-class cross-entropy loss is replaced by a binary cross-entropy loss for a better ensemble. The proposed framework is evaluated on the Real-world Affective Faces Multi-Label (RAF-ML) dataset. The experimental results show that the proposed model outperforms other methods, both quantitatively and qualitatively, whether compared with traditional shallow learning methods or recent deep learning methods.
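The abstract's switch from multi-class to binary cross-entropy is the key to multi-label training: each expression label becomes an independent binary decision, so a face can be "happy" and "surprised" at once. A minimal sketch in NumPy (the logits and label layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def multilabel_bce(logits, targets, eps=1e-12):
    """Binary cross-entropy over independent per-label sigmoids.

    Unlike softmax cross-entropy, which assumes exactly one class
    per sample, each label here is scored separately, so several
    expression labels can be active for the same face.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))  # per-label sigmoid
    return -np.mean(targets * np.log(probs + eps)
                    + (1.0 - targets) * np.log(1.0 - probs + eps))

# Hypothetical example: a face blending two basic expressions
# (labels 0 and 2 active, labels 1 and 3 inactive).
logits = np.array([2.0, -1.5, 1.0, -2.0])
targets = np.array([1.0, 0.0, 1.0, 0.0])
loss = multilabel_bce(logits, targets)
```

In a framework like MF-JLE, a loss of this form would be computed for the global module and each local module and summed for joint optimization.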
Highlights
In recent years, people have become increasingly interested in improving all aspects of human-computer interaction
We introduce action units into multi-label facial expression recognition and propose, for the first time, a novel and robust ensemble framework that combines global and local features to cope with the complexity of blended facial expressions
The single ResNet baseline has the same structure as each module in the multi-feature joint learning ensemble (MF-JLE) framework and takes the entire facial image as input
Summary
People have become increasingly interested in improving all aspects of human-computer interaction. Facial expressions, as an indispensable mode of human communication, can convey abundant information about human emotions, and they are among the channels most commonly used in daily interaction: a smile conveys greeting, a frown signals confusion, and an open mouth expresses surprise. The fact that we comprehend emotions and react to other people's expressions greatly enriches interaction. Researchers therefore attempt to analyze facial expressions in order to comprehend and classify these emotions