Abstract

Facial action units (AUs) represent facial muscular activities, and emotions can be expressed through their combinations. AU recognition is therefore used in many applications, including marketing, healthcare, and education. Numerous studies have addressed AU recognition with a variety of network architectures; however, their performance remains unsatisfactory. One difficulty is the lack of information about each person's neutral state (i.e., no facial muscular activity), owing to the individuality of neutral states. This lack of information degrades recognition performance because AU intensities are defined relative to a neutral state. In this paper, we propose a novel method using Pseudo-INtensities and their Transformation (PINT) to tackle this problem. To exclude the individuality of the neutral state and accurately capture AU-related changes in facial appearance, we first calculate pseudo-intensities based only on the differences among intensity states of the same person. We use a Siamese network architecture and facial image pairs of the same person to calculate the pseudo-intensities. These pseudo-intensities are then transformed into actual intensities based on the low pseudo-intensities of the same person, which are assumed to correspond to neutral states. Evaluation experiments on two public datasets show that PINT achieves state-of-the-art performance: the average intra-class correlation coefficient score improves over existing methods by 7.1% on the DISFA dataset and 3.1% on the FERA2017 dataset.
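The two-step procedure described above, person-relative pseudo-intensity scoring with a shared-weight (Siamese) encoder followed by a transformation anchored at each person's lowest pseudo-intensities, could be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the encoder design, the helper transform_to_intensity, and the k-lowest-frames anchoring heuristic are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PseudoIntensityNet(nn.Module):
    """Hypothetical Siamese scorer: one shared encoder maps each face
    image to a scalar pseudo-intensity per AU."""
    def __init__(self, num_aus=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_aus),
        )

    def forward(self, img_a, img_b):
        # Shared weights: the same encoder scores both images of a pair.
        p_a, p_b = self.encoder(img_a), self.encoder(img_b)
        # Supervising only the within-person difference cancels
        # person-specific offsets (the individuality of neutral states).
        return p_a, p_b, p_a - p_b

def transform_to_intensity(pseudo, k=5):
    """Hypothetical transformation: anchor each subject's scale at the
    mean of their k lowest pseudo-intensities, assumed near-neutral.

    pseudo: (num_frames, num_aus) tensor for one subject.
    """
    baseline = pseudo.topk(k, dim=0, largest=False).values.mean(dim=0)
    return (pseudo - baseline).clamp(min=0)

# Usage with dummy data: two face crops per pair from the same subject.
net = PseudoIntensityNet()
a = torch.randn(4, 3, 64, 64)
b = torch.randn(4, 3, 64, 64)
p_a, p_b, diff = net(a, b)
intensities = transform_to_intensity(p_a)
```

Anchoring at the lowest per-subject scores (rather than requiring an explicitly labeled neutral frame) is what lets the pseudo-intensities be converted to absolute intensities without knowing each person's neutral appearance in advance.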
