Abstract

Facial dynamics contain useful information for facial expression recognition (FER). However, exploiting these dynamics is challenging, mainly because expression transitions vary widely. For example, video sequences belonging to the same emotion class may differ in transition duration and/or transition type (e.g., onset versus offset). Such temporal mismatches between query and training video sequences can degrade FER performance. This paper proposes a new partial matching framework that aims to overcome the temporal mismatch between expression transitions. During the training stage, we construct an over-complete transition dictionary that contains many possible partial expression transitions. During the test stage, we extract a number of partial expression transitions from a query video sequence and analyze each one individually. This increases the likelihood that a partial expression transition in the query video sequence matches one of the partial expression transitions in the over-complete transition dictionary. To make partial matching subject-independent and robust to temporal mismatch, each partial expression transition is defined as the facial shape displacement between a pair of face clusters. Experimental results show that the proposed method is robust to variations in transition duration and transition type in subject-independent recognition.
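
As a rough illustration of the pipeline described above, the sketch below represents each partial expression transition as the displacement between the mean landmark shapes of two face clusters and classifies a query by matching every extracted transition independently against the dictionary. The centroid-based descriptor and the nearest-neighbor voting are simplifying assumptions made for illustration (an over-complete dictionary often implies a sparse-coding formulation instead), and all function names are hypothetical.

```python
import numpy as np

def transition_descriptor(cluster_a, cluster_b):
    """Facial shape displacement between a pair of face clusters.

    cluster_a, cluster_b: (n_frames, n_landmarks * 2) arrays of facial
    landmark coordinates for the frames assigned to each cluster.
    The displacement of cluster means is an assumed, simple instance
    of the shape-displacement descriptor named in the abstract.
    """
    return cluster_b.mean(axis=0) - cluster_a.mean(axis=0)

def classify_query(query_clusters, dictionary, labels):
    """Vote over all partial transitions extracted from a query video.

    dictionary: (n_entries, d) array of training transition descriptors.
    labels:     length-n_entries list of emotion labels, one per entry.
    """
    votes = {}
    # Every ordered pair of query face clusters yields one partial
    # transition, so a match can still be found when the query covers
    # only part of an expression (e.g., onset only) or runs at a
    # different speed than the training sequences.
    for i in range(len(query_clusters)):
        for j in range(len(query_clusters)):
            if i == j:
                continue
            d = transition_descriptor(query_clusters[i], query_clusters[j])
            # Nearest-neighbor match against the over-complete dictionary
            # (a placeholder for whatever matching the paper actually uses).
            nearest = np.argmin(np.linalg.norm(dictionary - d, axis=1))
            votes[labels[nearest]] = votes.get(labels[nearest], 0) + 1
    # The emotion class receiving the most partial-transition matches wins.
    return max(votes, key=votes.get)
```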
