Abstract

Automatic Dynamic Facial Expression Recognition (DFER) remains a challenging task, as effectively capturing facial temporal dynamics is still an open problem. In this article, we regard the variation of facial expressions as a dynamical system that evolves according to certain rules and explore the fundamental temporal properties needed to recognize dynamic expressions. Inspired by the phase space reconstruction method for time series analysis, we propose a novel network, the Phase Space Reconstruction Network (PSRNet), for learning spatio-temporal features of facial expressions. First, 3D convolutional neural networks extract spatial and short-term temporal features, which indicate the state of each frame and are termed observations in the phase space; together, the observations compose the trajectory of the dynamical system. Then, a data-driven cross-correlation matrix is inferred to reveal the relationships among the observations. With this matrix, the phase space reconstruction module reconstructs the trajectory by adaptively aggregating the observations in the phase space. The reconstructed observations capture the gradual evolution of dynamic facial expressions, which benefits their recognition. Experimental results on three databases (Oulu, MMI, and CK+) demonstrate that the proposed PSRNet extracts more informative and representative spatio-temporal features for DFER. Moreover, visualization of intermediate features reveals that the reconstructed features exhibit global consistency across facial regions and reflect the underlying evolutionary pattern of dynamic facial expressions.

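The reconstruction step described above, in which each observation is rebuilt as an adaptive aggregation of all observations weighted by an inferred cross-correlation matrix, can be pictured as an attention-style operation over per-frame features. The sketch below is a minimal, hypothetical PyTorch illustration under that assumption; the class and parameter names (PhaseSpaceReconstruction, feat_dim, the query/key projections) are ours for illustration, not the authors' code, and the actual PSRNet may infer its cross-correlation matrix differently.

```python
# Hypothetical sketch of the phase space reconstruction module; names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PhaseSpaceReconstruction(nn.Module):
    """Aggregates per-frame observations with a data-driven cross-correlation matrix."""

    def __init__(self, feat_dim):
        super().__init__()
        # Linear projections used to infer pairwise correlations between observations.
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)

    def forward(self, observations):
        # observations: (batch, num_frames, feat_dim), e.g. short-term features from a 3D CNN backbone.
        q = self.query(observations)                              # (B, T, D)
        k = self.key(observations)                                # (B, T, D)
        # Data-driven cross-correlation matrix over the trajectory of observations.
        corr = torch.bmm(q, k.transpose(1, 2))                    # (B, T, T)
        corr = F.softmax(corr / observations.size(-1) ** 0.5, dim=-1)
        # Reconstruct the trajectory: each observation becomes an adaptive aggregation of all observations.
        reconstructed = torch.bmm(corr, observations)             # (B, T, D)
        return reconstructed


# Example usage with random stand-in features.
if __name__ == "__main__":
    feats = torch.randn(2, 8, 128)    # 2 clips, 8 frames, 128-d observation per frame
    module = PhaseSpaceReconstruction(feat_dim=128)
    print(module(feats).shape)        # torch.Size([2, 8, 128])
```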