Abstract

Facial micro-expressions (MEs) can disclose genuine, concealed human feelings, which makes them extensively useful in real-world applications in affective computing and psychology. Unfortunately, MEs are induced by subtle facial movements lasting only a short duration, which makes ME recognition a highly challenging problem even for human beings. In automatic ME recognition, the well-known features encode either incomplete or redundant information, and sufficient training data are lacking. The proposed method, Micro-Expression Recognition by Analysing Spatial and Temporal Characteristics (MERASTC), mitigates these issues to improve ME recognition. It compactly encodes the subtle deformations using action units (AUs), landmarks, gaze, and appearance features of all the video frames while preserving most of the relevant ME information. Furthermore, it improves efficacy by introducing a novel neutral-face normalization for MEs and initiating the use of gaze features in deep-learning-based ME recognition. The features are provided to a 2D convolutional neural network that jointly analyses spatial and temporal behavior for correct ME classification. Experimental results on publicly available datasets indicate that the proposed method outperforms well-known methods.
