Abstract

Video‐based facial expression recognition (FER) models have achieved higher accuracy at the cost of increased computation, which makes them unsuitable for online deployment on mobile intelligent terminals. Facial landmarks can model facial expression changes through their spatial location information rather than texture features, but classical convolution cannot make full use of landmark information. To this end, in this paper we propose GELSTM, a long short‐term memory (LSTM) network with embedded graph convolution, for online video‐based FER on mobile intelligent terminals. Specifically, we construct landmark‐based face graph data on the client. On the server side, we introduce graph convolution, which effectively mines spatial dependency information in the landmark‐based facial graph. The extracted landmark features are then fed to an LSTM for temporal feature aggregation. We conduct experiments on a facial expression dataset, and the results show that our proposed method outperforms other deep models.
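The abstract describes a two-stage pipeline: per-frame graph convolution over facial landmarks for spatial aggregation, followed by an LSTM for temporal aggregation. The NumPy sketch below illustrates that combination only; the landmark graph, layer sizes, symmetric adjacency normalization, and mean pooling are illustrative assumptions, not the paper's actual GELSTM architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(A):
    """Symmetric normalization A_hat = D^{-1/2}(A + I)D^{-1/2} (standard GCN-style; an assumption here)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return (A_tilde * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def graph_conv(X, A_hat, W):
    """One graph-convolution layer over landmark features: aggregate neighbors, project, ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, Wx, Wh, b):
    """A single LSTM cell step on one pooled frame feature."""
    H = h.shape[0]
    z = x @ Wx + h @ Wh + b
    i, f, g, o = z[:H], z[H:2 * H], z[2 * H:3 * H], z[3 * H:]
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

# Toy setup: 5 landmarks with 2-D coordinates per frame, 4 frames (all sizes hypothetical).
N, D_IN, D_GC, D_H, T = 5, 2, 8, 16, 4
A = np.zeros((N, N))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # a chain of landmark edges, purely illustrative
    A[u, v] = A[v, u] = 1.0
A_hat = normalized_adjacency(A)

W_gc = rng.standard_normal((D_IN, D_GC)) * 0.1
Wx = rng.standard_normal((D_GC, 4 * D_H)) * 0.1
Wh = rng.standard_normal((D_H, 4 * D_H)) * 0.1
b = np.zeros(4 * D_H)

h, c = np.zeros(D_H), np.zeros(D_H)
for _ in range(T):                                    # one video frame per time step
    X = rng.standard_normal((N, D_IN))                # landmark coordinates for this frame
    pooled = graph_conv(X, A_hat, W_gc).mean(axis=0)  # spatial aggregation over the face graph
    h, c = lstm_step(pooled, h, c, Wx, Wh, b)         # temporal aggregation across frames

print(h.shape)  # final video-level feature used for expression classification
```

In a real deployment the client would send landmark coordinates (rather than raw frames) to the server, keeping bandwidth and on-device computation low, which is the motivation the abstract gives for a landmark-based design.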
