Abstract

Micro-expressions are spontaneous, brief and subtle facial muscle movements that expose underlying emotions. Motivated by recent advances in deep learning for micro-expression analysis, we propose a lightweight dual-stream shallow network in the form of a pair of truncated CNNs with heterogeneous input features. Merging the convolutional features from both streams allows for discriminative learning of micro-expression classes. Using activation heatmaps, we further demonstrate that salient facial areas are well emphasized and correspond closely to the action units relevant to each emotion class. We empirically validate the proposed network on three benchmark databases, obtaining state-of-the-art performance on CASME II and SAMM while remaining competitive on SMIC. Further observations suggest that shallower networks are sufficient for micro-expression recognition.
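To illustrate the general structure described above, the following is a minimal sketch of a dual-stream shallow network with merged convolutional features, written in PyTorch. The input sizes, channel counts, and the assumption that the two heterogeneous inputs are single-channel 28x28 maps (e.g. optical-flow-derived features) are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch: two truncated CNN streams with heterogeneous inputs,
    # merged at the feature level before a small classification head.
    # Layer sizes are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class ShallowStream(nn.Module):
        """One truncated CNN stream: a single conv + pool block."""
        def __init__(self, in_channels=1, out_channels=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2),      # 28x28 -> 14x14
            )

        def forward(self, x):
            return self.features(x)

    class DualStreamNet(nn.Module):
        """Two shallow streams whose convolutional features are merged
        (concatenated) before classification."""
        def __init__(self, num_classes=3):
            super().__init__()
            self.stream_a = ShallowStream()
            self.stream_b = ShallowStream()
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(2 * 8 * 14 * 14, num_classes),
            )

        def forward(self, xa, xb):
            fa = self.stream_a(xa)               # features from input type A
            fb = self.stream_b(xb)               # features from input type B
            merged = torch.cat([fa, fb], dim=1)  # channel-wise merge of both streams
            return self.classifier(merged)

    # Example forward pass with dummy inputs (batch of 4).
    model = DualStreamNet(num_classes=3)
    xa = torch.randn(4, 1, 28, 28)
    xb = torch.randn(4, 1, 28, 28)
    logits = model(xa, xb)   # shape: (4, 3)

The key design choice conveyed by the abstract is the merge point: each stream stays shallow, and discriminative capacity comes from combining the two heterogeneous feature maps rather than from network depth.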
