Abstract

Micro-expressions (MEs) are subtle, brief, and involuntary facial muscle movements. Action unit (AU) detection plays an important role in facial micro-expression analysis due to the ambiguity of MEs. Unlike typical AU detection, which is performed on macro-expressions, in MEs the facial muscle movements are significantly more subtle. This makes AU detection in MEs a difficult challenge, with only a limited number of previous studies. A common way to analyze subtle facial movements is to exploit the temporal changes across a sequence of frames, as subtle changes are difficult to observe in static images. Feature representations based on motion magnification and optical flow are examples that can effectively extract motion information from the temporal domain. However, they depend on the chosen parameters and are computationally expensive. To address these issues, we propose Learnable Eulerian Dynamics (LED), which extracts motion representations efficiently. Rather than magnifying the motion as Eulerian video magnification does, LED only extracts it. The parameters of the motion extraction are made learnable by using automatic differentiation in conjunction with a linearized version of Eulerian video magnification. The extracted motion features are then further refined by convolutional layers. This enables the method to fine-tune the features through end-to-end training, leading to task-specific features that improve performance on the downstream task. (Code is publicly available at www.github.com/tvaranka/led )
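The core idea of linearized Eulerian motion extraction can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the paper's implementation: it spatially smooths each frame with a Gaussian filter and takes the temporal difference scaled by a gain `alpha`. In the actual method these parameters (the filter and the gain) would be learnable and optimized end-to-end via automatic differentiation; here they are plain arguments. All names (`led_motion`, `alpha`, `ksize`, `sigma`) are illustrative assumptions.

```python
import numpy as np

def led_motion(prev_frame, frame, alpha=1.0, ksize=5, sigma=1.0):
    """Hypothetical sketch of linearized Eulerian motion extraction:
    smooth both frames spatially, then return the scaled temporal
    difference. In LED the filter and gain would be trained jointly
    with the downstream network; here they are fixed parameters."""
    # 1D Gaussian kernel used for separable spatial filtering
    ax = np.arange(ksize) - ksize // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    k /= k.sum()

    def smooth(img):
        # separable Gaussian blur: filter rows, then columns
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
        return img

    # linearized Eulerian signal: temporal difference of smoothed frames,
    # scaled by a gain (learnable in the real method)
    return alpha * (smooth(frame) - smooth(prev_frame))
```

In this view, extraction (as opposed to magnification) simply means the scaled difference is used directly as a motion feature, rather than being added back to the input frame; convolutional layers would then refine this representation for the downstream AU detection task.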
