Abstract

Micro‐expressions are spontaneous and unconscious facial movements that reveal individuals’ genuine inner emotions, and they hold significant potential in various psychological testing fields. Because the face is a three‐dimensional deformable object, the emergence of a facial expression produces spatial deformation of the face. However, existing databases primarily offer 2D video sequences, limiting descriptions of the 3D spatial information related to micro‐expressions. Here, a new micro‐expression database is proposed, which contains 2D image sequences and corresponding 3D point cloud sequences. The samples were classified using both an objective method based on the facial action coding system and a non‐objective emotion classification method that considers video contents and participants’ self‐reports. A variety of feature extraction techniques are applied to the 2D data, including traditional algorithms and deep learning methods. Additionally, a novel local curvature‐based algorithm is developed to extract 3D spatio‐temporal deformation features from the 3D data. The authors evaluated the classification accuracies of these two kinds of features individually and of their fusion under leave‐one‐subject‐out (LOSO) and tenfold cross‐validation. The results demonstrate that fusing 3D features with 2D features yields better recognition performance than using 2D features alone.
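The abstract does not detail how the local curvature‐based 3D features are computed. Purely as an illustration of the general idea, the sketch below estimates a PCA‐based "surface variation" value for each point of a facial point cloud; the function name, the neighbourhood size `k`, and the pooling suggestion are assumptions for this example, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_surface_variation(points, k=20):
    """Estimate a curvature-like 'surface variation' score per point.

    points : (N, 3) array of 3D coordinates from one point-cloud frame.
    Returns an (N,) array; larger values indicate stronger local bending
    of the facial surface around that point.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # k nearest neighbours per point
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        patch = points[nbrs]
        cov = np.cov(patch.T)                   # 3x3 covariance of the local patch
        eigvals = np.linalg.eigvalsh(cov)       # ascending eigenvalues
        variation[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return variation

# A per-frame descriptor (e.g. a histogram of these values over facial regions)
# could be stacked across the sequence to form a spatio-temporal 3D feature,
# which is one plausible way such features might be fused with 2D descriptors.
```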
