Abstract

Emotion recognition has drawn consistent attention from researchers in recent years. Although the gesture modality plays an important role in expressing emotion, it is seldom considered in emotion recognition. A key reason is the scarcity of labeled 3D skeleton data. Some studies in action recognition have applied graph-based neural networks to explicitly model the spatial connections between joints. However, this approach has not yet been applied to gesture-based emotion recognition. In this work, we apply a pose-estimation-based method to extract 3D skeleton coordinates for the IEMOCAP database. We propose a self-attention enhanced spatial temporal graph convolutional network for skeleton-based emotion recognition, in which the spatial convolutional part models the skeletal structure of the body as a static graph, and the self-attention part dynamically constructs additional connections between joints and provides supplementary information. Our experiments demonstrate that the proposed model significantly outperforms comparable models and that features extracted from the skeleton data improve the performance of multimodal emotion recognition.
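The abstract describes two spatial branches per layer: a graph convolution over the fixed skeleton adjacency and a self-attention branch that infers data-dependent joint-to-joint connections. The authors' implementation is not shown here; the following is only a minimal PyTorch sketch of how such a layer could be combined, with the class name, gating scheme, and tensor layout all being assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionEnhancedSpatialGCN(nn.Module):
    """Hypothetical spatial layer: static skeleton-graph convolution plus a
    self-attention branch that adds dynamically inferred joint connections."""

    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Fixed (normalized) adjacency of the skeleton graph, shape (V, V).
        self.register_buffer("A", adjacency)
        self.gcn = nn.Linear(in_channels, out_channels)  # static-graph branch
        self.q = nn.Linear(in_channels, out_channels)    # attention query
        self.k = nn.Linear(in_channels, out_channels)    # attention key
        self.v = nn.Linear(in_channels, out_channels)    # attention value
        self.gate = nn.Parameter(torch.zeros(1))         # learnable mixing weight

    def forward(self, x):
        # x: (batch, frames, joints, channels); spatial modelling is per frame.
        static = torch.einsum("uv,btvc->btuc", self.A, self.gcn(x))
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-1, -2) / k.shape[-1] ** 0.5, dim=-1)
        dynamic = attn @ v                               # data-dependent connections
        return F.relu(static + torch.sigmoid(self.gate) * dynamic)
```

In this sketch the attention branch is added to, rather than replacing, the static-graph branch, mirroring the abstract's description of self-attention as supplementary information.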

Highlights

  • Multimodal emotion recognition has attracted a lot of attention due to its wide range of application scenarios

  • We construct a skeleton-enhanced emotion recognition network (SERN) that integrates text and audio information with the features extracted by the self-attention enhanced spatial temporal graph convolutional network (see Figure 6)

  • Considering the class imbalance of the samples, the unweighted average recall (UAR) is used to evaluate the model, treating each category equally (a minimal sketch of the metric follows this list)
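Since UAR is simply the mean of per-class recalls, it can be computed directly. The snippet below is an illustrative reimplementation rather than code from the paper, and the label names in the usage example are made up:

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred, labels):
    """Mean of per-class recalls: each emotion category counts equally,
    regardless of how many samples it contains."""
    recalls = []
    for c in labels:
        true_c = np.asarray(y_true) == c
        if true_c.sum() == 0:
            continue  # class absent from this evaluation split
        correct = np.logical_and(true_c, np.asarray(y_pred) == c).sum()
        recalls.append(correct / true_c.sum())
    return float(np.mean(recalls))

# Equivalent to sklearn.metrics.recall_score(y_true, y_pred, average="macro").
print(unweighted_average_recall(
    ["ang", "hap", "hap", "sad", "neu"],
    ["ang", "hap", "sad", "sad", "neu"],
    labels=["ang", "hap", "sad", "neu"]))  # -> 0.875
```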


Summary

Introduction

Multimodal emotion recognition has attracted a lot of attention due to its wide range of application scenarios. Research using speech signals, textual transcriptions, and facial expressions mostly evaluates models on large open-source multimodal emotional benchmark datasets, such as the interactive emotional dyadic motion capture database (IEMOCAP, over 10,000 samples) [6]. These databases do not contain skeleton data representing the gesture modality, which makes them difficult to use for gesture-based emotion recognition. A strong connection between these joints is likely necessary, but the fixed graph structure does not guarantee that the network can capture the appropriate dependencies. To address these problems, we make the following contributions: (i) we extract 3D skeleton movement data from raw video based on pose estimation, and the method can be used to expand existing databases to alleviate the lack of labeled data. The resulting performance significantly exceeds that of the bimodal model using only audio and text information, which shows the effectiveness of the extracted modality.
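The contribution above hinges on extracting per-frame 3D joint coordinates from raw video with a pose estimator. The paper's extraction code is not reproduced here; as a rough sketch of how such a frame-wise pipeline could look, the snippet below uses MediaPipe Pose as one possible off-the-shelf estimator (not necessarily the one used in the work), and the video file name is hypothetical:

```python
import cv2
import mediapipe as mp
import numpy as np

def extract_3d_skeleton(video_path):
    """Return an array of shape (frames, joints, 3) holding per-frame
    3D joint coordinates estimated from a monocular video."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if result.pose_world_landmarks is None:
                continue  # no person detected in this frame
            joints = [(lm.x, lm.y, lm.z)
                      for lm in result.pose_world_landmarks.landmark]
            frames.append(joints)
    cap.release()
    return np.asarray(frames, dtype=np.float32)

# skeleton = extract_3d_skeleton("session1_clip.avi")  # hypothetical file name
```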

Emotion Recognition
Gesture-Based Emotion Recognition
Graph Neural Networks
Skeletal Data Extraction
Human Pose Estimation
Data Preprocessing
Skeleton Graph Construction
Self-Attention Enhanced Spatial Graph Convolutional Layer
Self-Attention Enhanced Spatial Temporal Graph Convolutional Network
Two-Stream Architecture
Multimodal Emotion Recognition Network
Dataset
Feature Extraction and Experiment Setting
Results
Effect of the Preprocessing
Gating Mechanism
Effect of the Bone Information
Multimodal Analysis
Conclusions