Abstract

In this paper, we present a feature extraction approach for facial expression recognition based on distance importance scores between the coordinates of facial landmarks. Two audio-visual speech databases (CREMA-D and RAVDESS) were used in the research. We conducted experiments with a Long Short-Term Memory (LSTM) recurrent neural network model in single-corpus and cross-corpus setups with sequences of different lengths. Experiments were carried out using different sets and types of visual features. Facial expression recognition accuracy reached 79.1% and 98.9% on the CREMA-D and RAVDESS databases, respectively. The extracted features provide better recognition results than other methods based on the analysis of graphical facial regions.
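The exact distance-importance scoring is not detailed in this excerpt; the sketch below only illustrates the general idea of landmark-distance features fed to an LSTM classifier, assuming a standard 68-point landmark model, plain pairwise Euclidean distances (no importance weighting), and illustrative values for sequence length and number of emotion classes. It is a minimal sketch, not the authors' implementation.

```python
# Hypothetical sketch: pairwise-distance features from facial landmarks,
# fed to an LSTM classifier. Landmark detection itself is assumed to
# happen elsewhere; shapes and class counts are illustrative assumptions.
import numpy as np
import tensorflow as tf

NUM_LANDMARKS = 68   # assumed 68-point facial landmark model
NUM_CLASSES = 6      # assumed number of emotion classes
SEQ_LEN = 30         # assumed number of frames per sequence

def distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Euclidean distances between all landmark pairs for one frame.

    landmarks: (NUM_LANDMARKS, 2) array of (x, y) coordinates.
    Returns the flattened upper-triangular pairwise distances.
    """
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)            # (68, 68) distance matrix
    iu = np.triu_indices(NUM_LANDMARKS, k=1)
    return dist[iu]                                 # (68 * 67 / 2,) = (2278,)

def sequence_features(frames: np.ndarray) -> np.ndarray:
    """Stack per-frame distance features into one sequence: (SEQ_LEN, 2278)."""
    return np.stack([distance_features(f) for f in frames])

# Minimal LSTM classifier over the distance-feature sequences.
feat_dim = NUM_LANDMARKS * (NUM_LANDMARKS - 1) // 2
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, feat_dim)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Example with random data standing in for detected landmarks.
dummy_video = np.random.rand(SEQ_LEN, NUM_LANDMARKS, 2).astype("float32")
x = sequence_features(dummy_video)[None, ...]       # batch of one sequence
print(model.predict(x).shape)                       # (1, NUM_CLASSES)
```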

Highlights

  • Facial expressions are an important channel of nonverbal communication, so interest in the automatic recognition of human emotions from facial expressions increases every year.

  • This is because smart emotion recognition technologies are in demand and are being introduced around the world; for example, automatic facial expression recognition systems are widely used in medicine [1], psychology [2], education [3], fraud detection [4], driver assistance systems [5], etc.

  • We have studied various feature extraction methods computed from the coordinates of facial landmarks.

Introduction

Facial expressions are an important channel of nonverbal communication, so interest in the automatic recognition of human emotions from facial expressions increases every year. This is because smart emotion recognition technologies are in demand and are being introduced around the world; for example, automatic facial expression recognition systems are widely used in medicine [1], psychology [2], education [3], fraud detection [4], driver assistance systems [5], etc. Much research has focused on the analysis of facial expressions in video [6,7,8,9], since video can convey changes in facial expressions over time.
