Abstract

Emotion recognition is a trending research field involved in several applications, most notably robotic vision and interactive robotic communication. Human emotions can be detected from both speech and visual modalities, and facial expressions are an ideal means of detecting a person's emotions. This paper presents a real-time approach for emotion detection and its deployment in robotic vision applications. The proposed approach consists of four phases: preprocessing, key point generation, key point selection and angular encoding, and classification. The main idea is to generate key points using the MediaPipe face mesh algorithm, which is based on real-time deep learning. The generated key points are then encoded using a sequence of carefully designed mesh generator and angular encoding modules. Furthermore, feature decomposition is performed using Principal Component Analysis (PCA) to enhance the accuracy of emotion detection. Finally, the decomposed features are fed into a Machine Learning (ML) technique based on a Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naïve Bayes (NB), Logistic Regression (LR), or Random Forest (RF) classifier. In addition, a Multilayer Perceptron (MLP) is deployed as an efficient deep neural network technique. The presented techniques are evaluated on different datasets with different evaluation metrics. The simulation results reveal that they achieve superior performance, with a human emotion detection accuracy of 97% that compares favorably with other efforts in this field.
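To make the four-phase pipeline concrete, here is a minimal sketch built on MediaPipe's face mesh and scikit-learn. The landmark triplets, PCA dimensionality, and SVM settings are illustrative assumptions, not the paper's exact configuration.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(bgr_image):
    """Phase 2: generate the 468 MediaPipe face-mesh key points."""
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no face found
    pts = result.multi_face_landmarks[0].landmark
    return np.array([(p.x, p.y, p.z) for p in pts])  # shape (468, 3)

# Hypothetical (center, arm, arm) landmark triplets around the mouth and
# eyebrows; the paper selects its own key points for angular encoding.
TRIPLETS = [(61, 291, 0), (13, 61, 291), (105, 66, 107), (334, 296, 336)]

def angular_features(pts):
    """Phase 3 (sketch): encode each triplet as the angle at its center point."""
    feats = []
    for center, a, b in TRIPLETS:
        v1, v2 = pts[a] - pts[center], pts[b] - pts[center]
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(feats)

# Phase 4: PCA decomposition followed by an SVM classifier.
model = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))
# model.fit(X_train, y_train); model.predict(X_test)
```

In practice, the rows of the training matrix would be the angular feature vectors extracted from each dataset image, with the corresponding emotion labels as targets.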

Highlights

  • Recognition of human emotions is a vital task involved in several applications such as augmented and virtual reality [1, 2], advanced driver assistance systems [3], human-computer interaction [4], and security systems [5, 6, 7]

  • Python 3.9 is used as the development environment. The OpenCV 4.5 and SRGAN libraries are used for image preprocessing (a preprocessing sketch follows this list)

  • The training time is recorded as the average of five runs. The proposed model is evaluated on two benchmark facial expression datasets: CK+ (6 classes) and the Japanese Female Facial Expression (JAFFE) dataset (6 classes)
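The preprocessing bullet above is illustrated by the sketch below, which covers only the conventional OpenCV side. The specific steps (grayscale conversion, histogram equalization, a fixed 224×224 resize) are assumptions for illustration; the SRGAN super-resolution step would be a separate pretrained model that the highlights do not detail.

```python
import cv2

def preprocess(path, size=(224, 224)):
    """Assumed OpenCV preprocessing: grayscale, contrast, fixed size."""
    img = cv2.imread(path)                        # BGR image from disk
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # drop color information
    gray = cv2.equalizeHist(gray)                 # normalize contrast
    return cv2.resize(gray, size)                 # fixed input resolution
```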


Summary

Introduction

Recognition of human emotions is a vital task involved in several applications such as augmented and virtual reality [1, 2], advanced driver assistance systems [3], human-computer interaction [4], and security systems [5, 6, 7]. In educational settings, for example, emotion analysis can help students learn better. Such information is also useful for monitoring the overall mood of a group of people to identify any destructive events [13]. Therefore, facial emotion analysis can be a dependable approach for recognizing human emotions in Human-Robot Interaction (HRI) applications. This paper presents a real-time study for emotion detection and its deployment in robotic vision applications. The proposed approach consists of four phases: preprocessing, feature extraction and selection, feature decomposition, and classification. The stated contributions include: (1) a novel, fast, and robust emotion detection framework for robotic vision applications; and (4) emotion classification based on various machine learning techniques.
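As a sketch of contribution (4), the snippet below compares the classifiers named in the abstract under 5-fold cross-validation in scikit-learn. X and y stand for the PCA-decomposed feature matrix and the emotion labels; all hyperparameters are illustrative assumptions rather than the paper's choices.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hyperparameters are illustrative defaults, not the paper's settings.
CLASSIFIERS = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}

def compare(X, y):
    """Report mean 5-fold cross-validation accuracy per classifier."""
    for name, clf in CLASSIFIERS.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
        print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```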

Related Work
Dataset Description

The proposed models are evaluated on three datasets.
Experimental Results
Method