Abstract

Emotion recognition, learning, and analysis are central to human-robot interaction (HRI), where robots must engage with human perception, attention, decision-making, and social communication. Accurate emotion recognition in HRI nevertheless remains challenging, because multiple sources of information, namely multimodal facial expressions and head poses, must be fused while combining several convolutional neural networks (CNNs) and deep learning models. This research analyzes and improves the robustness of emotion recognition and addresses a known weakness of traditional deep neural networks: when their weights are optimized with standard methods, they can fall into poor local optima. The proposed approach adaptively searches for better network weights through a hybrid genetic algorithm with stochastic gradient descent (HGASGD), which combines the inherent, implicit parallelism of the genetic algorithm with the global optimization capability of stochastic gradient descent (SGD). To validate and test the approach, its performance and reliability are compared with two variants of HGASGD for facial emotion recognition (FER) using a large dataset of facial images. By integrating multimodal information from facial expressions and head poses, the system recognizes emotions more accurately. The results show that CNN-HGASGD outperforms CNN-SGD and other existing state-of-the-art methods in terms of FER, indicating that the combination of multimodal data, CNNs, and HGASGD is a powerful tool for achieving interaction between humans and robots.
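
The hybrid optimizer described above pairs a population-based genetic search over network weights with SGD refinement of each candidate. The following minimal Python sketch illustrates that general GA-plus-SGD idea on a toy softmax classifier; the synthetic data, population size, mutation scale, and number of SGD steps are illustrative assumptions only, not the authors' HGASGD configuration or the paper's CNN architecture.

```python
# Minimal sketch of a hybrid GA + SGD weight search (illustrative only;
# placeholder data and hyperparameters, not the paper's HGASGD setup).
import numpy as np

rng = np.random.default_rng(0)

# Toy classification data standing in for facial-expression/head-pose features.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=(16, 4))
y = np.argmax(X @ true_w + 0.1 * rng.normal(size=(200, 4)), axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(w):
    # Mean cross-entropy of a linear softmax classifier; lower is fitter.
    p = softmax(X @ w)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def sgd_refine(w, steps=20, lr=0.1, batch=32):
    # Local refinement: a few mini-batch gradient steps per individual.
    w = w.copy()
    for _ in range(steps):
        idx = rng.choice(len(X), size=batch, replace=False)
        p = softmax(X[idx] @ w)
        p[np.arange(batch), y[idx]] -= 1.0            # dL/dlogits = p - onehot(y)
        w -= lr * X[idx].T @ p / batch                # gradient step on the weights
    return w

# Genetic algorithm over weight matrices, with SGD as the local search operator.
pop = [rng.normal(scale=0.5, size=(16, 4)) for _ in range(12)]
for gen in range(10):
    pop = [sgd_refine(w) for w in pop]                # refine every individual with SGD
    fitness = np.array([loss(w) for w in pop])
    order = np.argsort(fitness)
    parents = [pop[i] for i in order[:6]]             # truncation selection (keep the best)
    children = []
    while len(children) < 6:
        a, b = rng.choice(6, size=2, replace=False)
        mask = rng.random(parents[a].shape) < 0.5     # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child = child + 0.05 * rng.normal(size=child.shape)  # Gaussian mutation
        children.append(child)
    pop = parents + children
    print(f"generation {gen}: best loss = {fitness[order[0]]:.4f}")
```

In this sketch the genetic operators (selection, crossover, mutation) provide the population-level search, while the SGD step plays the role of gradient-based refinement of each candidate weight set, mirroring the division of labor attributed to HGASGD in the abstract.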
