Abstract

Robots can mimic human abilities such as recognizing faces and emotions. However, relevant studies have rarely been implemented in real-time humanoid robot systems, and face and emotion recognition have typically been treated as separate problems. This study proposes combined face and emotion recognition for real-time application in a humanoid robot. Specifically, face and emotion recognition systems were developed simultaneously using convolutional neural network architectures, and the proposed model was compared with well-known architectures, AlexNet and VGG16, to determine which is better suited for implementation in a humanoid robot. The face recognition data were primary data collected from 30 electrical engineering students, yielding 18,900 data points after preprocessing. Emotion data (surprise, anger, neutral, smile, and sad) were collected from the same respondents and combined with secondary data, for a total of 5,000 data points for training and testing. Testing was carried out in real time on a humanoid robot using each architecture. With AlexNet, the face and emotion recognition accuracies were 85% and 64%, respectively; with VGG16, 100% and 73%; and with the proposed architecture, 87% and 67%. Thus, VGG16 performs best at recognizing both faces and emotions and can be implemented in a humanoid robot. This study also provides a method for measuring the distance between a recognized object and the robot, with an average error rate of 2.52%.
