Abstract

The core purpose of artificial emotional intelligence is to recognize human emotions. Technologies such as facial, semantic, and brainwave recognition have been widely proposed. However, these recognition techniques require a large number of training samples to achieve high accuracy on emotional features. Human behaviour patterns can be trained and recognized from continuous movement by the Spatial Temporal Graph Convolution Network (ST-GCN). However, ST-GCN does not distinguish the speed of delicate emotional movements, so the speed of human behaviour and the subtle changes of emotion cannot be effectively separated. This paper proposes the Spatial Temporal Variation Graph Convolution Network (STV-GCN) for human emotion recognition: skeleton detection technology calculates the degree of change between skeleton points in consecutive actions, a nearest-neighbour algorithm classifies the speed level, and an ST-GCN recognition model is trained on the result to obtain the emotional state. Applying the speed-change recognition ability of the STV-GCN to artificial emotional intelligence makes it possible to efficiently recognize the delicate actions of happiness, sadness, fear, and anger in human behaviour. Compared with ST-GCN, the proposed STV-GCN improves recognition accuracy by more than 50%.
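The speed-classification step described above can be pictured with a short sketch. This is a minimal illustration, not the paper's exact formulation: the scalar per-frame displacement feature, the function names, and the 1-D k-nearest-neighbour vote are assumptions made for clarity.

```python
import numpy as np

def skeleton_change_degree(frames):
    """Mean per-joint displacement between consecutive skeleton frames.

    frames: array of shape (T, J, C) -- T frames, J joints, C coordinates
    per joint (e.g., x, y). Returns one speed value per frame transition.
    """
    diffs = np.diff(frames, axis=0)                     # (T-1, J, C)
    return np.linalg.norm(diffs, axis=2).mean(axis=1)   # (T-1,)

def classify_speed_knn(feature, train_features, train_levels, k=3):
    """Majority vote among the k labelled speed features closest to
    `feature` (1-D nearest-neighbour classification)."""
    nearest = train_levels[np.argsort(np.abs(train_features - feature))[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

# Example with three hypothetical speed levels (0 = slow, 1 = normal, 2 = fast).
train_features = np.array([0.01, 0.012, 0.05, 0.055, 0.12, 0.13])
train_levels = np.array([0, 0, 1, 1, 2, 2])
clip = np.random.rand(30, 18, 2)          # 30 frames, 18 joints, (x, y)
level = classify_speed_knn(skeleton_change_degree(clip).mean(),
                           train_features, train_levels)
```

The speed level produced this way can then condition the ST-GCN training, which is what allows the same emotional action performed at different speeds to be told apart.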

Highlights

  • Realization of human-emotion recognition applications in open spaces such as transportation systems and metropolitan squares can help avoid potentially dangerous conflicts

  • Since the Spatial Temporal Graph Convolution Network (ST-GCN) architecture cannot identify and classify the same human emotional action performed at different speeds, this problem can be solved efficiently by the action speed classification algorithm proposed in this paper (see the sketch after this list)

  • In the experiments, the ST-GCN identification model cannot accurately classify test samples at the three different speeds: slow actions provide more continuous detail of the movement, as shown in Figure 9, while samples at the other two speeds cannot be accurately classified by the ST-GCN architecture
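As a rough picture of how the speed label could feed the downstream model, the following sketch (reusing the helpers above) tags each skeleton clip with its nearest-neighbour speed level before ST-GCN training. The tagging scheme is a hypothetical pre-processing step for illustration, not the paper's published pipeline.

```python
def tag_clips_with_speed(clips, train_features, train_levels, k=3):
    """Attach a KNN speed level to each skeleton clip so the downstream
    ST-GCN can be trained on speed-distinguished samples."""
    tagged = []
    for clip in clips:  # clip: (T, J, C) skeleton sequence
        feature = skeleton_change_degree(clip).mean()
        level = classify_speed_knn(feature, train_features, train_levels, k)
        tagged.append((clip, int(level)))
    return tagged
```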


Introduction

Realization of human-emotion recognition applications in open spaces such as transportation systems and metropolitan squares can help avoid potentially dangerous conflicts. Human emotions can be recognized through changes in facial features or through delicate continuous movements [1]–[5]. Deep-learning image, speech, and brain-wave recognition technologies still struggle to recognize the delicate changes in human emotions: subtle changes in facial features for emotions such as happiness, anger, fear, and sadness require a large amount of data to be collected for image recognition. With regard to the differences in how happiness, anger, fear, and sadness are expressed through human language, semantic speech analysis must solve the problems of cultural variation and sound-source noise filtering to obtain reliable identification results. Relevant literature [7] uses continuous walking actions related to human emotions for data set construction and recognition-model training, but there is currently no behavioural distinction regarding the speed of delicate emotional movements, which results in ineffective recognition.
