Abstract
Most current solutions for human interactive action recognition rely on machine vision methods, which limits the scenarios in which actions can be recognized accurately. In this paper, we propose a novel two-level multi-head attention model for human interaction recognition based on inertial measurement units. Because individual actions may contribute differently to an interaction, the overall model weights individual actions when identifying interactions and is divided into an individual scene and an interaction scene. A multi-head attention mechanism is added to the model to obtain action weighting information in both scenes. We combine a bi-directional gated recurrent unit (BiGRU) with an improved convolutional neural network to fully capture spatiotemporal features. The experimental results show that the established model can accurately recognize seven interactive actions with an average recognition accuracy of 98.73%, which verifies the excellent performance of the proposed method.
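To make the described pipeline concrete, the following is a minimal sketch of a CNN + BiGRU + multi-head attention recognizer of the general kind outlined above. The layer sizes, the single-level attention (standing in for the paper's two-level individual/interaction weighting), and all hyperparameters are assumptions for illustration, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class InteractionRecognizer(nn.Module):
    """Illustrative sketch: convolutional feature extraction, a BiGRU for
    temporal modeling, and multi-head self-attention to weight time steps.
    All dimensions and the attention placement are assumed, not taken from
    the paper."""

    def __init__(self, n_channels=6, n_classes=7, hidden=64, n_heads=4):
        super().__init__()
        # Convolutional front end over raw IMU channels (local spatial features)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bi-directional GRU captures temporal dependencies in both directions
        self.bigru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        # Multi-head self-attention weights informative time steps
        self.attn = nn.MultiheadAttention(2 * hidden, n_heads, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels) window of IMU samples
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time/2, 64)
        h, _ = self.bigru(x)                              # (batch, time/2, 2*hidden)
        a, _ = self.attn(h, h, h)                         # attention over time steps
        return self.fc(a.mean(dim=1))                     # class logits


if __name__ == "__main__":
    model = InteractionRecognizer()
    dummy = torch.randn(8, 128, 6)   # 8 windows, 128 samples, 6 IMU channels
    print(model(dummy).shape)        # torch.Size([8, 7])
```

In the paper's design, such an attention stage would be applied at two levels, first to weight each person's individual action features and then to weight their contributions to the joint interaction, before the final classification.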