Abstract

Gesture recognition based on wearable sensors has received extensive attention in recent years. This paper proposes a gesture recognition model (CGR_ATT) that combines a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) with an attention mechanism to improve the recognition accuracy of wearable-sensor-based systems. First, the CNN serves as a feature extractor, automatically learning features from the sensor data through multiple layers of convolution and pooling and capturing the spatial characteristics of gestures. Second, a temporal modeling unit, the GRU, is introduced to capture the temporal dynamics of gesture sequences; by controlling the information flow through its gating mechanisms, it effectively models the temporal relationships in the sensor data. Finally, an attention mechanism assigns different weights to the GRU's hidden states: by computing an attention weight for each time step, the model automatically selects the time periods most relevant to the gesture movement. The GR-dataset proposed in this paper comprises 910 training sets. The model achieves a final accuracy of 97.57%. Compared with CLA-net, CLT-net, CGR, GRU, LSTM, and CNN, the experimental results demonstrate that the proposed method achieves superior accuracy.
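The attention step described above, computing a weight for each time step of the GRU's hidden-state sequence and forming a weighted summary, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the scoring vector `w` stands in for a learned attention parameter, and the shapes are hypothetical:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def temporal_attention(hidden_states, w):
    """Weight GRU hidden states by per-time-step attention scores.

    hidden_states: (T, d) array, one hidden vector per time step.
    w: (d,) scoring vector (hypothetical learned parameter).
    Returns the attention weights (T,) and the context vector (d,).
    """
    scores = hidden_states @ w          # one scalar score per time step
    weights = softmax(scores)           # normalized attention weights
    context = weights @ hidden_states   # weighted sum over time
    return weights, context

# Toy example: 5 time steps, hidden size 4.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 4))   # stand-in for GRU hidden states
w = rng.standard_normal(4)
weights, context = temporal_attention(H, w)
```

The weights sum to one, so time steps the model deems irrelevant contribute little to the context vector that is passed to the classifier.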
