Abstract

Gesture recognition has become a thriving research area in modern human motion recognition systems. Growing demand for efficient interactive human-machine interfaces, along with commercial objectives and other factors, continues to fuel this momentum. Understanding human gestures is essential for prevention and health monitoring applications. In particular, analyzing hand gestures is of paramount importance in personalized healthcare, helping practitioners provide more qualitative assessments of a subject's pathologies, such as Parkinson's disease. This work proposes a novel deep neural network approach that forecasts future gestures from a given sequence of hand motion captured by the wearable capacitance sensor of an innovative gesture recognition hardware system. To do this, we use an attention-based recurrent neural network to capture the temporal features of hand motion and unveil the underlying pattern between the gesture and these sequences. While the attention layers weight the salient short-term parts of the sequence, the gated recurrent unit (GRU) layer learns the inherent interdependencies of long-term hand gesture temporal sequences. The efficiency of the proposed model is evaluated against cutting-edge work in the field using several metrics.

Note to Practitioners: In this article, the problem of human hand gesture recognition is analyzed using deep learning techniques. The proposed model takes historical motion sequences collected from a wearable capacitance sensor as input to predict hand gestures. The model leverages the intrinsic correlation of motion sequences and extracts their salient parts while accounting for their temporal, complex, and nonlinear features. The approach also studies the effect of different lengths of historical motion sequences on prediction outcomes, which avoids cumbersome data collection, heavy data processing, and high computational cost. The model is trained and assessed on real-world data through comparisons with alternative approaches, including well-known classifiers. It yields very encouraging results and demonstrates that the proposed approach is competitive, as it can reproduce typical activity trends for important channels. The present findings could support the development of intelligent wearable devices that predict hand gestures using a limited number of channels. This work could also help practitioners provide a more qualitative appraisal of patients suffering from different pathologies, such as Parkinson's disease, in personalized healthcare applications, and support the development of wearable gesture recognition devices on a large scale.
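To illustrate the kind of architecture the abstract describes, the following is a minimal sketch (not the authors' implementation) of an attention-augmented GRU classifier that maps multi-channel capacitance sequences to gesture predictions, written in PyTorch. The channel count, sequence length, hidden size, and number of gesture classes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed architecture, not the authors' code): a GRU layer
# models long-term temporal dependencies in the motion sequence, and an
# additive attention layer weights the salient short-term time steps before
# classification into gesture classes.
import torch
import torch.nn as nn


class AttentionGRUClassifier(nn.Module):
    def __init__(self, n_channels=8, hidden_size=64, n_gestures=10):
        super().__init__()
        # GRU over the historical capacitance readings (batch, time, channels).
        self.gru = nn.GRU(n_channels, hidden_size, batch_first=True)
        # Attention scores one value per time step.
        self.attn = nn.Linear(hidden_size, 1)
        self.classifier = nn.Linear(hidden_size, n_gestures)

    def forward(self, x):
        h, _ = self.gru(x)                      # (batch, time, hidden)
        scores = self.attn(h)                   # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)  # attention over time steps
        context = (weights * h).sum(dim=1)      # weighted temporal summary
        return self.classifier(context)         # gesture logits


# Usage with random data: 32 sequences of 100 time steps from 8 sensor channels.
model = AttentionGRUClassifier()
logits = model(torch.randn(32, 100, 8))
print(logits.shape)  # torch.Size([32, 10])
```

Varying the input sequence length (here 100 time steps) is one way to reproduce the study of how much motion history is needed for reliable prediction.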
