Abstract

Sign language is the main mode of communication used by hearing-impaired people. The development of tools for sign language recognition is important to improve the interaction of these people with society. This work compares gesture recognition models based on an instrumented glove with and without a handcrafted feature extraction approach, the latter being a classical method for pattern recognition. For the handcrafted feature extraction approach, the Sequential Forward Feature Selection (SFS) method was employed to select the best features, and different classifiers were analyzed. For the approach without handcrafted feature extraction, recurrent neural networks, namely Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, were employed, together with a proposed Ensemble GRU model. The proposed Ensemble GRU is composed of three layers, each responsible for acquiring knowledge of one period of the gesture: the initial, gesture, and final periods. The LSTM and GRU classifiers can receive the gesture data segments directly, without manual feature extraction. A dataset composed of the 26 gestures of the Brazilian Sign Language alphabet was explored. In the handcrafted feature extraction approach, the best result was 88.5% accuracy, obtained with the K-Nearest Neighbors model. In contrast, the GRU and Ensemble GRU models reached 94.6% and 96.8% accuracy, respectively. These results indicate that an approach without manual feature extraction can be applied to gesture recognition based on an instrumented glove and can outperform conventional methods. Furthermore, the proposed Ensemble GRU model achieved better results than the standard GRU and LSTM models.
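To make the ensemble idea more concrete, the sketch below shows one possible reading of the architecture: each of the three components is realized as a GRU branch dedicated to one period of the gesture (initial, gesture, final), and their class probabilities are averaged. The segment length, number of sensor channels, layer sizes, and averaging fusion are assumptions for illustration only, not details confirmed by the paper.

```python
# Hypothetical sketch of an ensemble of GRU branches, one per gesture period.
# Shapes and the averaging fusion are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES = 10   # assumed number of glove sensor channels
N_CLASSES = 26    # gestures of the Brazilian Sign Language alphabet
SEG_LEN = 20      # assumed number of time steps per period segment

def gru_branch(name):
    """One GRU branch that learns a single period (segment) of the gesture."""
    inp = layers.Input(shape=(SEG_LEN, N_FEATURES), name=f"{name}_input")
    x = layers.GRU(64, name=f"{name}_gru")(inp)
    out = layers.Dense(N_CLASSES, activation="softmax", name=f"{name}_softmax")(x)
    return inp, out

# Three branches: initial, gesture (core), and final periods of each sign.
branches = [gru_branch(p) for p in ("initial", "gesture", "final")]
inputs = [inp for inp, _ in branches]
outputs = [out for _, out in branches]

# Fuse the period-level predictions by averaging their class probabilities.
fused = layers.Average(name="ensemble_average")(outputs)
model = Model(inputs=inputs, outputs=fused)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Example call with random data shaped like segmented glove recordings.
dummy = [np.random.rand(4, SEG_LEN, N_FEATURES).astype("float32") for _ in range(3)]
print(model.predict(dummy).shape)  # (4, 26): class probabilities per sample
```

In this sketch the three branches are trained jointly through the averaged output; training each branch separately and voting would be an alternative fusion strategy.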
