Abstract

In this paper, the authors design a wearable gesture recognition system that efficiently identifies alphabetic gestures in American Sign Language (ASL) while requiring minimal user calibration. The system uses a heterogeneous collection of sensors, namely contact, flex, and inertial sensors, placed strategically on the hand to record the movements and points of contact involved in gesture signing. To minimize user calibration, the authors propose a novel approach called State-based gesture modelling, which requires a user to sign only 7 gestures in order to build a State-based model of all 26 ASL alphabetic gestures for that user. A multi-stage gesture recognition algorithm, driven by the user-calibrated State-based model, is employed for gesture identification. The system was tested extensively with two trained ASL signers and one amateur signer, with up to 150 repetitions per gesture, yielding maximum recognition rates of 92% and 81%, respectively, indicating the robustness of the system. The combination of a lightweight algorithm and simple off-the-shelf sensors enabled the entire system to be implemented on an ATmega328P microcontroller, at a cost of 25 USD for a laboratory-stage prototype.
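The abstract does not spell out the State-based model's data layout or the matching step of the multi-stage algorithm. As a minimal sketch, assuming each gesture state is a contact-sensor bitmask plus quantized flex levels per finger, and that classification selects the nearest calibrated state, the C snippet below illustrates the general idea. All names (GestureState, user_model, classify) and the distance metric are hypothetical, not taken from the paper.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical per-gesture state: a contact-sensor bitmask plus
 * quantized flex-sensor levels, one per finger. Structure and names
 * are illustrative assumptions, not the paper's implementation. */
#define NUM_GESTURES 26   /* one state per ASL alphabetic gesture */
#define NUM_FLEX     5    /* one flex sensor per finger (assumed) */

typedef struct {
    uint8_t contacts;          /* bitmask of active contact sensors */
    uint8_t flex[NUM_FLEX];    /* quantized flex level per finger   */
} GestureState;

/* Populated during the 7-gesture calibration phase, from which the
 * remaining states are assumed to be derived for all 26 letters. */
static GestureState user_model[NUM_GESTURES];

/* Distance between an observed state and a modelled one: number of
 * mismatched contact bits plus absolute flex-level differences. */
static uint16_t state_distance(const GestureState *a, const GestureState *b)
{
    uint16_t d = 0;
    uint8_t diff = a->contacts ^ b->contacts;
    while (diff) {             /* popcount of mismatched contacts */
        d += diff & 1u;
        diff >>= 1;
    }
    for (int i = 0; i < NUM_FLEX; i++)
        d += (uint16_t)abs((int)a->flex[i] - (int)b->flex[i]);
    return d;
}

/* Classify an observed state as the nearest calibrated gesture,
 * returning an index 0..25 (0 = 'A'). */
int classify(const GestureState *obs)
{
    int best = 0;
    uint16_t best_d = state_distance(obs, &user_model[0]);
    for (int g = 1; g < NUM_GESTURES; g++) {
        uint16_t d = state_distance(obs, &user_model[g]);
        if (d < best_d) {
            best_d = d;
            best = g;
        }
    }
    return best;
}
```

A nearest-state lookup of this kind fits comfortably in the flash and RAM budget of an ATmega328P, which is consistent with the paper's claim that the entire system runs on that microcontroller.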
