Abstract

Unlike traditional gestures, sign language gestures involve many fine-grained, finger-level movements without accompanying wrist or arm motion, which makes them hard to detect with existing motion sensor-based approaches. We introduce the first low-cost sign language gesture recognition system that can differentiate fine-grained finger movements using the photoplethysmography (PPG) and motion sensors in commodity wearables. By leveraging the motion artifacts in PPG, our system can accurately recognize sign language gestures even in the presence of large body movements, which traditional motion sensor-based approaches cannot handle. We further explore the feasibility of combining PPG and motion sensors in wearables to improve recognition accuracy when body movement is limited. We develop a gradient boosted tree (GBT) model and a deep neural network-based model (i.e., ResNet) for classification, and apply transfer learning to the ResNet-based model to reduce the training effort. We build a prototype using low-cost PPG and motion sensors, conduct extensive experiments, and collect over 7,000 gestures from 10 adults in both static and body-motion scenarios. Results demonstrate that our system can differentiate nine finger-level gestures from American Sign Language with an average recognition accuracy of over 98%.
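To make the classification stage concrete, below is a minimal sketch of how a GBT classifier could be trained on per-gesture feature windows. The feature layout (statistical features extracted from PPG and accelerometer/gyroscope windows), the dataset shapes, and the use of scikit-learn's GradientBoostingClassifier are our assumptions for illustration, not the authors' exact pipeline.

```python
# Hypothetical sketch: GBT classification of 9 finger-level gestures from
# windowed PPG + motion-sensor features. Random placeholder data stands in
# for the real feature matrix; only the overall shape of the pipeline is
# meant to reflect the abstract's description.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# One row per gesture window; columns are statistical features (e.g., mean,
# std, energy) computed over PPG and accel/gyro axes -- assumed layout.
n_gestures, n_features, n_classes = 700, 48, 9
X = rng.normal(size=(n_gestures, n_features))
y = rng.integers(0, n_classes, size=n_gestures)  # 9 ASL finger gestures

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

gbt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
gbt.fit(X_train, y_train)
print(f"test accuracy: {gbt.score(X_test, y_test):.3f}")
```

The ResNet-based model mentioned in the abstract would replace the GBT in this pipeline, with transfer learning (reusing weights pretrained on another task) reducing the amount of labeled gesture data needed for training.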
