Abstract

Gesture recognition, which plays an important role in understanding meaningful movements of the human body, is one of the most natural ways for humans to interact. Sign language is a fundamental and innate means of communication for hearing-impaired individuals. Although significant progress has been made, state-of-the-art gesture recognition methods yield weak performance under conditions involving dynamic gestures in videos. Robust gesture recognition therefore remains a challenging problem because of the many gesture-irrelevant factors that act as barriers. The key to robust gesture recognition is to learn effective and concise spatiotemporal information. Inspired by the great promise of the convolutional neural network (CNN) and its breakthroughs, we introduce an approach for identifying static alphabet gestures in American Sign Language (ASL). The proposed CNN-based approach classifies letters of the alphabet from A to Z and consists of three phases: a preprocessing phase for extracting the region of interest, a feature extraction phase, and a classification phase. The performance of the proposed gesture recognition approach is evaluated on a common ASL dataset, where it achieves 94.83% accuracy. This is sufficient to support a robust translator from gesture-based ASL to spoken language, as the approach handles a variety of 24 hand gestures.
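
To make the three-phase pipeline concrete, the following is a minimal sketch in Python with TensorFlow/Keras: an ROI crop and normalization step, a small convolutional feature extractor, and a softmax classifier over the 24 static letters. The input resolution, layer sizes, and the grayscale assumption are illustrative choices, not the settings reported in the paper.

# Sketch of a three-phase static ASL letter classifier (assumed configuration).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 24   # static ASL letters (J and Z are excluded because they involve motion)
IMG_SIZE = 64      # assumed input resolution after region-of-interest extraction

def preprocess(frame, bbox):
    """Phase 1: crop the hand region of interest from a grayscale frame and scale to [0, 1]."""
    x, y, w, h = bbox
    roi = frame[y:y + h, x:x + w]
    roi = tf.image.resize(roi[..., np.newaxis], (IMG_SIZE, IMG_SIZE))
    return tf.cast(roi, tf.float32) / 255.0

def build_model():
    """Phases 2 and 3: convolutional feature extraction followed by a softmax classification head."""
    return models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)

The dropout layer and the train/validation split are standard precautions against overfitting on a small gesture dataset; the actual regularization and training schedule used by the authors are not specified in the abstract.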
