Abstract

Deaf and hearing-impaired people use their hands in place of a voice, conveying their thoughts through the descriptive gestures that form a sign language. A sign language recognition system translates these gestures into a form of spoken language. Such systems face several challenges: high similarity between different signs, difficulty in determining where a sign starts and ends, and the lack of comprehensive benchmark databases. This paper proposes a system for recognizing Arabic sign language using the 3D trajectory of the hands. The proposed system models the trajectory as a polygon, extracts features that describe this polygon, and feeds them to a classifier to recognize the signed word. The system is tested on a database of 100 words collected using Kinect, and it is evaluated in both signer-dependent and signer-independent settings. A comparison with published work on a publicly available dataset reflects the superiority of the proposed technique.
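To illustrate the idea of describing a hand trajectory as a polygon, the sketch below computes a few simple shape descriptors (perimeter, per-axis extents, mean turning angle) from a sequence of 3D hand positions. These particular descriptors and the function name are illustrative assumptions, not the paper's actual feature set; the point is only that a variable-length trajectory is reduced to a fixed-length feature vector suitable for a classifier.

```python
import math

def polygon_features(trajectory):
    """Toy fixed-length descriptors for a hand trajectory modeled as a polygon.

    `trajectory` is a list of (x, y, z) hand positions, e.g. from Kinect.
    Returns [perimeter, x-extent, y-extent, z-extent, mean turning angle].
    """
    # consecutive displacement vectors (the polygon's edges)
    edges = [tuple(b - a for a, b in zip(p, q))
             for p, q in zip(trajectory, trajectory[1:])]
    lengths = [math.sqrt(sum(c * c for c in e)) for e in edges]
    perimeter = sum(lengths)
    # bounding-box size along each axis
    extents = [max(p[i] for p in trajectory) - min(p[i] for p in trajectory)
               for i in range(3)]
    # mean turning angle between consecutive edges (0 for a straight path)
    angles = []
    for (e1, l1), (e2, l2) in zip(zip(edges, lengths),
                                  zip(edges[1:], lengths[1:])):
        cos = sum(a * b for a, b in zip(e1, e2)) / ((l1 * l2) or 1.0)
        angles.append(math.acos(max(-1.0, min(1.0, cos))))
    turn = sum(angles) / len(angles) if angles else 0.0
    return [perimeter] + extents + [turn]

# A square-like path in the XY plane yields a 5-element feature vector
feat = polygon_features([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)])
```

Because the output length is fixed, trajectories of any duration map to comparable feature vectors, which is what lets a standard classifier operate on them.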

Highlights

  • Communicating thoughts and feelings is an essential need for human beings

  • Sign language recognition systems try to fill this gap by exploiting advanced technologies to automatically translate signed language into a form of spoken language, such as text or speech

  • These features are used to train an Extreme Learning Machine (ELM), and 82.8% accuracy is reported on a limited database of 8 words from Chinese sign language

Summary

INTRODUCTION

Communicating thoughts and feelings is an essential need for human beings. Hearing disabilities hinder natural speech-based communication. To communicate with each other and with hearing people, deaf communities have invented nonverbal languages that use descriptive gestures to convey their thoughts. To communicate with deaf people, hearing people need skilled professional translators who know both the spoken and the signed language. Such translators are few and cannot be available at all times. This work proposes to use the 3D trajectory of the hands to recognize signs. The proposed system is composed of three stages: preprocessing, feature representation, and classification. The feature representation stage builds a feature vector that describes the trajectory polygon; these features are then used to train and test different classifiers that recognize the signs in the third stage. In summary, this work proposes a trajectory-based sign language recognition system.
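The three-stage architecture described above can be sketched as a minimal pipeline. Everything here is a hypothetical stand-in: the normalization, the extent-based features, and the 1-nearest-neighbour classifier are placeholders for the paper's actual preprocessing, polygon features, and trained classifiers, shown only to make the data flow concrete.

```python
def preprocess(trajectory):
    """Stage 1 (toy): translate the trajectory so it starts at the origin."""
    x0, y0, z0 = trajectory[0]
    return [(x - x0, y - y0, z - z0) for x, y, z in trajectory]

def extract_features(trajectory):
    """Stage 2 (toy): fixed-length feature vector of per-axis extents."""
    return [max(p[i] for p in trajectory) - min(p[i] for p in trajectory)
            for i in range(3)]

def classify(features, templates):
    """Stage 3 (toy): 1-nearest-neighbour over labelled feature vectors."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda t: sq_dist(features, t[1]))[0]

def recognise(trajectory, templates):
    """Run the full pipeline on one hand trajectory."""
    return classify(extract_features(preprocess(trajectory)), templates)

# Usage with two made-up word templates (label, feature-vector pairs)
templates = [("small", [1.0, 1.0, 0.0]), ("big", [5.0, 5.0, 0.0])]
print(recognise([(0, 0, 0), (1, 1, 0)], templates))  # prints "small"
```

Keeping the stages as separate functions mirrors the paper's decomposition, so each stage (e.g. the classifier) can be swapped out and evaluated independently.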

RELATED WORKS
PREPROCESSING
FEATURES REPRESENTATION
Polygon Description
Positional Trajectory Feature
CLASSIFICATION TECHNIQUES
EXPERIMENTAL RESULTS
Arabic Sign Language Dataset
Effect of Trajectory Compression
Fine Tuning Ensemble Subspace KNN Classifier
Evaluation of the Proposed Features
Comparison with Published Work
CONCLUSIONS
