Abstract
Sign language is the primary means of communication in the deaf and hearing-impaired community and consists of a combination of hand movements and facial expressions. Successful computer vision research in recent years has paved the way for the first automatic sign language recognition systems. However, unresolved challenges, such as cultural differences among the world's sign languages, the lack of representative databases for model training, the relatively small size of the region of interest, and occlusion, keep the reliability of automatic sign language recognition far from human-level performance, especially for Russian Sign Language. To address this, we present a framework and an automatic recognition system for one-handed gestures of Russian Sign Language (RSL). The developed system supports both online and offline modes and recognizes 44 classes of one-handed RSL gestures with almost 70% accuracy. The system is based on the color-depth Kinect v2 sensor and is trained on the TheRuSLan database using a combination of state-of-the-art deep learning approaches. Future research will focus on extracting additional features, expanding the dataset, and increasing the number of recognizable gestures by adding two-handed gestures. The developed vision-based RSL recognition system is intended as an auxiliary system for deaf and hearing-impaired people.
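The abstract does not disclose the exact network architecture, so the following is only a minimal illustrative sketch of one common design for RGB-D gesture recognition: a per-frame CNN encoder whose features are aggregated over time by an LSTM and classified into the 44 gesture classes mentioned above. All layer choices, dimensions, and names are assumptions, not the authors' published model.

```python
# Illustrative sketch only: layer choices, sizes, and names are assumptions,
# not the architecture actually used in the paper.
import torch
import torch.nn as nn

NUM_CLASSES = 44  # one-handed RSL gestures reported in the abstract


class GestureRecognizer(nn.Module):
    """Hypothetical CNN+LSTM classifier for RGB-D gesture clips."""

    def __init__(self, feat_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        # Per-frame encoder for 4-channel input (Kinect v2 color + depth).
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal aggregation of frame features over the clip.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, NUM_CLASSES)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels=4, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)  # last hidden state summarizes the clip
        return self.head(h_n[-1])       # logits over the 44 gesture classes


# Usage: classify a batch of two 16-frame RGB-D clips (random data for the demo).
model = GestureRecognizer()
logits = model(torch.randn(2, 16, 4, 64, 64))
print(logits.shape)  # torch.Size([2, 44])
```

Feeding depth alongside color as a fourth input channel is one simple way to exploit the Kinect v2's range data; other plausible designs (e.g., separate color and depth streams fused later, or 3D convolutions over the clip) would serve the same purpose.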