Abstract

Sign languages are the natural way deaf people communicate. They have their own formal semantic definitions and syntactic rules and are composed of a large set of gestures involving the hands and head. Automatic recognition of sign languages (ARSL) aims to recognize signs and translate them into a written language. ARSL is a challenging task, as it involves background segmentation; hand and head posture modeling, recognition, and tracking; temporal analysis; and syntactic and semantic interpretation. When real-time requirements are added, the task becomes even more challenging. In this paper, we present a study of the real-time requirements of automatic sign language recognition for small sets of static and dynamic gestures of the Brazilian Sign Language (LIBRAS). For static gesture recognition, we implemented a system that works on small subsets of the alphabet, such as A, E, I, O, U and B, C, F, L, V, reaching very high recognition rates. For dynamic gesture recognition, we tested our system on a small set of LIBRAS words and collected the execution times. The aim was to measure the execution time of every stage of the recognition process (segmentation, analysis, and recognition itself) in order to evaluate the feasibility of building a real-time system that recognizes small sets of both static and dynamic gestures. Our findings indicate that the bottleneck of our current architecture is the recognition phase.

Highlights

  • A wave, a jump, a contortion, a smile, an expression of despair, or any other body motion is a person's reaction to events and a means of communication

  • In this paper we focus on static and dynamic gesture recognition using a software architecture based on Artificial Neural Networks and Hidden Markov Models (see the sketch after this list)

  • The methodology covers tools, experiments, and results, focusing on the real-time requirements of performing static and dynamic gesture recognition
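
A minimal sketch of how such a two-stage pipeline can be wired together, assuming an ANN that labels the hand posture in each frame and one discrete-observation HMM per word; all names, parameter shapes, and the scoring scheme are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy.special import logsumexp

def ann_posture(features, W1, b1, W2, b2):
    """One-hidden-layer feed-forward net labeling the posture in one frame."""
    h = np.tanh(features @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))

def hmm_log_likelihood(obs, log_pi, log_A, log_B):
    """Forward algorithm in log space for a discrete observation sequence."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return logsumexp(alpha)

def recognize_word(frames, ann_params, word_hmms):
    """Stage 1: the ANN labels each frame; stage 2: the best-scoring HMM wins."""
    labels = [ann_posture(f, *ann_params) for f in frames]
    return max(word_hmms, key=lambda w: hmm_log_likelihood(labels, *word_hmms[w]))
```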


Summary

Introduction

A wave, a jump, a contortion, a smile, an expression of despair, or any other body motion is a person's reaction to events and a means of communication. Bauer proposed an HMM-based system that recognizes 97 different gestures of the German Sign Language (GSL) with a recognition rate of 91.7%. To train new models, the training and testing sets must be representative, covering the possible variations of user input. To ease this task, the software can collect samples in real time using a sequential trigger that snaps images of the postures and organizes them into folders by gesture name. In this context, we developed a simple algorithm to segment the user's hands, called Virtual Wall, which sets a depth threshold that works like an invisible wall in front of the user. The position of this wall is calculated from the user's center of mass (CoM) as VW_depth = CoM_depth − α. After recalculating ROI_height, ROI_width needs to be adjusted so that the resulting ROI remains the smallest rectangle enclosing the blob of the hand.
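
As a rough illustration of the Virtual Wall idea, the sketch below thresholds a depth map at VW_depth = CoM_depth − α and then shrinks the ROI to the smallest rectangle enclosing the remaining blob; the α value and all names are hypothetical, and the paper's own implementation may differ:

```python
import numpy as np

ALPHA_MM = 150  # assumed offset (mm) placing the wall just in front of the user

def virtual_wall_segment(depth_map, com_depth, alpha=ALPHA_MM):
    """Keep only pixels closer to the camera than the virtual wall."""
    vw_depth = com_depth - alpha               # VW_depth = CoM_depth - alpha
    return (depth_map > 0) & (depth_map < vw_depth)

def enclosing_roi(mask):
    """Smallest rectangle (top, left, height, width) enclosing the hand blob."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                            # no hand in front of the wall
    height = ys.max() - ys.min() + 1           # ROI_height
    width = xs.max() - xs.min() + 1            # ROI_width follows the blob
    return ys.min(), xs.min(), height, width
```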

The Experiments with Static Gestures
Findings
Conclusions and Future Work

