Abstract
In recent years, developing algorithms that assist in communicating with deaf people has become an important challenge, and the automatic translation of sign language is an active research topic. Such a system involves several processes, from video capture and pre-processing to the identification or classification of the sign. Building systems that extract discriminative features which strengthen a classifier's ability to generalize remains a very challenging problem. The meaning of a sign is the combination of the hand movement, the hand shape, and the point of contact of the hand with the body. This paper presents a method to detect and translate hand gestures. First, we obtain 15 frames per word and extract 3 regions of interest (the two hands and the face), from which we compute geometric features. Finally, we apply several classification techniques and present the experimental results.
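As a rough illustration of the pipeline the abstract describes (frame sampling, region-of-interest extraction, geometric features, classification), the sketch below shows one plausible realization in Python. It is not the authors' implementation: the ROI detector is left as a hypothetical placeholder, the specific geometric descriptors (centroids and hand-to-face distances) are an assumption, and the SVM is just one of several possible classifiers.

```python
# Illustrative sketch only; assumptions are noted in comments.
import cv2
import numpy as np
from sklearn.svm import SVC

N_FRAMES = 15  # frames sampled per signed word, as stated in the abstract


def sample_frames(video_path, n=N_FRAMES):
    """Uniformly sample n frames from a video of one signed word."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, total - 1, n).astype(int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames


def detect_rois(frame):
    """Hypothetical placeholder: return bounding boxes (x, y, w, h) for the
    face, left hand, and right hand. The abstract does not specify a detector,
    so any skin-color or landmark-based method could be plugged in here."""
    raise NotImplementedError("supply a face/hand detector")


def geometric_features(roi_boxes):
    """One plausible set of geometric descriptors per frame: ROI centroids
    plus the distances from each hand to the face (an assumption)."""
    (fx, fy, fw, fh), (lx, ly, lw, lh), (rx, ry, rw, rh) = roi_boxes
    face_c = np.array([fx + fw / 2.0, fy + fh / 2.0])
    left_c = np.array([lx + lw / 2.0, ly + lh / 2.0])
    right_c = np.array([rx + rw / 2.0, ry + rh / 2.0])
    dists = [np.linalg.norm(left_c - face_c), np.linalg.norm(right_c - face_c)]
    return np.concatenate([face_c, left_c, right_c, dists])


def video_to_vector(video_path):
    """Concatenate per-frame features into one fixed-length vector per word."""
    feats = [geometric_features(detect_rois(f)) for f in sample_frames(video_path)]
    return np.concatenate(feats)


# Training and prediction with one example classifier (others could be swapped in):
# X = np.stack([video_to_vector(p) for p in train_videos]); y = train_labels
# clf = SVC(kernel="rbf").fit(X, y)
# predicted_words = clf.predict(np.stack([video_to_vector(p) for p in test_videos]))
```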