Abstract

Millions of people throughout the world describe themselves as deaf. Some of them suffer from severe hearing loss and consequently rely on alternative means of communicating with society, through either written or visual language. Several sign languages address this need. Nonetheless, a communication gap persists even when such languages are used, since only a small fraction of the population can use them. Over the last few years, driven by the increasing need for universal accessibility in computational resources, gesture recognition has been widely researched. In an attempt to reduce this communication gap, our approach proposes a computational solution that translates static gesture symbols into text symbols through computer vision, without the use of hand sensors or gloves. To guarantee the highest quality, with emphasis on system reliability and real-time translation, we developed an approach based on the Extreme Learning Machine (ELM) pattern recognition algorithm, fully implemented in hardware, and assessed it against these two metrics. Dedicated hardware components were designed to perform the image processing and pattern recognition tasks used in the project. As a case study, and to validate the technique, a recognition system for the Brazilian Sign Language (LIBRAS) was implemented. Besides ensuring that this approach can be applied to the recognition of any static hand gesture symbol, our main goal was to guarantee fast, reliable gesture recognition for communication between humans. Experimental results demonstrate that the system recognizes LIBRAS symbols with an accuracy of 97% and a response time of 6.5 ms per letter, while using only 43% (about 64,851 logic elements) of the FPGA area.
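For readers unfamiliar with ELM, the sketch below illustrates the core idea in NumPy: the input-to-hidden weights are drawn at random and never trained, and only the output weights are learned, via a single closed-form least-squares solve. This is a minimal illustration under assumed names and parameters (n_hidden, the tanh activation), not the authors' FPGA implementation.

    import numpy as np

    # Minimal Extreme Learning Machine (ELM) classifier sketch.
    # Input weights are random and fixed; only the output weights
    # are learned, via a single least-squares solve.

    rng = np.random.default_rng(0)

    def elm_train(X, Y, n_hidden=128):
        """X: (n_samples, n_features); Y: one-hot labels (n_samples, n_classes)."""
        W = rng.normal(size=(X.shape[1], n_hidden))  # random projection, never trained
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                       # hidden-layer activations
        beta = np.linalg.pinv(H) @ Y                 # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = np.tanh(X @ W + b)
        return np.argmax(H @ beta, axis=1)           # predicted class indices

This one-shot training step, together with inference that reduces to two matrix multiplications and an element-wise activation, is plausibly what makes ELM attractive for a fixed-function hardware pipeline of the kind described in the abstract.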
