Abstract

Deaf people face many challenges when communicating with the hearing world. Many studies and industrial solutions offer interpretation from sign language to written text, but most are limited either to static images of individual letters or to an animated character that plays a single word in motion. This research therefore enhances established algorithms for image detection, image processing, and image translation. The contribution is twofold: first, the use of our own data set; second, a new solution that extracts SURF features after applying image filtering to the existing methods, accelerating the translation of long sentences. Experimental results on our data set show improved accuracy compared with other studies, in terms of both processing time and recognition rate, for sign language character recognition.
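The abstract describes filtering each image before SURF feature extraction but does not name the specific filter. As a minimal sketch of such a pre-filtering stage, assuming a Gaussian smoothing filter, the following NumPy code blurs a grayscale frame; the kernel size and sigma are illustrative choices, and SURF extraction itself (available in OpenCV's contrib module) would then run on the filtered frame and is not shown here.

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def filter_frame(gray: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Smooth a grayscale frame by direct 2-D convolution (same-size output)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=np.float64)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            # Correlate the kernel with the neighbourhood around (i, j)
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Smoothing suppresses sensor noise before descriptor extraction, which plausibly reduces spurious keypoints and so speeds up matching over long sentences; the exact mechanism of the reported speed-up is detailed in the full paper, not in the abstract.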
