Abstract

Sign-language (SL) recognition, although it has been under investigation for many years, remains a challenge in practice. Sign language is a powerful tool that allows people with hearing and speech impairments to communicate their feelings and ideas to the world, making their integration with others smoother and less complicated. Complex backgrounds and lighting conditions hamper hand tracking and make SL detection extremely difficult. A web camera is used as a real-time video sensor, from which hand and body movements can be tracked more accurately and easily. We use HSV color-space segmentation to detect hand gestures and set the background to black. The images then pass through a series of processing steps, including masking and region-of-interest extraction, in which the hand region is isolated; the output is a binary image. PCANet is used to extract features, and a Convolutional Neural Network (CNN) is used to train on and classify the images. The system does not require the hand to be precisely aligned with the camera, nor does it rely on special gestures or hand gloves.
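
The abstract's preprocessing pipeline (HSV-based hand detection, masking, and a binary region of interest with a black background) could be sketched roughly as follows. This is a minimal illustration using OpenCV, not the authors' implementation; the skin-tone HSV thresholds and the largest-contour heuristic are assumptions that would need tuning for a given camera and lighting setup.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for skin tones; the paper gives no exact values,
# so these thresholds are purely illustrative.
LOWER_SKIN = np.array([0, 30, 60], dtype=np.uint8)
UPPER_SKIN = np.array([20, 150, 255], dtype=np.uint8)

def segment_hand(frame_bgr):
    """Return a binary image of the hand region with the background set to black."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)           # skin-colour mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))         # remove speckle noise
    # Keep only the largest connected contour, assumed here to be the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros_like(mask)
    hand = max(contours, key=cv2.contourArea)
    roi = np.zeros_like(mask)
    cv2.drawContours(roi, [hand], -1, 255, thickness=cv2.FILLED)
    return roi                                                 # binary hand image

# Web camera as the real-time video sensor.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    binary_hand = segment_hand(frame)   # would then be fed to PCANet / CNN stages
cap.release()
```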
