Abstract

People with hearing impairment use sign language for communication: hand gestures represent numbers, letters, words, and sentences, allowing them to communicate among themselves. The problem arises when they need to interact with people who do not know sign language. An automated system that converts sign language to text would make such interaction easier. Many systems for sign language recognition have been developed recently, but most of them run on laptops and desktop computers, which are impractical to carry because of their weight and size. This article describes the design and implementation of an Android application that converts American Sign Language to text, so that it can be used anywhere and at any time. An image is captured by the smartphone camera, and skin segmentation is performed in the YCbCr color space. Features are extracted from the image using the Histogram of Oriented Gradients (HOG) descriptor, and classification is done with a Support Vector Machine (SVM) to recognize the sign.
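The pipeline described above (YCbCr skin segmentation, HOG feature extraction, SVM classification) can be sketched as follows. This is a minimal illustrative sketch in Python using OpenCV, scikit-image, and scikit-learn, not the paper's Android implementation; the Cr/Cb skin thresholds and HOG parameters shown are commonly used values assumed here, since the paper's exact settings are not given in the abstract.

```python
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def segment_skin_ycbcr(bgr_image):
    """Keep only skin-colored pixels by thresholding in the YCbCr color space."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Assumed, commonly cited skin bounds: Cr in [133, 173], Cb in [77, 127].
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

def extract_hog_features(bgr_image, size=(64, 64)):
    """Compute a HOG descriptor from the (segmented) hand image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    return hog(gray, orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical usage: train_images / train_labels are captured sign images
# with their letter labels; test_image is a new frame from the camera.
def train_and_predict(train_images, train_labels, test_image):
    features = [extract_hog_features(segment_skin_ycbcr(img))
                for img in train_images]
    clf = SVC(kernel="linear")          # SVM classifier over HOG features
    clf.fit(features, train_labels)
    test_feat = extract_hog_features(segment_skin_ycbcr(test_image))
    return clf.predict([test_feat])[0]  # recognized sign label
```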
