Abstract

Sign language recognition is a challenging research area. In the proposed system, a trained computer system recognizes static hand gestures representing linguistic words. The main aim of the paper is the conversion of linguistic sign language into text and speech; recognized signs are also translated into Tamil and Hindi. The work comprises three processes. The first is pre-processing, in which the acquired images pass through segmentation, resizing, and grayscale conversion. The second is region-based analysis, which exploits both the boundary and interior pixels of an object; solidity, perimeter, convex hull, area, major axis length, minor axis length, eccentricity, and orientation are among the shape descriptors used as features. The derived features are first used to train a binary classifier, and the test images are then presented for classification. A KNN classifier is used for classification, providing good results with low computation time on larger datasets. Since the system uses a binary classifier, it performs one-versus-all classification. A PCNN (Pulse Coupled Neural Network) is used for pattern recognition. The third process is the hand gesture recognition itself.
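The following is a minimal sketch of the pipeline described in the abstract (pre-processing, region-based shape descriptors, and KNN classification), written with scikit-image and scikit-learn. The Otsu thresholding step, the image size, the choice of k, and all function names are illustrative assumptions; the paper does not specify these details, and the one-versus-all binary training and PCNN stages are not shown.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.neighbors import KNeighborsClassifier


def preprocess(image, size=(128, 128)):
    """Grayscale conversion, resizing, and simple threshold-based segmentation."""
    gray = rgb2gray(image)
    gray = resize(gray, size, anti_aliasing=True)
    mask = gray > threshold_otsu(gray)  # assumed segmentation method (Otsu)
    return mask


def shape_features(mask):
    """Region-based shape descriptors for the largest connected region."""
    regions = regionprops(label(mask))
    r = max(regions, key=lambda p: p.area)  # assume the hand is the largest blob
    return np.array([
        r.solidity, r.perimeter, r.convex_area, r.area,
        r.major_axis_length, r.minor_axis_length,
        r.eccentricity, r.orientation,
    ])


def train_and_classify(train_images, train_labels, test_images, k=3):
    """Train a KNN classifier on shape features, then label the test images."""
    X_train = [shape_features(preprocess(img)) for img in train_images]
    X_test = [shape_features(preprocess(img)) for img in test_images]
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)
```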
