Abstract

One of the most common means of communication in the deaf community is sign language. This paper addresses the problem of recognizing Arabic sign language at the word level. The proposed system combines the spatio-temporal local binary pattern (STLBP) feature extraction technique with a support vector machine (SVM) classifier. The system takes a sequence of sign images or a video stream as input and localizes the head and hands using the IHLS color space and a random forest classifier. A feature vector is extracted from the segmented images using the local binary pattern on three orthogonal planes (LBP-TOP) algorithm, which jointly captures the appearance and motion features of gestures. The resulting feature vector is then classified by the SVM. The proposed method does not require signers to wear gloves or any other marker devices. Experimental results on an Arabic sign language (ArSL) database containing 23 signs (words) recorded by 3 signers show the effectiveness of the proposed method. In the signer-dependent test, the proposed system based on LBP-TOP and SVM achieves an overall recognition rate of up to 99.5%.
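The core of the descriptor above is LBP-TOP: computing ordinary 2-D local binary patterns on the XY, XT, and YT planes of a video volume and concatenating the resulting histograms, so that appearance (XY) and motion (XT, YT) are encoded jointly. The following is a minimal NumPy sketch of that idea, not the paper's implementation: it uses a basic 8-neighbour LBP with no uniform-pattern mapping, no block decomposition, and toy parameters, all of which are assumptions for illustration.

```python
import numpy as np

def lbp_plane(plane):
    """Basic 8-neighbour LBP codes for one 2-D plane (interior pixels only)."""
    c = plane[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        n = plane[1 + dy:plane.shape[0] - 1 + dy,
                  1 + dx:plane.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit   # set one bit per neighbour
    return codes

def lbp_top_histogram(volume):
    """Concatenate LBP histograms from the XY, XT and YT planes of a T x H x W volume."""
    hists = []
    for planes in (volume,                      # XY planes (one per frame)
                   volume.transpose(1, 0, 2),   # XT planes (one per row)
                   volume.transpose(2, 0, 1)):  # YT planes (one per column)
        codes = np.concatenate([lbp_plane(p).ravel() for p in planes])
        hist = np.bincount(codes, minlength=256).astype(float)
        hists.append(hist / hist.sum())         # normalise each plane's histogram
    return np.concatenate(hists)                # 3 x 256 = 768-dim descriptor

# toy example: a random 8-frame 16x16 "gesture" clip standing in for a segmented sign
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(8, 16, 16))
feat = lbp_top_histogram(clip)
print(feat.shape)  # (768,)
```

In a full pipeline, one such descriptor per segmented sign clip would be fed to an SVM (e.g. scikit-learn's `SVC`) for word-level classification, as the abstract describes.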
