Abstract
With the advances in wearable cameras, users can record first-person-view videos for gesture recognition, or even sign language recognition, to help deaf or hard-of-hearing people communicate with others. In this paper, we propose a purely vision-based sign language recognition system that can be used in scenes with complex backgrounds. We design an adaptive skin-colour modelling method for hand segmentation so that the hand contour can be derived accurately even when different users operate our system under various lighting conditions. Four kinds of feature descriptors are integrated to describe the contours and salient points of hand gestures, and a support vector machine (SVM) is applied to classify them. Our recognition method is evaluated on two datasets: 1) the CSL dataset collected by ourselves, in which images were captured in three different environments, including one with a complex background; and 2) the public ASL dataset, in which images of the same gesture were captured under different lighting conditions. The proposed recognition method achieves accuracy rates of 100.0% and 94.0% on the CSL and ASL datasets, respectively.
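The first stage of the pipeline described above, segmenting the hand from the background by skin colour, can be sketched as follows. The fixed per-pixel RGB rule used here (the classic Peer et al. heuristic) is an illustrative stand-in only; the paper's own method is an *adaptive* skin-colour model, whose details are not given in the abstract.

```python
# Minimal sketch of per-pixel skin detection, the first stage of a
# vision-based hand-gesture pipeline. The fixed RGB thresholds below are
# the well-known Peer et al. heuristic, used as an illustrative stand-in
# for the paper's adaptive skin-colour model.

def is_skin(r, g, b):
    """Classify one RGB pixel as skin using a fixed rule of thumb."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Return a binary mask (list of lists) for an RGB image,
    given as rows of (r, g, b) tuples."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

if __name__ == "__main__":
    # Tiny synthetic 2x2 "image": two skin-like pixels, two non-skin.
    img = [[(200, 120, 90), (30, 30, 30)],
           [(0, 200, 0), (220, 140, 100)]]
    print(skin_mask(img))  # skin-like tones map to 1
```

In a full system, the resulting binary mask would be cleaned with morphological operations and the largest connected component taken as the hand region before contour features are extracted; an adaptive model would additionally re-estimate the colour thresholds per user and per lighting condition.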