Abstract

Visual communication is important for deaf and/or mute people. It is also one of the tools for communication between humans and machines. In this paper, we develop an automatic Thai sign language translation system that is able to translate sign language beyond finger-spelling. In particular, we utilize the Scale Invariant Feature Transform (SIFT) to match a test frame with observation symbols drawn from keypoint descriptors collected in a signature library. These keypoint descriptors are computed from several keyframes recorded from five subjects at different times of day over several days. Hidden Markov Models (HMMs) are then used to translate observation sequences into words. We also collected Thai sign language videos from 20 subjects for testing. The system achieves average accuracies of approximately 86–95% in the signer-dependent setting, 79.75% in the signer-semi-independent setting (same subjects used in the HMM training only), and 76.56% in the signer-independent setting. These results are obtained with the constrained system, in which each signer wears a long-sleeved shirt in front of a dark background. The unconstrained system, in which signers do not wear long-sleeved shirts and sign in front of various natural backgrounds, yields a good average result of around 74% in the signer-independent experiment. An important feature of the proposed system is its consideration of the shapes and positions of the fingers, in addition to overall hand information. This feature enables the system to distinguish hand sign words that have similar gestures.
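
To make the two-stage pipeline concrete, the following is a minimal sketch, not the authors' implementation: SIFT descriptors map each test frame to the index of the closest keyframe in a signature library, and per-word discrete HMMs score the resulting observation sequence with the forward algorithm. The matching heuristic, the HMM parameterization, and all function and variable names here (`frame_to_symbol`, `log_forward`, `translate`, `library`, `word_hmms`) are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def frame_to_symbol(frame_bgr, library):
    """Map a frame to the index of the best-matching keyframe.

    `library` is a list of SIFT descriptor arrays, one per observation
    symbol, precomputed from the recorded keyframes (the signature library).
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return None
    scores = []
    for lib_desc in library:
        matches = matcher.match(desc, lib_desc)
        # Lower mean descriptor distance = closer match to this symbol.
        scores.append(np.mean([m.distance for m in matches])
                      if matches else np.inf)
    return int(np.argmin(scores))

def log_forward(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under one HMM
    (forward algorithm), used to pick the most probable word."""
    alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(
            alpha[:, None] + np.log(trans_p), axis=0) + np.log(emit_p[:, o])
    return np.logaddexp.reduce(alpha)

def translate(frames, library, word_hmms):
    """`word_hmms` maps each word to its (start_p, trans_p, emit_p) tuple;
    the word whose HMM best explains the observation sequence wins."""
    obs = [s for f in frames
           if (s := frame_to_symbol(f, library)) is not None]
    return max(word_hmms, key=lambda w: log_forward(obs, *word_hmms[w]))
```

One plausible reason for this design, consistent with the abstract, is that SIFT keypoints cover finger shapes and positions as well as the whole hand, so two words with similar gross hand motion can still map to different observation symbols before HMM decoding.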
