Abstract
Sign language is a visual-spatial language used by the deaf and hard-of-hearing community to convey thoughts and ideas through hand gestures and facial expressions. This paper proposes a novel 3D stroke-based representation of dynamic sign language gestures that incorporates both local and global motion information. The dynamic gesture trajectories are segmented into strokes, or sub-units, at Key Maximum Curvature Points (KMCPs) of the trajectory. This new representation allows signs to be represented uniquely with fewer key frames. We extract 3D global features from the global trajectories by encoding strokes as 3D codes: each stroke is divided into smaller units (stroke subsegment vectors, or SSVs), and each SSV is assigned to one of 22 partitions. These partitions are obtained through a discretisation procedure that we call an equivolumetric partition (EVP) of the sphere, and the resulting codes for the strokes are referred to as EVP codes. In addition to global and local hand motion, facial expressions are considered for non-manual signs so that the meaning of words is interpreted completely. In contrast to existing methods, our stroke-based representation has a less expensive training phase, since it requires training only the key stroke features and the stroke sequences of each word.
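The processing chain described above (trajectory to strokes to SSVs to EVP codes) can be made concrete with a small sketch. The Python snippet below is not the paper's implementation: the turning-angle curvature estimate, the KMCP threshold, the SSV length, and the latitude-band layout used here as a 22-region stand-in for the equivolumetric partition are all illustrative assumptions.

import numpy as np

def turning_angles(traj):
    """Angle (radians) between successive segment vectors; a simple proxy for curvature."""
    v = np.diff(traj, axis=0)
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
    cosang = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    return np.arccos(cosang)  # one angle per interior trajectory point

def key_max_curvature_points(traj, thresh=0.5):
    """Indices of interior points whose turning angle is a local maximum above `thresh`.
    Illustrative KMCP detector; the paper's exact criterion may differ."""
    ang = turning_angles(traj)
    kmcp = [i + 1 for i in range(1, len(ang) - 1)
            if ang[i] > thresh and ang[i] >= ang[i - 1] and ang[i] >= ang[i + 1]]
    return [0] + kmcp + [len(traj) - 1]  # endpoints also act as stroke boundaries

def evp_code(direction, n_rings=5, ring_bins=(1, 5, 10, 5, 1)):
    """Map a unit direction vector to one of sum(ring_bins) = 22 regions on the sphere.
    This latitude-band layout is only an assumed stand-in for the paper's EVP."""
    x, y, z = direction
    elev = np.arccos(np.clip(z, -1.0, 1.0))        # polar angle in [0, pi]
    azim = np.arctan2(y, x) % (2 * np.pi)          # azimuth in [0, 2*pi)
    ring = min(int(elev / (np.pi / n_rings)), n_rings - 1)
    offset = sum(ring_bins[:ring])
    bins = ring_bins[ring]
    return offset + min(int(azim / (2 * np.pi / bins)), bins - 1)

def stroke_to_evp_codes(stroke, ssv_len=3):
    """Split a stroke into stroke subsegment vectors (SSVs) of `ssv_len` samples
    and return the EVP code of each SSV's net direction."""
    codes = []
    for s in range(0, len(stroke) - ssv_len, ssv_len):
        d = stroke[s + ssv_len] - stroke[s]
        n = np.linalg.norm(d)
        if n > 1e-8:
            codes.append(evp_code(d / n))
    return codes

# Example: synthetic 3D hand trajectory -> stroke boundaries -> EVP code sequence per stroke
traj = np.cumsum(np.random.randn(60, 3) * 0.02, axis=0)
bounds = key_max_curvature_points(traj)
strokes = [traj[a:b + 1] for a, b in zip(bounds[:-1], bounds[1:])]
print([stroke_to_evp_codes(s) for s in strokes])

In this sketch each sign would be summarised by the per-stroke EVP code sequences, which is the kind of compact, discrete global-motion feature the abstract attributes to the stroke-based representation.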