Abstract

In this paper we present a novel framework for recognizing postures and gestures performed by human articulated limbs, focusing the exposition on a notable case study: the human hand. The method relies on visual cues located on the surface of the body, which are reconstructed in 3D by means of stereo vision. A simple geometrical hand model and a visibility ordering scheme provide the regularization information needed to solve the posture recognition problem (recovering the instantaneous hand configuration) in the presence of occlusions. The recognized postures feed a gesture recognition system (a gesture being a sequence of postures with semantic content) based on a clustering of posture paths and on a Viterbi decoder that selects the maximum-likelihood path. Some results are presented for a simple vocabulary consisting of sequences of signs taken from the Spanish hearing-impaired alphabet.
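The gesture stage summarized above amounts to maximum-likelihood decoding over sequences of quantized postures. The following minimal sketch illustrates generic Viterbi decoding on a discrete HMM; the posture labels, transition and emission matrices are hypothetical placeholders for illustration, not the models or parameters used in the paper.

```python
# Illustrative Viterbi decoder over quantized posture labels.
# Generic HMM decoding sketch; all probabilities below are made-up toy values.
import numpy as np

def viterbi(observations, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path (and its log-likelihood)
    for a sequence of observed posture labels, working in the log domain."""
    n_states = start_p.shape[0]
    T = len(observations)
    log_delta = np.full((T, n_states), -np.inf)   # best log-prob ending in state s at time t
    backptr = np.zeros((T, n_states), dtype=int)  # best predecessor state indices

    log_delta[0] = np.log(start_p) + np.log(emit_p[:, observations[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = log_delta[t - 1] + np.log(trans_p[:, s])
            backptr[t, s] = np.argmax(scores)
            log_delta[t, s] = scores[backptr[t, s]] + np.log(emit_p[s, observations[t]])

    # Backtrack the maximum-likelihood state sequence.
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1], float(np.max(log_delta[-1]))

# Toy example: 2 hidden gesture states, 3 quantized posture clusters.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))
```

In a recognition setting of this kind, one decoder per vocabulary item can be run over the observed posture sequence and the gesture with the highest resulting log-likelihood selected.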
