Abstract

Hand gesture recognition is an essential modality for Human-Robot Interaction (HRI). Sign language is the most intuitive and direct way for hearing- or speech-impaired people to communicate. Furthermore, emotional interaction with human beings is desirable for robots. In this talk, the hand gesture recognition and emotion recognition components of an integrated system will be described; the system is able to track multiple people simultaneously, recognize their facial expressions, and identify the social atmosphere. Consequently, robots can recognize the hand gestures and facial expressions of different people, together with their emotion variations, and respond appropriately. A hand gesture recognition algorithm that combines two distinct recognizers has been studied: the two recognizers jointly determine the hand's gesture through a combinatorial approach recognizer (CAR) equation. For the facial expression recognition scheme, we fuse a feature-vector-based approach (FVA) with a differential-active-appearance-model-features-based approach (DAFA) to obtain not only accurate positions of feature points but also richer texture and appearance information. Experimental results, including video demonstrations, will show that the proposed algorithms recognize hand gestures and facial expressions accurately and robustly.
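The abstract does not give the CAR equation itself, but the idea of two recognizers jointly deciding a gesture can be sketched as a generic confidence-weighted fusion over a shared label set. The function name, weight, and gesture labels below are illustrative assumptions, not the authors' actual formulation:

```python
# Hypothetical sketch: fusing per-class confidence scores from two
# hand gesture recognizers. The actual CAR equation is not specified
# in the abstract; this shows one simple convex combination.

def fuse_scores(scores_a, scores_b, weight_a=0.6):
    """Combine per-class confidences from recognizers A and B."""
    weight_b = 1.0 - weight_a
    labels = set(scores_a) | set(scores_b)
    fused = {
        g: weight_a * scores_a.get(g, 0.0) + weight_b * scores_b.get(g, 0.0)
        for g in labels
    }
    # Final decision: the gesture with the highest fused confidence.
    return max(fused, key=fused.get), fused

# Example with made-up gesture labels and scores:
a = {"wave": 0.7, "point": 0.2, "fist": 0.1}
b = {"wave": 0.4, "point": 0.5, "fist": 0.1}
best, fused = fuse_scores(a, b)
```

Here recognizer A favors "wave" and recognizer B favors "point"; with weight 0.6 on A, the fused decision is "wave" (0.6·0.7 + 0.4·0.4 = 0.58). Any monotone combination rule could replace the weighted sum without changing the overall two-recognizer structure.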
