Abstract

Recognition of non-verbal gestures is essential for robots to understand a user's state and intention in Human-Robot Interaction (HRI) scenarios. In this paper, a multi-modal system is proposed that recognizes a user's hand gestures and estimates body poses from the robot's viewpoint alone. A range camera is employed to acquire depth data at a high frame rate; depth data is useful for image segmentation and for object detection and localization in 3D space. A pair of stereo cameras senses the user's head gestures and eye-gaze direction, which provide useful cues about the user's direction of attention. Both hand shapes and hand trajectories are recognized. Full configurations of body poses are estimated with a model-based algorithm: poses are tracked by a particle filter and refined by a gradient-based search in the neighborhood of the particles with the largest weights.
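The tracking scheme described above, particle filtering with gradient-based refinement of the highest-weight particles, can be sketched on a toy pose vector. Everything here (the Gaussian observation model, the motion noise, the parameter values) is an illustrative assumption, not the paper's actual body-pose model:

```python
import math
import random

random.seed(0)

def likelihood(pose, obs):
    # Toy observation model (assumption): Gaussian score around the observed pose.
    return math.exp(-0.5 * sum((p - o) ** 2 for p, o in zip(pose, obs)))

def refine(pose, obs, steps=10, lr=0.1):
    # Gradient ascent on the log-likelihood above; its gradient is (obs - pose).
    for _ in range(steps):
        pose = [p + lr * (o - p) for p, o in zip(pose, obs)]
    return pose

def pf_step(particles, obs, top_k=5):
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = [[p + random.gauss(0.0, 0.2) for p in part] for part in particles]
    # Weight each particle by the observation likelihood.
    weights = [likelihood(p, obs) for p in particles]
    # Refine only the top-k heaviest particles (a stand-in for the paper's
    # gradient-based search in the neighborhood of high-weight particles).
    order = sorted(range(len(particles)), key=lambda i: weights[i])
    for i in order[-top_k:]:
        particles[i] = refine(particles[i], obs)
        weights[i] = likelihood(particles[i], obs)
    # Resample in proportion to the updated weights.
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

# Track a toy 3-DoF "pose" toward a fixed observation over 10 frames.
observation = [1.0, -0.5, 0.3]
particles = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(100)]
for _ in range(10):
    particles = pf_step(particles, observation)
estimate = [sum(p[d] for p in particles) / len(particles) for d in range(3)]
```

Restricting the gradient search to the few heaviest particles keeps the per-frame cost low while still sharpening the estimate beyond what the particle spread alone provides.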
