Abstract

Given the continued growth of human needs alongside the constant improvement of technology, it is desirable to develop techniques that make communication between computers and humans as intuitive as possible. The ability to automatically recognize human gestures using computer vision, among other kinds of sensors, opens up a whole range of interaction applications for controlling and interacting with environments. Most current approaches to sensor-based gesture recognition rely on vision, myography, and motion devices applied to robotic, medical, and industrial applications. In this work, we study the use of both vision and body-contact sensing for the automatic classification of a set of human gestures. To this end, two different approaches are evaluated: Feed-forward Neural Networks and Hidden Markov Models. These models are studied and implemented to recognize up to eight different human hand gestures commonly used in collaborative robotics tasks. Our tests demonstrate the effectiveness of combining information from two different kinds of devices for human gesture recognition, reaching accuracy rates of up to 95.05% on the full proposed hand-gesture set.
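
The abstract describes classifying gestures from fused vision and body-contact features with a feed-forward neural network. The following is a minimal sketch of that idea, assuming fusion by simple feature concatenation and a single hidden layer; the feature dimensions, layer sizes, and function names are illustrative assumptions, not the authors' actual architecture.

```python
# Sketch: feed-forward classifier over concatenated vision + body-contact features.
# Dimensions and the fusion-by-concatenation step are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_VISION, N_CONTACT, N_HIDDEN, N_CLASSES = 32, 8, 64, 8  # 8 hand-gesture classes

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Randomly initialised weights for one hidden layer (training loop omitted).
W1 = rng.normal(0, 0.1, (N_VISION + N_CONTACT, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def predict(vision_feats, contact_feats):
    """Concatenate the two sensor streams and run one forward pass."""
    x = np.concatenate([vision_feats, contact_feats], axis=1)
    h = np.tanh(x @ W1 + b1)        # hidden layer
    return softmax(h @ W2 + b2)     # class probabilities over the gesture set

# Example: one synthetic sample from each sensor modality.
probs = predict(rng.normal(size=(1, N_VISION)), rng.normal(size=(1, N_CONTACT)))
print("predicted gesture class:", int(probs.argmax()))
```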
