Abstract

Human-computer interaction (HCI) continues to move toward interfaces that are more natural and intuitive than the traditional keyboard and mouse, and hand gestures are an important modality in this shift [1]. Compared with many existing interfaces, hand gestures have the advantage of being easy to use, natural, and intuitive. Successful applications of hand gesture recognition include computer game control [2], human-robot interaction [3], and sign language recognition [4], to name a few. Vision-based recognition systems give computers the ability to understand and respond to hand gestures. This work proposes a real-time vision system for hand gesture recognition in visual interaction environments, built from general-purpose hardware and low-cost sensors, namely an ordinary personal computer and a USB web cam, so that any user can run it at the office or at home. Our approach rests on a fast segmentation process that extracts the moving hand from the whole image and copes with a wide range of hand shapes, backgrounds, and lighting conditions, followed by a recognition process that identifies the hand posture from the temporal sequence of segmented hands. A visual memory (a stored database) lets the system handle variation within a gesture and speeds up recognition by storing the variables associated with each gesture. In this paper, we have implemented a vision-based system that interprets a user's gestures in real time and, together with the Speech Application Programming Interface (SAPI), controls Windows OS actions such as single click, double click, forward, backward, and launching the media player, Notepad, and the calculator.
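
The abstract does not give implementation details, but to make the described pipeline concrete (webcam capture, segmentation of the moving hand, and posture lookup against a stored gesture database), the following is a minimal sketch in Python/OpenCV. The choice of OpenCV, the HSV skin-colour thresholds, and the Hu-moment shape matching are assumptions introduced purely for illustration; they are not the authors' actual method.

```python
# Illustrative sketch only: library choice (OpenCV), HSV thresholds, and the
# shape-matching step are assumptions, shown to convey the general pipeline:
# grab a frame, segment the hand, and match it against a stored "visual memory".
import cv2
import numpy as np

# Hypothetical HSV range for skin; a real system would adapt this to the user and lighting.
SKIN_LOWER = np.array([0, 48, 80], dtype=np.uint8)
SKIN_UPPER = np.array([20, 255, 255], dtype=np.uint8)

def segment_hand(frame):
    """Return a binary mask and the largest skin-coloured contour (the hand), if any."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOWER, SKIN_UPPER)
    # Morphological opening/closing to remove noise and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    return mask, max(contours, key=cv2.contourArea)

def classify_posture(contour, gesture_db):
    """Match the hand contour against stored templates using Hu-moment shape distance."""
    best_name, best_score = None, float("inf")
    for name, template in gesture_db.items():
        score = cv2.matchShapes(contour, template, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name

if __name__ == "__main__":
    gesture_db = {}  # placeholder "visual memory": gesture name -> template contour
    cap = cv2.VideoCapture(0)  # USB web cam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask, hand = segment_hand(frame)
        if hand is not None and cv2.contourArea(hand) > 2000:
            posture = classify_posture(hand, gesture_db)
            # A full system would map `posture` to an OS action (e.g. via SAPI or input events).
            cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)
        cv2.imshow("hand", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

In practice, `gesture_db` would be populated offline with one template contour (or a richer feature set) per gesture, which is what allows the lookup-based recognition to run quickly at frame rate.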
