Abstract

Human-computer interaction (HCI) based on gesture recognition aims to recognize meaningful human expressions and has become a valuable and intuitive computer input technique. Hand gestures are among the most intuitive and common forms of communication and can convey a wide range of meanings. Vision-based hand gesture recognition has received significant research attention in recent years, yet the field still presents a number of challenges. In vision-based gesture interaction between humans and computers, gesture interpretation must be performed quickly and with high accuracy. In this paper, a low-cost HCI system based on hand gesture recognition is proposed. The system combines several vision techniques: skin and motion detection separates the region of interest from the background, a connected component labeling algorithm identifies the centroid of the detected object, and a convex hull algorithm removes the arm area so that the exact hand region is isolated. Moreover, a real-time demonstration system is developed based on a single-camera mechanism, which allows for the use of wearable devices. Simulation results show that the recognition rate remains high even when interference is introduced in the simulated environments.
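To make the described pipeline concrete, the following is a minimal sketch of its main stages (skin and motion segmentation, connected component labeling to find the centroid, and a convex hull over the detected blob) using OpenCV in Python. The YCrCb skin thresholds and the frame-differencing threshold are illustrative assumptions, not the authors' exact values, and the arm-removal step in the paper is only approximated here by the hull computation.

```python
import cv2
import numpy as np

def segment_hand(frame, prev_gray):
    """Sketch of the skin+motion ROI extraction, centroid, and hull steps."""
    # Skin detection in the YCrCb color space (illustrative thresholds).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Motion detection by differencing against the previous grayscale frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                           cv2.THRESH_BINARY)[1]

    # Region of interest: pixels that are both skin-colored and moving.
    mask = cv2.bitwise_and(skin, motion)

    # Connected component labeling: keep the largest blob and its centroid.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if num < 2:  # only the background component was found
        return None, None, gray
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    blob = np.uint8(labels == largest) * 255
    centroid = tuple(centroids[largest])

    # Convex hull of the blob contour; the paper uses the hull to help
    # trim the arm region and isolate the hand.
    contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hull = cv2.convexHull(contours[0])
    return hull, centroid, gray
```

In a real-time loop, each captured frame would be passed to this function together with the previous grayscale frame, and the returned hull and centroid would feed the subsequent gesture classification stage.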
