Abstract

This work falls within the context of automatic gesture interpretation based on computer vision. The aim of our work is to transform a conventional screen into a surface that allows the user to employ their hands as pointing devices. The approach can be summarized in three main steps: detecting the hands in a video stream, tracking the detected hands, and converting the paths traced by the hands into computer commands. To realize this application, the hand to be tracked must first be detected, and a classification phase is essential for the command part. For this reason, we use a pattern matching method for detection and a neuro-fuzzy classifier for classification.
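To make the three-step pipeline concrete, the sketch below shows a minimal detection-and-tracking loop. It is only an illustration under stated assumptions: OpenCV template matching stands in for the pattern-matching detector described above, the template file name, match threshold, and webcam source are hypothetical, and the neuro-fuzzy classification stage that maps hand paths to commands is not reproduced.

```python
# Minimal sketch of the detect-track pipeline (illustrative assumptions only).
import cv2

TEMPLATE_PATH = "hand_template.png"   # hypothetical hand template image
MATCH_THRESHOLD = 0.7                 # assumed acceptance threshold

def detect_hand(frame_gray, template_gray):
    """Return the best-matching hand location (x, y, w, h), or None if no match."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < MATCH_THRESHOLD:
        return None
    h, w = template_gray.shape
    return (max_loc[0], max_loc[1], w, h)

def main():
    template = cv2.imread(TEMPLATE_PATH, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(0)          # default webcam as the video source
    hand_path = []                     # hand centre positions accumulated over frames
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        box = detect_hand(gray, template)
        if box is not None:
            x, y, w, h = box
            hand_path.append((x + w // 2, y + h // 2))  # path later mapped to commands
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("hand detection", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

In the full system, the accumulated `hand_path` would be fed to the classification stage (here, the paper's neuro-fuzzy classifier) to recognize the gesture and issue the corresponding computer command.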
