Abstract

Due to the wide application of human-computer interaction, gesture recognition technology has received increasing attention in recent years. At present, most common human-computer interaction systems do not support hand gesture interaction. Therefore, building on the functions of a traditional interactive system, this paper adds a depth-image capture device to make bare-hand interaction with the system possible. The interactive system consists of three modules: preprocessing, hand gesture detection and recognition, and tracking and interaction. The preprocessing module is responsible for detecting and locating the interaction area. The gesture detection and recognition module is responsible for detecting and identifying gestures that appear in the interaction area. The tracking and interaction module tracks user gestures with a Kalman filter and operates virtual hardware according to the gesture recognition result to realize the human-computer interaction function. The system has been tested in complex environments and shows robustness to lighting changes and cluttered backgrounds. By using CPU-GPU parallel computing, an average processing speed of more than 50 fps is achieved. The proposed system uses a Kinect camera and open-source Python tools for image and video processing. This paper establishes a gesture interaction system based on the Kinect camera and controls virtual hardware through gestures to realize real-time human-computer interaction against complex backgrounds.
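To make the tracking step concrete, the following is a minimal sketch of Kalman tracking of a detected hand centroid, in the spirit of the tracking and interaction module described above. It assumes a constant-velocity motion model and uses OpenCV's cv2.KalmanFilter; the matrix values, noise covariances, and the track_hand helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter over the 2-D hand centroid.
# State: [x, y, vx, vy]; measurement: [x, y] from the gesture detector.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # assumed noise levels
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track_hand(centroid):
    """Predict the next hand position, then correct with the detected centroid (if any)."""
    prediction = kf.predict()                       # a-priori estimate [x, y, vx, vy]
    if centroid is not None:                        # detection may fail in some frames
        measurement = np.array([[np.float32(centroid[0])],
                                [np.float32(centroid[1])]])
        kf.correct(measurement)                     # a-posteriori update
    return float(prediction[0, 0]), float(prediction[1, 0])   # smoothed (x, y)
```

In such a setup, the smoothed (x, y) position returned each frame would drive the virtual hardware (e.g., cursor movement), while missed detections are bridged by the filter's prediction.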
