Abstract

Traditional human-computer interaction relies mainly on electromagnetic signals transmitted by peripheral devices such as the mouse, keyboard, and remote control. This paper builds a vision-based human-computer interaction system from a series of deep learning and machine vision models, so that complete human-computer interaction can be achieved with only a camera and a screen. The system implements the function modes of the three basic peripherals of human-computer interaction: keyboard, mouse (X-Y position indicator), and remote control. A convex hull method is used to switch between these three modes. After a mode command is issued, a Gaussian mixture model quickly segments the moving human body to narrow the scope of image processing. Finger detection within the body region is then realized with a Faster R-CNN ResNet-50 FPN model, and the mouse and keyboard functions are implemented through the relationships between different fingers. At the same time, human body posture is recognized with MediaPipe BlazePose, and action classification models built on the joint angles of body movements realize the remote-control function. To ensure the real-time performance of the interactive system, CPU and GPU computing resources are used to process images in an interleaved fashion, according to the characteristics of the different data-processing stages. The system's recognition accuracy is above 0.9 for the key feature points of the human body and above 0.87 for the four kinds of command actions. It is hoped that vision-based human-computer interaction will become a widely used interaction mode in the future.
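The abstract does not describe the convex hull method in detail; the sketch below shows one common way such mode switching is realized in OpenCV, counting extended fingers via convexity defects on a binarized hand mask and mapping the count to a mode. The depth threshold and the finger-to-mode mapping are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def count_fingers(hand_mask):
    """Estimate the number of extended fingers from a binary hand mask
    via convex hull defects; a sketch, not the paper's exact method."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    if defects is None:
        return 0
    # Deep defects correspond to the valleys between extended fingers.
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)  # depth threshold (px), assumed
    return deep + 1 if deep > 0 else 0

# Hypothetical mapping from finger count to interaction mode.
MODES = {1: "mouse", 2: "keyboard", 3: "remote"}
```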
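The Gaussian mixture step maps naturally onto OpenCV's MOG2 background subtractor. The following sketch, with assumed parameter values, extracts the largest moving region so that later stages only process that crop.

```python
import cv2

# Parameter values are illustrative; the paper does not specify them.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                varThreshold=25,
                                                detectShadows=False)

def moving_body_roi(frame):
    """Return the crop around the largest moving region, or None."""
    mask = subtractor.apply(frame)
    # Remove speckle noise before looking for the body contour.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return frame[y:y + h, x:x + w]
```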
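For the finger detector, torchvision provides a Faster R-CNN ResNet-50 FPN model out of the box. The sketch below shows how such a detector could be instantiated and queried; since the COCO weights have no finger class, the box-predictor head is resized here for a hypothetical two-class (background/finger) dataset on which the model would be fine-tuned, as the authors presumably did.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two classes (background + "finger") is an assumption about the dataset.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
model.eval()

@torch.no_grad()
def detect_fingers(image_tensor, score_thresh=0.7):
    """image_tensor: float tensor of shape (3, H, W) scaled to [0, 1]."""
    out = model([image_tensor])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep]
```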
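For the remote-control mode, joint angles can be read directly from MediaPipe BlazePose landmarks. This sketch computes a single elbow angle, the kind of feature the abstract says the action classification models are built on; the confidence threshold and the choice of joints are assumptions.

```python
import math
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    """Angle (degrees) at landmark b formed by landmarks a-b-c."""
    ba = np.array([a.x - b.x, a.y - b.y])
    bc = np.array([c.x - b.x, c.y - b.y])
    cos = ba.dot(bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return math.degrees(math.acos(np.clip(cos, -1.0, 1.0)))

with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    frame = cv2.imread("frame.jpg")  # placeholder input frame
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        lm = result.pose_landmarks.landmark
        left_elbow = joint_angle(lm[mp_pose.PoseLandmark.LEFT_SHOULDER],
                                 lm[mp_pose.PoseLandmark.LEFT_ELBOW],
                                 lm[mp_pose.PoseLandmark.LEFT_WRIST])
        # A classifier over a vector of such angles would map
        # body poses to the four command actions.
```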
